CN112313950B - Video image component prediction method, device and computer storage medium
- Publication number
- CN112313950B (application number: CN201880094931.9A)
- Authority
- CN
- China
- Prior art keywords
- image component
- value
- pixel point
- reference value
- coding block
- Prior art date
- Legal status
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
Abstract
Embodiments of the present application disclose a method and a device for predicting video image components, and a computer storage medium. The method includes: acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block, where the first image component reconstruction value represents a reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block; determining model parameters according to the acquired first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value; and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the model parameters.
Description
Technical Field
Embodiments of the present application relate to the technical field of video encoding and decoding, and in particular to a method and a device for predicting video image components, and a computer storage medium.
Background
As people's requirements for video display quality have risen, new video application forms such as high-definition and ultra-high-definition video have emerged. As such high-resolution, high-quality video viewing becomes ever more widespread, the demands placed on video compression technology also keep increasing. H.265/High Efficiency Video Coding (HEVC) is the latest international video compression standard; its compression performance is improved by about 50% compared with the previous-generation video coding standard H.264/Advanced Video Coding (AVC), yet it still cannot satisfy the requirements of rapidly developing video applications, especially new video applications such as ultra-high definition and Virtual Reality (VR).
In 2015, the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group jointly established the Joint Video Exploration Team (JVET) to set forth the next-generation video coding standard. The Joint Exploration Test Model (JEM) is the common reference software platform on which the different coding tools are verified. In April 2018, JVET formally named the next-generation video coding standard Versatile Video Coding (VVC), with VTM as its corresponding test model. In the JEM and VTM reference software, a prediction method based on a linear model has been integrated, by which a chrominance component can be derived from a luminance component through the linear model. However, when the linear model is constructed, the accuracy of the calculated chrominance predicted value is low.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present application are expected to provide a method and a device for predicting video image components, and a computer storage medium, which can effectively improve the prediction accuracy of video image components, so that the predicted value of a video image component is closer to its original value, thereby saving coding rate.
The technical scheme of the embodiment of the application can be realized as follows:
in a first aspect, embodiments of the present application provide a method for predicting a video image component, the method including:
acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block;
determining model parameters according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the model parameters.
In a second aspect, embodiments of the present application provide a prediction apparatus for a video image component, where the prediction apparatus for a video image component includes: an acquisition section, a determination section, and a prediction section;
the acquisition part is configured to acquire a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block;
the determining part is configured to determine model parameters according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and the prediction part is configured to acquire a second image component predicted value corresponding to each pixel point in the coding block according to the model parameters.
In a third aspect, an embodiment of the present application provides a prediction apparatus for a video image component, where the prediction apparatus for a video image component includes: a memory and a processor;
the memory is used for storing a computer program capable of running on the processor;
the processor is configured to perform the steps of the method of the first aspect when the computer program is run.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a prediction program of a video image component, which when executed by at least one processor implements the steps of the method of the first aspect.
Embodiments of the present application provide a method and a device for predicting video image components, and a computer storage medium. A first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block are acquired, where the first image component reconstruction value represents a reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block; model parameters are determined according to the acquired first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameters. Because the determination of the model parameters in the embodiments of the present application considers not only the first image component adjacent reference value and the second image component adjacent reference value but also the first image component reconstruction value, the second image component predicted value can be made closer to the second image component original value, thereby effectively improving the prediction accuracy of video image components, making the predicted value of the video image component closer to its original value, and further saving coding rate.
Drawings
fig. 1A to 1C are schematic structural diagrams of video image sampling formats in the related art;
fig. 2A and 2B are schematic diagrams of the sampling of a first image component adjacent reference value and a second image component adjacent reference value of a coding block in the related art;
fig. 3A to 3C are schematic structural diagrams of a CCLM preset model in the related art;
fig. 4 is a schematic diagram of the grouping of first image component adjacent reference values and second image component adjacent reference values in the MMLM prediction mode in the related art;
fig. 5 is a schematic distribution diagram of each pixel point and adjacent reference pixels in a coding block according to an embodiment of the present application;
fig. 6 is a schematic block diagram of a video coding system according to an embodiment of the present application;
fig. 7 is a schematic block diagram of a video decoding system according to an embodiment of the present application;
fig. 8 is a flowchart of a method for predicting a video image component according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a prediction apparatus for video image components according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another apparatus for predicting video image components according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a prediction apparatus for video image components according to another embodiment of the present application;
fig. 12 is a schematic diagram of a specific hardware structure of a prediction apparatus for video image components according to an embodiment of the present application.
Detailed Description
For a more complete understanding of the features and technical content of the embodiments of the present application, reference should be made to the following detailed description of the embodiments of the present application, taken in conjunction with the accompanying drawings, which are for purposes of illustration only and not intended to limit the embodiments of the present application.
In video images, a first image component, a second image component and a third image component are typically employed to characterize a coding block; these three image components are a luminance component, a blue chrominance component and a red chrominance component, respectively. Specifically, the luminance component is generally represented by the symbol Y, the blue chrominance component by the symbol Cb, and the red chrominance component by the symbol Cr.
In the embodiments of the present application, the first image component may be the luminance component Y, the second image component may be the blue chrominance component Cb, and the third image component may be the red chrominance component Cr, although the embodiments of the present application are not particularly limited thereto. The sampling format commonly used at present is the YCbCr format, whose variants are shown in fig. 1A to 1C respectively, where a cross (×) in the drawings indicates a sampling point of the first image component, and a circle (○) indicates a sampling point of the second image component or the third image component. The YCbCr format includes:
4:4:4 format: as shown in fig. 1A, neither the second image component nor the third image component is downsampled; 4 samples of the first image component, 4 samples of the second image component and 4 samples of the third image component are taken for every 4 consecutive pixel points on each scan line;
4:2:2 format: as shown in fig. 1B, the second image component or the third image component is downsampled horizontally by 2:1 with respect to the first image component, without vertical downsampling; 4 samples of the first image component, 2 samples of the second image component and 2 samples of the third image component are taken for every 4 consecutive pixel points on each scan line;
4:2:0 format: as shown in fig. 1C, the second image component or the third image component is downsampled horizontally by 2:1 and vertically by 2:1 with respect to the first image component; 2 samples of the first image component, 1 sample of the second image component and 1 sample of the third image component are taken for every 2 consecutive pixel points on each horizontal scan line and each vertical scan line.
In the case where the video image adopts the YCbCr 4:2:0 format, if the first image component of the video image is a coding block of size 2N×2N, the corresponding second image component or third image component is a coding block of size N×N, where N is a side length of the coding block. In the embodiments of the present application, the 4:2:0 format is taken as an example for description below, but the technical solutions of the embodiments of the present application are equally applicable to other sampling formats.
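To make these size relationships concrete, the following is a minimal sketch (an illustrative helper of our own, not part of any codec) mapping a first image component block size to the corresponding second or third image component block size under each sampling format:

```python
# Hypothetical helper: second/third image component block size implied by
# the sampling format for a first image component block of size (h, w).
def chroma_block_size(h: int, w: int, sampling: str) -> tuple:
    if sampling == "4:4:4":    # no downsampling
        return h, w
    if sampling == "4:2:2":    # 2:1 horizontal downsampling only
        return h, w // 2
    if sampling == "4:2:0":    # 2:1 horizontal and 2:1 vertical downsampling
        return h // 2, w // 2
    raise ValueError(f"unknown sampling format: {sampling}")

# A 2N x 2N first image component block in 4:2:0 maps to an N x N block.
assert chroma_block_size(16, 16, "4:2:0") == (8, 8)
```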
In the next-generation video coding standard H.266, in order to further improve coding performance and coding efficiency, cross-component prediction (CCP) has been extended and improved, and cross-component linear model prediction (CCLM) has been proposed. In H.266, CCLM implements prediction from the first image component to the second image component, from the first image component to the third image component, and between the second image component and the third image component. The following description takes prediction from the first image component to the second image component as an example, but the technical solutions of the embodiments of the present application are equally applicable to prediction between other image components.
It will be appreciated that, in order to reduce redundancy between the first image component and the second image component, in the CCLM prediction mode the first image component and the second image component belong to the same coding block, and the second image component is predicted based on the first image component reconstruction values of that same coding block, for example using a preset model as shown in equation (1):

$$\mathrm{Pred}_C[i,j] = \alpha \cdot \mathrm{Rec}_Y[i,j] + \beta \qquad (1)$$

where i, j represent the position coordinates of a sampling point in the coding block, with i the horizontal direction and j the vertical direction; Pred_C[i,j] represents the second image component predicted value corresponding to the sampling point with position coordinates [i,j] in the coding block; Rec_Y[i,j] represents the (downsampled) first image component reconstruction value corresponding to the sampling point with position coordinates [i,j] in the same coding block; and α and β are the model parameters of the preset model, which can be derived by minimizing the regression error of the first image component adjacent reference values and the second image component adjacent reference values around the coding block, as calculated using equation (2):

$$\alpha = \frac{2N \sum_{n=1}^{2N} Y(n)\,C(n) - \sum_{n=1}^{2N} Y(n) \sum_{n=1}^{2N} C(n)}{2N \sum_{n=1}^{2N} Y(n)^2 - \left( \sum_{n=1}^{2N} Y(n) \right)^2}, \qquad \beta = \frac{\sum_{n=1}^{2N} C(n) - \alpha \sum_{n=1}^{2N} Y(n)}{2N} \qquad (2)$$

where Y(n) represents the downsampled first image component adjacent reference values on the left and upper sides, C(n) represents the second image component adjacent reference values on the left and upper sides, N is the side length of the second image component coding block, and n = 1, 2, …, 2N. Referring to fig. 2A and 2B, schematic diagrams of the sampling of the first image component adjacent reference values and the second image component adjacent reference values of a coding block in the related art are shown respectively. In fig. 2A, a bold larger box highlights the first image component coding block 21, and gray solid circles indicate the adjacent reference values Y(n) of the first image component coding block 21; in fig. 2B, a bold larger box highlights the second image component coding block 22, and gray solid circles indicate the adjacent reference values C(n) of the second image component coding block 22. Fig. 2A shows a first image component coding block 21 of size 2N×2N; for a video image in the 4:2:0 format, the size of the second image component coding block corresponding to a first image component coding block of size 2N×2N is N×N, as shown by 22 in fig. 2B. That is, fig. 2A and 2B are schematic diagrams of the coding blocks obtained by sampling the first image component and the second image component of the same coding block, respectively. Here, for a square coding block, equation (2) can be applied directly; for a non-square coding block, the adjacent samples of the longer edge are first downsampled to obtain a number of samples equal to that of the shorter edge. α and β do not need to be transmitted, since they can be calculated by equation (2) in the decoder; the embodiments of the present application are not particularly limited in this regard.
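Equation (2) is an ordinary least-squares fit, and can be sketched as follows (an illustration in Python with numpy; the function name, the flat-neighborhood fallback and the omission of downsampling and non-square handling are simplifications of ours, not the reference software):

```python
import numpy as np

def cclm_params(y_neighbors: np.ndarray, c_neighbors: np.ndarray):
    """Derive (alpha, beta) per equation (2) from the first image component
    adjacent reference values Y(n) and the co-located second image component
    adjacent reference values C(n)."""
    n = y_neighbors.size
    sum_y = y_neighbors.sum()
    sum_c = c_neighbors.sum()
    denom = n * (y_neighbors * y_neighbors).sum() - sum_y * sum_y
    if denom == 0:                       # flat neighborhood: fall back to DC
        return 0.0, sum_c / n
    alpha = (n * (y_neighbors * c_neighbors).sum() - sum_y * sum_c) / denom
    beta = (sum_c - alpha * sum_y) / n
    return alpha, beta

# Equation (1): predict the second image component from a luma reconstruction.
alpha, beta = cclm_params(np.array([60., 80., 100., 120.]),
                          np.array([40., 50., 60., 70.]))
pred_c = alpha * 90.0 + beta   # -> 55.0
```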
Fig. 3A to 3C show schematic structural diagrams of a CCLM preset model in the related art. As shown in fig. 3A to 3C, a, b and c are first image component adjacent reference values, and A, B and C are the corresponding second image component adjacent reference values; e is the first image component reconstruction value corresponding to a certain pixel point in the coding block, and E is the second image component predicted value corresponding to that pixel point. Using all the first image component adjacent reference values Y(n) and second image component adjacent reference values C(n) of the coding block, α and β can be calculated according to equation (2), and a preset model can be established from the calculated α and β according to equation (1), as shown in fig. 3C; substituting the first image component reconstruction value e corresponding to a certain pixel point in the coding block into the preset model shown in equation (1), the second image component predicted value E corresponding to that pixel point is obtained by calculation.
In JEM, there are currently two CCLM prediction modes: one is the single-model CCLM prediction mode; the other is the multi-model CCLM prediction mode, abbreviated MMLM. As the names suggest, the single-model CCLM prediction mode uses only one preset model to predict the second image component from the first image component, whereas the MMLM prediction mode uses multiple preset models to predict the second image component from the first image component. For example, in the MMLM prediction mode, the first image component adjacent reference values and the second image component adjacent reference values of the coding block are divided into two groups, and each group can be used separately as a training set for deriving the model parameters of a preset model, i.e. each group can derive a set of model parameters α and β. The first image component reconstruction values of the coding block can also be grouped according to the same classification rule applied to the first image component adjacent reference values, and the corresponding model parameters α and β are used respectively to establish the preset models.
Fig. 4 shows a schematic diagram of the grouping of first image component adjacent reference values and second image component adjacent reference values in the MMLM prediction mode in the related art. The threshold is a set value used to indicate how the multiple preset models are established, and its magnitude is obtained by averaging the first image component adjacent reference values Y(n). As can be seen from fig. 4, with the threshold (denoted Threshold) as the demarcation point, if a first image component adjacent reference value is less than or equal to the threshold, it is classified into the first group; if a first image component adjacent reference value is greater than the threshold, it is classified into the second group. Here, the model parameters α1 and β1 of the first preset model can be derived from the first image component adjacent reference values and second image component adjacent reference values of the first group, for example α1 = 2 and β1 = 1; the model parameters α2 and β2 of the second preset model can be derived from the first image component adjacent reference values and second image component adjacent reference values of the second group, for example α2 = 1/2 and β2 = -1. The first preset model M1 and the second preset model M2 thus established are shown in equation (3):

$$\mathrm{Pred}_{1C}[i,j] = \alpha_1 \cdot \mathrm{Rec}_Y[i,j] + \beta_1, \quad \mathrm{Rec}_Y[i,j] \le \mathrm{Threshold}$$
$$\mathrm{Pred}_{2C}[i,j] = \alpha_2 \cdot \mathrm{Rec}_Y[i,j] + \beta_2, \quad \mathrm{Rec}_Y[i,j] > \mathrm{Threshold} \qquad (3)$$

where Rec_Y[i,j] represents the first image component reconstruction value corresponding to the pixel point with position coordinates [i,j] in the coding block; Pred_1C[i,j] represents the second image component predicted value obtained for that pixel point according to the first preset model M1, and Pred_2C[i,j] represents the second image component predicted value obtained for that pixel point according to the second preset model M2.
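The grouping and per-group fit of the MMLM prediction mode can be sketched in the same style (again an illustration of ours; it reuses cclm_params from the sketch above, and the handling of an empty group is our assumption, since the description does not cover that corner case):

```python
import numpy as np

def mmlm_params(y_neighbors: np.ndarray, c_neighbors: np.ndarray):
    """Split the adjacent reference samples at the mean of Y(n) and derive
    one (alpha, beta) pair per group, in the spirit of the MMLM mode."""
    threshold = y_neighbors.mean()
    low = y_neighbors <= threshold
    params = []
    for mask in (low, ~low):
        if mask.any():
            params.append(cclm_params(y_neighbors[mask], c_neighbors[mask]))
        else:                # all neighbors equal: duplicate the first model
            params.append(params[0])
    return threshold, params

def mmlm_predict(rec_y: np.ndarray, threshold: float, params) -> np.ndarray:
    """Equation (3): choose model M1 or M2 per pixel by the threshold."""
    (a1, b1), (a2, b2) = params
    return np.where(rec_y <= threshold, a1 * rec_y + b1, a2 * rec_y + b2)
```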
In the related art scheme, the model parameters α and β of the preset model are calculated using the first image component adjacent reference values and the second image component adjacent reference values of the coding block; specifically, α and β are obtained by minimizing the regression error of the first image component adjacent reference values and the second image component adjacent reference values, as shown in equation (2) above.
However, the texture of an image tends to change across different spatial regions, and the distribution characteristics of pixels differ between regions; for example, some pixel points have high luminance while others have low luminance. If the model parameters of the preset model are constructed simply from the adjacent reference pixels, the constructed model parameters are not optimal because too little is taken into account, so the second image component predicted values obtained from the preset model are not accurate enough.
In the embodiments of the present application, the prediction method for video image components proposes constructing the model parameters of the preset model based on the first image component reconstruction values and second image component temporary values of the coding block, where a second image component temporary value is obtained according to the degree of similarity between a first image component reconstruction value and the first image component adjacent reference values of the coding block, so that the constructed model parameters are as close to the optimal model parameters as possible. Referring to fig. 5, a schematic distribution diagram of the pixel points in a coding block and the adjacent reference pixels according to an embodiment of the present application is shown; as shown in fig. 5, the adjacent reference pixels of the coding block are mainly of high luminance, while the pixel points within the coding block are mainly of medium-to-low luminance. The embodiments of the present application consider not only the first image component adjacent reference values and second image component adjacent reference values corresponding to the adjacent reference pixels, but also the degree of similarity between the first image component reconstruction values and the first image component adjacent reference values, so that the constructed model parameters are as close to the optimal model parameters as possible. In the embodiments of the present application, the first image component reconstruction values of the coding block participate not only in the application of the preset model but also in the calculation of the model parameters, so the second image component predicted values obtained by the embodiments of the present application are even closer to the second image component original values. The technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
Referring to fig. 6, an example of a block diagram of a video coding system is shown. As shown in fig. 6, the video coding system 600 includes transform and quantization 601, intra estimation 602, intra prediction 603, motion compensation 604, motion estimation 605, inverse transform and inverse quantization 606, filter control analysis 607, deblocking filtering and sample adaptive offset (SAO) filtering 608, header information coding and context-based adaptive binary arithmetic coding (CABAC) 609, and decoded image buffer 610. For an input original video signal, video coding blocks can be obtained through coding tree unit (CTU) division; each video coding block is then processed by transform and quantization 601, which includes transforming the residual information from the pixel domain to the transform domain and quantizing the resulting transform coefficients to further reduce the bit rate. Intra estimation 602 and intra prediction 603 are used to intra-predict the video coding block; in particular, they determine the intra prediction mode to be used to encode the video coding block. Motion compensation 604 and motion estimation 605 are used to perform inter-prediction coding of the received video coding block relative to one or more blocks in one or more reference frames to provide temporal prediction; the motion estimation performed by motion estimation 605 is the process of generating a motion vector that estimates the motion of the video coding block, after which motion compensation is performed by motion compensation 604 based on the motion vector determined by motion estimation 605. After determining the intra prediction mode, intra prediction 603 also provides the selected intra prediction data to header information coding and CABAC 609, and motion estimation 605 likewise sends the computed motion vector data to header information coding and CABAC 609. In addition, inverse transform and inverse quantization 606 is used for reconstruction of the video coding block: a residual block is reconstructed in the pixel domain, blocking artifacts of the reconstructed residual block are removed through filter control analysis 607 and deblocking and SAO filtering 608, and the reconstructed residual block is then added to a predictive block in the frame stored in decoded image buffer 610 to generate a reconstructed video coding block. Header information coding and CABAC 609 is used to encode the quantized transform coefficients; in a CABAC-based coding algorithm, the context may be based on neighboring coding blocks, and it may also be used to encode information indicating the determined intra prediction mode and to output the code stream of the video signal. Decoded image buffer 610 is used to store the reconstructed video coding blocks; as video coding proceeds, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are all stored in decoded image buffer 610.
Referring to fig. 7, an example of a block diagram of a video decoding system is shown. As shown in fig. 7, the video decoding system 700 includes header information decoding and CABAC decoding 701, inverse transform and inverse quantization 702, intra prediction 703, motion compensation 704, deblocking filtering and SAO filtering 705, and decoded image buffer 706. After the input video signal passes through the encoding process of fig. 6, the code stream of the video signal is output; the code stream is input into the video decoding system 700 and first passes through header information decoding and CABAC decoding 701 to obtain the decoded transform coefficients. The transform coefficients are processed by inverse transform and inverse quantization 702 to generate a residual block in the pixel domain. Intra prediction 703 may be used to generate prediction data for the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture. Motion compensation 704 determines prediction information for the video decoding block by parsing the motion vector and other associated syntax elements, and uses the prediction information to generate the predictive block of the video decoding block being decoded. A decoded video block is formed by summing the residual block from inverse transform and inverse quantization 702 with the corresponding predictive block generated by motion compensation 704. The decoded video signal passes through deblocking filtering and SAO filtering 705 to remove blocking artifacts, which improves video quality. The decoded video blocks are then stored in decoded image buffer 706, which stores reference images for subsequent motion compensation and also stores the video signal for output, i.e. the restored original video signal.
The embodiments of the present application are mainly applied to the intra prediction 603 part shown in fig. 6 and the intra prediction 703 part shown in fig. 7; that is, the embodiments of the present application may act on the encoding system and the decoding system simultaneously, which is not particularly limited.
Based on the application scenario examples of fig. 6 or fig. 7, referring to fig. 8, a video image component prediction method provided in an embodiment of the present application is shown, where the method may include:
S801: acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block;
S802: determining model parameters according to the acquired first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value;
S803: obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the model parameters.
It should be noted that, the coding block is a current coding block to be subjected to the second image component prediction or the third image component prediction; the first image component reconstruction value is used for representing a reconstruction value of a first image component corresponding to at least one pixel point in the coding block, the first image component adjacent reference value is used for representing a reference value of the first image component corresponding to the coding block adjacent reference pixel point, and the second image component adjacent reference value is used for representing a reference value of a second image component corresponding to the coding block adjacent reference pixel point.
In the technical solution shown in fig. 8, a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block are acquired, where the first image component reconstruction value represents a reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value respectively represent a reference value of the first image component and a reference value of the second image component corresponding to each adjacent pixel point among the adjacent reference pixels of the coding block; model parameters are determined according to the acquired first image component reconstruction value, first image component adjacent reference value and second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameters. Because the determination of the model parameters considers not only the first image component adjacent reference value and the second image component adjacent reference value but also the first image component reconstruction value, the second image component predicted value is closer to the second image component original value, which can effectively improve the prediction accuracy of video image components, make the predicted value of the video image component closer to its original value, and further save coding rate.
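Before the steps are discussed in detail, the overall S801 to S803 flow can be outlined as follows (a sketch of ours, not the reference software; cclm_params is the least-squares helper sketched earlier, and the construction of the temporary values C'[i,j] is left to a pluggable function whose concrete variants, closest-pixel matching and interpolation, are sketched later in this description):

```python
import numpy as np

def predict_second_component(rec_y_block, y_neighbors, c_neighbors, temp_value_fn):
    """Outline of S801-S803 (illustrative names only).

    S801: inputs are Rec_Y[i, j] for the coding block plus the adjacent
          reference values Y(n) and C(n) of both image components.
    S802: build the temporary values C'[i, j] with temp_value_fn, then fit
          the model parameters on (Rec_Y, C') by least squares.
    S803: apply the preset model of equation (1) form to every pixel point.
    """
    c_temp = temp_value_fn(rec_y_block, y_neighbors, c_neighbors)
    alpha, beta = cclm_params(rec_y_block.ravel(), c_temp.ravel())
    return alpha * rec_y_block + beta
```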
It can be understood that the optimal model parameters of the preset model can be derived from the perspective of mathematical theory. For the second image component predicted value corresponding to each pixel point in the coding block, the prediction residual between the second image component original value and the second image component predicted value is generally expected to be as small as possible, so the optimization objective function is:

$$\min_{\alpha,\beta} \sum_{i,j} \left( C[i,j] - \mathrm{Pred}_C[i,j] \right)^2 \qquad (4)$$

where i and j represent the position coordinates of a pixel point in the coding block, with i the horizontal direction and j the vertical direction; Rec_Y[i,j] is the first image component reconstruction value corresponding to the pixel point with position coordinates [i,j] in the coding block; C[i,j] is the second image component original value corresponding to that pixel point; and Pred_C[i,j] is the second image component predicted value corresponding to that pixel point. Here, the optimal model parameters α_opt and β_opt of the preset model can be obtained by the least squares method, as shown in equation (5):

$$\alpha_{\mathrm{opt}} = \frac{M \sum_{i,j} \mathrm{Rec}_Y[i,j]\,C[i,j] - \sum_{i,j} \mathrm{Rec}_Y[i,j] \sum_{i,j} C[i,j]}{M \sum_{i,j} \mathrm{Rec}_Y[i,j]^2 - \left( \sum_{i,j} \mathrm{Rec}_Y[i,j] \right)^2}, \qquad \beta_{\mathrm{opt}} = \frac{\sum_{i,j} C[i,j] - \alpha_{\mathrm{opt}} \sum_{i,j} \mathrm{Rec}_Y[i,j]}{M} \qquad (5)$$

where M = N×N is the number of pixel points in the coding block. As can be seen from equation (5), by performing linear regression using the N×N first image component reconstruction values Rec_Y[i,j] and second image component original values C[i,j] within the coding block, the optimal model parameters α_opt and β_opt under the minimum mean square error criterion can be obtained.
However, since the optimal model parameters α_opt and β_opt are calculated using the second image component original values of the coding block, and these original values are not available at the decoding end, α_opt and β_opt would have to be transmitted, which introduces additional bit overhead. Moreover, the model parameters α_opt and β_opt have large absolute magnitudes and a wide span, which may make transmitting them uneconomical in terms of prediction performance. Therefore, in the embodiments of the present application, second image component temporary values are constructed based on the degree of similarity between the first image component reconstruction values and the first image component adjacent reference values of the coding block, and the constructed second image component temporary values are used in place of the second image component original values corresponding to the pixel points in the coding block; in this case, the optimal model parameters can be obtained without introducing extra bit overhead.
Based on the solution shown in fig. 8, in a possible implementation manner, the determining a model parameter according to the acquired first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value includes:
acquiring a second image component temporary value according to the first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value; wherein the second image component temporary value characterizes a temporary value of the second image component corresponding to at least one pixel point of the encoded block;
And determining model parameters according to the first image component reconstruction value and the acquired second image component temporary value.
It should be noted that, in the embodiment of the present application, the determination of the model parameter not only considers the first image component adjacent reference value and the second image component adjacent reference value, but also considers the first image component reconstruction value; according to the similarity between the first image component reconstruction value and the first image component adjacent reference value, a matched pixel point of each pixel point in the coding block can be obtained, and the second image component temporary value is obtained according to the second image component adjacent reference value corresponding to the matched pixel point; that is, the second image component temporary value is obtained based on a second image component adjacent reference value corresponding to a matching pixel point of the at least one pixel point of the coding block in the coding block adjacent reference pixels.
It will be appreciated that for the acquisition of the second image component temporary value, in one possible implementation, the acquiring the second image component temporary value from the first image component reconstruction value, the first image component neighboring reference value and the second image component neighboring reference value includes:
For each pixel point of the coding block, performing difference value calculation on any one of the adjacent reference values of the first image component and a first image component reconstruction value corresponding to each pixel point;
obtaining a matched pixel point of each pixel point from adjacent reference pixels of the coding block according to the difference value calculation result;
and taking the second image component adjacent reference value corresponding to the matched pixel point as a second image component temporary value corresponding to each pixel point.
Optionally, when only 1 neighboring pixel point corresponding to the first image component neighboring reference value with the smallest difference is obtained according to the result of the difference calculation, in the above implementation manner, specifically, the obtaining, according to the result of the difference calculation, the matching pixel point of each pixel point from the neighboring reference pixels of the coding block includes:
acquiring the adjacent pixel point corresponding to the first image component adjacent reference value with the smallest difference according to the result of the difference calculation;
and taking the adjacent pixel point as the matched pixel point of each pixel point.
Optionally, when the number of adjacent pixels corresponding to the first image component adjacent reference value with the smallest difference is multiple according to the result of the difference calculation, in the above implementation manner, specifically, the obtaining, according to the result of the difference calculation, the matching pixel point of each pixel point from the adjacent reference pixels of the coding block includes:
acquiring the set of adjacent pixel points corresponding to the first image component adjacent reference value with the smallest difference according to the result of the difference calculation;
and calculating a distance value between each adjacent pixel point in the adjacent pixel point set and each pixel point, and selecting the adjacent pixel point with the minimum distance value as a matched pixel point of each pixel point.
In the CCLM prediction mode, for the pixel point with position coordinates [i,j] in the coding block, assume that the corresponding first image component reconstruction value Rec_Y[i,j] has been obtained. The first image component adjacent reference value closest to Rec_Y[i,j], i.e. the one with the smallest difference from it, is searched for among the first image component adjacent reference values of the coding block. If only 1 adjacent pixel point corresponds to the closest first image component adjacent reference value found, that adjacent pixel point is the matched pixel point of the pixel point with position coordinates [i,j]. If multiple adjacent pixel points correspond to the closest first image component adjacent reference value found, i.e. a set of adjacent pixel points is obtained, then the distance value between each adjacent pixel point in the set and the current pixel point needs to be calculated, and the adjacent pixel point with the smallest distance value is selected as the matched pixel point of the pixel point with position coordinates [i,j]. After the matched pixel point is obtained, the second image component adjacent reference value corresponding to the matched pixel point can be taken as the second image component temporary value corresponding to the pixel point with position coordinates [i,j], denoted by C'[i,j]. When the second image component temporary values of all pixel points of the coding block have been found, C'[i,j] can be used in place of C[i,j] to calculate the model parameters, and the model parameters are used to construct the second image component predicted values. By this method, a set of values relatively close to the second image component original values corresponding to the coding block can be constructed.
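A minimal sketch of this closest-pixel matching construction follows (an illustration of ours; the representation of the adjacent reference pixels as parallel arrays plus a coordinate list, and the squared-Euclidean tie-break, are assumptions of ours):

```python
import numpy as np

def temporary_values_nearest(rec_y_block, y_neighbors, c_neighbors, positions):
    """For each pixel point, find the adjacent reference pixel whose first
    image component reference value is closest to Rec_Y[i, j]; its second
    image component reference value becomes the temporary value C'[i, j].
    `positions` holds the [i, j] coordinates of the adjacent reference
    pixels and is used to break ties by spatial distance."""
    h, w = rec_y_block.shape
    c_temp = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            diff = np.abs(y_neighbors - rec_y_block[i, j])
            candidates = np.flatnonzero(diff == diff.min())
            if candidates.size > 1:   # several equally close: pick the nearest
                d = [(positions[k][0] - i) ** 2 + (positions[k][1] - j) ** 2
                     for k in candidates]
                best = candidates[int(np.argmin(d))]
            else:
                best = candidates[0]
            c_temp[i, j] = c_neighbors[best]
    return c_temp
```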
It will be appreciated that, for the acquisition of the second image component temporary value, the second image component temporary value may be obtained by a construction method such as interpolation, in addition to the second image component temporary value obtained by the matching method using the closest pixel as described above; thus, in another possible implementation, the obtaining a second image component temporary value from the first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value includes:
for each pixel point of the coding block, performing difference value calculation on any one of the adjacent reference values of the first image component and a first image component reconstruction value corresponding to each pixel point;
According to the result of the difference calculation, a first matched pixel point and a second matched pixel point of each pixel point are obtained from adjacent reference pixels of the coding block; the first matching pixel points represent adjacent pixel points corresponding to a first image component adjacent reference value which is larger than the first image component reconstruction value and has the smallest difference value in the first image component adjacent reference values, and the second matching pixel points represent adjacent pixel points corresponding to a first image component adjacent reference value which is smaller than the first image component reconstruction value and has the smallest difference value in the first image component adjacent reference values;
and carrying out interpolation operation according to the second image component adjacent reference value corresponding to the first matched pixel point and the second image component adjacent reference value corresponding to the second matched pixel point to obtain a second image component temporary value corresponding to each pixel point.
In the CCLM prediction mode, for the pixel point with position coordinates [i,j] in the coding block, assume that the corresponding first image component reconstruction value Rec_Y[i,j] has been obtained. Based on the principle that the first image component adjacent reference value should be closest to the first image component reconstruction value, one would like to find among the first image component adjacent reference values a desired value Y_C equal to Rec_Y[i,j]; but the desired Y_C cannot be obtained directly from the first image component adjacent reference values. In this case, a search can be made among the first image component adjacent reference values of the coding block. First, the first image component adjacent reference value Y_1 that is greater than Rec_Y[i,j] and closest to it is obtained; the adjacent pixel point corresponding to Y_1 is the first matched pixel point, and the second image component adjacent reference value corresponding to the first matched pixel point is C_1. Then, the first image component adjacent reference value Y_2 that is smaller than Rec_Y[i,j] and closest to it is obtained; the adjacent pixel point corresponding to Y_2 is the second matched pixel point, and the second image component adjacent reference value corresponding to the second matched pixel point is C_2. That is, the first matched pixel point and the second matched pixel point found by the search are those of the pixel point with position coordinates [i,j]. Interpolation can now be performed using Y_C, Y_1, C_1, Y_2 and C_2; assuming here that Y_C = Rec_Y[i,j], the second image component adjacent reference value corresponding to Y_C is obtained as C_C = C_2 + (Y_C - Y_2) × (C_1 - C_2) / (Y_1 - Y_2), and this value can be taken as the second image component temporary value corresponding to the pixel point with position coordinates [i,j], denoted by C'[i,j]. When the second image component temporary values of all pixel points of the coding block have been found, C'[i,j] can be used in place of C[i,j] to calculate the model parameters, and the model parameters are used to construct the second image component predicted values. By this method, a set of values relatively close to the second image component original values corresponding to the coding block can be constructed.
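The interpolation construction can be sketched analogously (an illustration of ours; the fallback to the single nearest reference when Rec_Y[i,j] is not bracketed by the adjacent reference values is our assumption, as the description does not spell out that corner case):

```python
import numpy as np

def temporary_values_interp(rec_y_block, y_neighbors, c_neighbors):
    """For each pixel point, bracket Y_C = Rec_Y[i, j] by the closest adjacent
    reference values Y_1 > Y_C and Y_2 < Y_C, then interpolate:
        C_C = C_2 + (Y_C - Y_2) * (C_1 - C_2) / (Y_1 - Y_2)."""
    c_temp = np.empty(rec_y_block.shape, dtype=float)
    for idx, y_c in np.ndenumerate(rec_y_block):
        above = y_neighbors > y_c
        below = y_neighbors < y_c
        if above.any() and below.any():
            k1 = int(np.where(above, y_neighbors, np.inf).argmin())   # Y_1
            k2 = int(np.where(below, y_neighbors, -np.inf).argmax())  # Y_2
            y1, c1 = y_neighbors[k1], c_neighbors[k1]
            y2, c2 = y_neighbors[k2], c_neighbors[k2]
            c_temp[idx] = c2 + (y_c - y2) * (c1 - c2) / (y1 - y2)
        else:            # y_c outside (or equal to) the reference range
            k = int(np.abs(y_neighbors - y_c).argmin())
            c_temp[idx] = c_neighbors[k]
    return c_temp
```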
In the embodiments of the present application, the second image component temporary value may be obtained not only by the closest-pixel matching method but also by the interpolation method, and the two may even be combined, with the closest-pixel matching method used for some pixel points and the interpolation method for others; the embodiments of the present application are not particularly limited in this regard.
In the embodiments of the present application, for the acquisition of the second image component temporary value, the search range for the matched pixel point may also be enlarged or reduced. For example, the search range may be restricted so that the row and column differences between an adjacent pixel position and the coordinate position of the pixel point whose second image component temporary value is to be determined are each no more than n, where n is an integer greater than 1; the search range may also be expanded to the pixel position information of m adjacent rows and/or m adjacent columns, where m is an integer greater than 1; pixel position information of other coded blocks in the lower-left or upper-right region may even be used. The embodiments of the present application are not particularly limited in this regard.
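One reading of this search-range restriction, as a hedged sketch (the parallel-array representation of neighbor positions is our assumption):

```python
import numpy as np

def window_mask(positions, i, j, n):
    """Restrict candidate adjacent reference pixels to those whose row and
    column offsets from pixel [i, j] are both at most n (one reading of the
    search-range restriction described above; n is an integer greater than 1)."""
    pos = np.asarray(positions)
    return (np.abs(pos[:, 0] - i) <= n) & (np.abs(pos[:, 1] - j) <= n)
```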
It will be appreciated that after all the temporary values of the second image component corresponding to the encoded blocks are obtained, model parameters may be determined; in a possible implementation manner, the determining a model parameter according to the acquired first image component reconstruction value and the acquired second image component temporary value includes:
inputting the first image component reconstruction value and the second image component temporary value into a first preset factor calculation model to obtain a first model parameter;
and inputting the first model parameter, the first image component reconstruction value and the second image component temporary value into a second preset factor calculation model to obtain a second model parameter.
It should be noted that the model parameters include a first model parameter and a second model parameter. After all the second image component temporary values corresponding to the coding block are obtained, linear regression is still performed using the least squares method, so that the first model parameter α' and the second model parameter β' of the preset model can be obtained as follows:

$$\alpha' = \frac{M \sum_{i,j} \mathrm{Rec}_Y[i,j]\,C'[i,j] - \sum_{i,j} \mathrm{Rec}_Y[i,j] \sum_{i,j} C'[i,j]}{M \sum_{i,j} \mathrm{Rec}_Y[i,j]^2 - \left( \sum_{i,j} \mathrm{Rec}_Y[i,j] \right)^2}, \qquad \beta' = \frac{\sum_{i,j} C'[i,j] - \alpha' \sum_{i,j} \mathrm{Rec}_Y[i,j]}{M} \qquad (6)$$

that is, equation (5) with the second image component original values C[i,j] replaced by the temporary values C'[i,j].
it can be understood that after the first model parameter and the second model parameter are obtained, a second image component predicted value corresponding to each pixel point in the coding block can be obtained according to the established preset model; therefore, in the above implementation manner, specifically, the obtaining, according to the model parameter, the second image component predicted value corresponding to each pixel point in the coding block includes:
Establishing a preset model based on the first model parameter and the second model parameter; the preset model is used for representing a calculation relation between a first image component reconstruction value and a second image component prediction value corresponding to each pixel point in the coding block;
and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the preset model and the first image component reconstructed value corresponding to each pixel point in the coding block.
It should be noted that, according to the obtained first model parameter α' and second model parameter β', the established preset model is shown in equation (7):

$$\mathrm{Pred}_C[i,j] = \alpha' \cdot \mathrm{Rec}_Y[i,j] + \beta' \qquad (7)$$

For the pixel point with position coordinates [i,j], the first image component reconstruction value Rec_Y[i,j] that has been obtained is substituted into the preset model described in equation (7) above, so that the second image component predicted value Pred_C[i,j] corresponding to the pixel point with position coordinates [i,j] is obtained.
In the embodiments of the present application, the second image component temporary values are constructed to replace the second image component original values corresponding to the coding block; the first model parameter and the second model parameter are then calculated from the first image component reconstruction values and the second image component temporary values, and are used to establish the preset model, so that the deviation between the established preset model and the expected model is smaller. As a result, the second image component predicted values obtained from the preset model are closer to the second image component original values, and the prediction accuracy of the video image component is improved.
In this embodiment of the present application, in addition to obtaining the second image component predicted value from the first model parameter, the second model parameter, and the established preset model, the obtained second image component temporary value may be used directly as the second image component predicted value; the embodiment of the present application is not particularly limited in this respect. If the second image component temporary value is used directly as the second image component predicted value, neither the first model parameter nor the second model parameter needs to be calculated and no preset model needs to be established, so the calculation amount of the second image component prediction is greatly reduced.
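To make the derivation and the shortcut above concrete, the following is a minimal Python sketch, assuming the sampled first image component reconstruction values and the constructed second image component temporary values are available as equal-length sequences; all names (fit_model_params, predict_second_component, rec_luma, tmp_chroma) are illustrative and not taken from the patent:

```python
def fit_model_params(rec_luma, tmp_chroma):
    """Least-squares fit of the first model parameter (alpha) and the
    second model parameter (beta) over M sampled pixel points."""
    m = len(rec_luma)
    sum_l = sum(rec_luma)
    sum_c = sum(tmp_chroma)
    sum_ll = sum(l * l for l in rec_luma)
    sum_lc = sum(l * c for l, c in zip(rec_luma, tmp_chroma))
    denom = m * sum_ll - sum_l * sum_l
    if denom == 0:  # flat first component: fall back to a zero-slope model
        return 0.0, sum_c / m
    alpha = (m * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / m
    return alpha, beta

def predict_second_component(rec_luma_block, alpha, beta):
    """Apply the preset model Pred_C[i, j] = alpha * Rec_L[i, j] + beta
    to every pixel point of the coding block."""
    return [[alpha * l + beta for l in row] for row in rec_luma_block]
```

If the temporary values are used directly as the predicted values, the fit step is skipped altogether, which is the reduced-computation variant described in the preceding paragraph.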
Based on the technical solution shown in fig. 8, in a possible implementation manner, the method further includes:
acquiring a second image component reconstruction value and a third image component adjacent reference value corresponding to the coding block; the second image component reconstruction value represents a second image component reconstruction value corresponding to at least one pixel point of the coding block, and the third image component adjacent reference value represents a third image component reference value corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
determining sub-model parameters according to the acquired second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value;
And acquiring a third image component predicted value corresponding to each pixel point in the coding block according to the sub-model parameters.
In the above implementation manner, specifically, the determining a submodel parameter according to the acquired second image component reconstruction value, the second image component neighboring reference value, and the third image component neighboring reference value includes:
acquiring a third image component temporary value according to the second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value; the third image component temporary value is obtained based on a third image component adjacent reference value corresponding to a matched pixel point of at least one pixel point of the coding block in the coding block adjacent reference pixel;
determining sub-model parameters according to the second image component reconstruction value and the acquired third image component temporary value.
In addition, in the embodiment of the present application, besides the prediction from the first image component to the second image component, prediction from the second image component to the third image component or from the third image component to the second image component may also be performed. Since the prediction from the third image component to the second image component is similar to the prediction from the second image component to the third image component, the embodiment of the present application is described below taking the prediction from the second image component to the third image component as an example.
Specifically, after the second image component reconstruction value of the coding block and the third image component adjacent reference value of the coding block are obtained, the same method used for determining the second image component temporary value is applied in combination with the second image component adjacent reference value (for example, the closest-pixel matching method, or the interpolation method), so that the third image component temporary value can be obtained. Here, the determined sub-model parameters include a first sub-model parameter and a second sub-model parameter based on the second image component reconstruction value and the third image component temporary value; a sub-preset model is established according to the first sub-model parameter and the second sub-model parameter; and a third image component predicted value corresponding to each pixel point in the coding block is obtained according to the sub-preset model and the second image component reconstruction value corresponding to each pixel point in the coding block.
In the present embodiment, given the second image component reconstruction value of the coding block and the third image component temporary value of the coding block, the first sub-model parameter α* and the second sub-model parameter β* in the sub-preset model are obtained as follows:
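Assuming notation analogous to the above (Rec_Cb(m) for the m-th sampled second image component reconstruction value, Cr't(m) for the corresponding third image component temporary value, and M for the number of samples), the sub-model parameters take the same least-squares form:

$$\alpha^{*}=\frac{M\sum_{m=1}^{M}\mathrm{Rec}_{Cb}(m)\,Cr'_t(m)-\sum_{m=1}^{M}\mathrm{Rec}_{Cb}(m)\sum_{m=1}^{M}Cr'_t(m)}{M\sum_{m=1}^{M}\mathrm{Rec}_{Cb}(m)^{2}-\left(\sum_{m=1}^{M}\mathrm{Rec}_{Cb}(m)\right)^{2}}, \qquad \beta^{*}=\frac{\sum_{m=1}^{M}Cr'_t(m)-\alpha^{*}\sum_{m=1}^{M}\mathrm{Rec}_{Cb}(m)}{M}$$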
After the first sub-model parameter α* and the second sub-model parameter β* are obtained, a sub-preset model can be established, as shown in formula (9):
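In the notation assumed above:

$$\mathrm{Pred}_{Cr}[i,j]=\alpha^{*}\cdot \mathrm{Rec}_{Cb}[i,j]+\beta^{*} \tag{9}$$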
Thus, for a pixel point whose position coordinates are [i, j], the second image component reconstruction value that has already been obtained is combined with the sub-preset model described in the above formula (9) to obtain the third image component predicted value Pred_Cr[i, j] corresponding to the pixel point at position coordinates [i, j].
It can be understood that the prediction method applied to the CCLM (cross-component linear model) prediction mode is also applicable to the MMLM (multiple model linear model) prediction mode; as the name suggests, the MMLM prediction mode uses a plurality of preset models to predict the second image component from the first image component. Thus, based on the technical solution shown in fig. 8, in a possible implementation manner, the method further includes:
obtaining at least one threshold according to a first image component reconstruction value corresponding to at least one pixel point of the coding block;
grouping according to the comparison result of the first image component reconstruction value and the at least one threshold value to obtain at least two groups of first image component reconstruction values and second image component temporary values;
and determining model parameters according to each of the at least two groups of first image component reconstruction values and the second image component temporary values, and acquiring at least two groups of model parameters.
In the foregoing implementation manner, specifically, the obtaining, according to the model parameter, a second image component predicted value corresponding to each pixel point in the coding block includes:
establishing at least two preset models based on the acquired at least two groups of model parameters; wherein the at least two preset models have a corresponding relationship with the at least two sets of model parameters;
selecting a preset model corresponding to each pixel point in the coding block from the at least two preset models according to a comparison result of the first image component reconstruction value and the at least one threshold;
and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to a preset model corresponding to each pixel point in the coding block and the first image component reconstructed value.
It should be noted that the threshold is the classification basis for the first image component reconstruction values of the coding block, that is, a set value used to indicate the establishment of a plurality of preset models, and its size is related to the first image component reconstruction values corresponding to the coding block. Specifically, the threshold may be obtained by calculating the average value of the first image component reconstruction values corresponding to the coding block, or by calculating their median value; this is not particularly limited in the embodiment of the present application.
In the embodiment of the present application, it is assumed that the mean value Mean is calculated from the first image component reconstruction value corresponding to at least one pixel point of the coding block according to formula (10):
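In the notation assumed above:

$$\mathrm{Mean}=\frac{1}{M}\sum_{m=1}^{M}\mathrm{Rec}_L(m) \tag{10}$$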
wherein Mean represents the mean value of the first image component reconstruction values corresponding to the coding block, the summation term in formula (10) represents the sum of the first image component reconstruction values corresponding to the coding block, and M represents the number of samples of the first image component reconstruction values corresponding to the coding block.
After the Mean is calculated, it can be used directly as the threshold, and two preset models can be established with this threshold; it should be noted, however, that the embodiments of the present application are not limited to establishing only two preset models. Here, in this embodiment, the mean value Mean of the first image component reconstruction values corresponding to the coding block is taken as the threshold, that is, Threshold = Mean, and the first image component reconstruction values corresponding to the coding block are compared with the threshold to obtain two sets of first image component reconstruction values and second image component temporary values. From these two sets, the first model parameter α1' and second model parameter β1' of a first preset model and the first model parameter α2' and second model parameter β2' of a second preset model can be derived respectively; in combination with formula (11), a first preset model M1' and a second preset model M2' are established:
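In the notation assumed above, formula (11) is the two-branch model:

$$\mathrm{Pred}_C[i,j]=\begin{cases}\alpha_1'\cdot \mathrm{Rec}_L[i,j]+\beta_1', & \mathrm{Rec}_L[i,j]\le \mathrm{Threshold}\\ \alpha_2'\cdot \mathrm{Rec}_L[i,j]+\beta_2', & \mathrm{Rec}_L[i,j]> \mathrm{Threshold}\end{cases} \tag{11}$$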
After the first preset model M1' and the second preset model M2' are obtained, the first image component reconstruction value Rec_L[i, j] of each pixel point in the coding block is compared with Threshold. If Rec_L[i, j] ≤ Threshold, the first preset model M1' is selected, and the second image component predicted value Pred_1C[i, j] corresponding to the pixel point with position coordinates [i, j] in the coding block is obtained according to the first preset model; if Rec_L[i, j] > Threshold, the second preset model M2' is selected, and the second image component predicted value Pred_2C[i, j] corresponding to the pixel point with position coordinates [i, j] is obtained according to the second preset model.
In the embodiment of the application, after the mean value Mean is calculated from the first image component reconstruction values corresponding to the coding block, the first image component reconstruction values are compared with Mean, taking Mean as the demarcation point: a reconstruction value less than or equal to the threshold is placed in a first group, and a reconstruction value greater than the threshold is placed in a second group, yielding a first-group set and a second-group set of first image component reconstruction values. To simplify the establishment of the preset model, following the principle that two points determine one line, a median calculation may further be performed on the first-group set so that only one reconstruction value, corresponding to a single pixel point, remains in the first group; likewise, a median calculation is performed on the second-group set so that only one reconstruction value, corresponding to a single pixel point, remains in the second group. From the value corresponding to the single pixel point in the first group and the value corresponding to the single pixel point in the second group, the first image component reconstruction value and the second image component temporary value corresponding to these two pixel points can be obtained, so that the model parameters can be determined, a preset model can be established from the model parameters, and a second image component predicted value can be obtained from the established preset model; this prediction method can also greatly reduce the calculation amount of the second image component prediction.
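As an illustration of this two-model variant, the following Python sketch splits the samples around the mean of the first image component reconstruction values, fits one model per group, and predicts each pixel point with the model of its group; fit_model_params is the least-squares helper sketched earlier, and all names are illustrative:

```python
def fit_two_models(rec_luma, tmp_chroma):
    """Derive Threshold = Mean (formula (10)) and fit one (alpha, beta)
    pair per group, as in formula (11)."""
    threshold = sum(rec_luma) / len(rec_luma)
    group1 = [(l, c) for l, c in zip(rec_luma, tmp_chroma) if l <= threshold]
    group2 = [(l, c) for l, c in zip(rec_luma, tmp_chroma) if l > threshold]
    if not group1 or not group2:
        # degenerate split: fall back to a single model over all samples
        single = fit_model_params(rec_luma, tmp_chroma)
        return threshold, single, single
    m1 = fit_model_params([l for l, _ in group1], [c for _, c in group1])
    m2 = fit_model_params([l for l, _ in group2], [c for _, c in group2])
    return threshold, m1, m2

def predict_with_two_models(rec_l, threshold, m1, m2):
    """Select M1' or M2' by comparing the pixel's first image component
    reconstruction value with the threshold, then apply it."""
    alpha, beta = m1 if rec_l <= threshold else m2
    return alpha * rec_l + beta
```

The two-point simplification described above fits the same structure: after the median step, each group contributes a single pair of reconstruction and temporary values, and the model follows from the line through those two points.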
The embodiment provides a prediction method for video image components: a first image component reconstruction value, a first image component adjacent reference value, and a second image component adjacent reference value corresponding to a coding block are acquired, where the first image component reconstruction value represents the reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value represent the reference values of the first image component and the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block, respectively; model parameters are determined according to the acquired first image component reconstruction value, first image component adjacent reference value, and second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameters. In the embodiment of the application, the determination of the model parameters considers not only the first image component adjacent reference value and the second image component adjacent reference value but also the first image component reconstruction value, so that the second image component predicted value is closer to the second image component original value; this effectively improves the prediction accuracy of the video image component and further saves coding rate.
Based on the same inventive concept as the previous embodiments, referring to fig. 9, which illustrates the composition of a prediction apparatus 90 for video image components provided in an embodiment of the present application, the prediction apparatus 90 may include an acquisition section 901, a determination section 902, and a prediction section 903, wherein:
the acquiring section 901 is configured to acquire a first image component reconstruction value, a first image component neighboring reference value, and a second image component neighboring reference value corresponding to the encoding block; the first image component reconstruction value represents a reconstruction value of a first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value represent a reference value of a first image component and a reference value of a second image component corresponding to each adjacent pixel point in the coding block adjacent reference pixels respectively;
the determining part 902 is configured to determine model parameters according to the acquired first image component reconstruction value, the first image component neighboring reference value and the second image component neighboring reference value;
the prediction portion 903 is configured to obtain a second image component predicted value corresponding to each pixel in the coding block according to the model parameter.
In the above aspect, the obtaining section 901 is further configured to obtain a second image component temporary value according to the first image component reconstruction value, the first image component adjacent reference value, and the second image component adjacent reference value; wherein the second image component temporary value characterizes a temporary value of the second image component corresponding to at least one pixel point of the encoded block;
the determining part 902 is configured to determine model parameters based on the first image component reconstruction value and the acquired second image component temporary value.
In the above-described scheme, referring to fig. 10, the prediction apparatus 90 for video image components further includes a calculation section 904, wherein:
the calculating section 904 is configured to perform, for each pixel point of the coding block, a difference calculation between any one of the first image component adjacent reference values and the first image component reconstruction value corresponding to that pixel point;
the obtaining section 901 is further configured to obtain a matching pixel point of each pixel point from the adjacent reference pixels of the coding block according to the result of the difference calculation; and taking the second image component adjacent reference value corresponding to the matched pixel point as a second image component temporary value corresponding to each pixel point.
In the above aspect, the obtaining portion 901 is configured to obtain, according to a result of the difference calculation, an adjacent pixel point corresponding to a first image component adjacent reference value with a minimum difference; and taking the adjacent pixel points as matched pixel points of each pixel point.
In the above aspect, the obtaining portion 901 is configured to obtain, according to a result of the difference calculation, a set of adjacent pixels corresponding to a first image component adjacent reference value with a minimum difference;
the calculating part 904 is further configured to calculate a distance value between each adjacent pixel point in the adjacent pixel point set and each pixel point;
the obtaining portion 901 is further configured to select an adjacent pixel point with the smallest distance value as a matching pixel point of each pixel point.
In the above aspect, the calculating section 904 is configured to perform, for each pixel point of the coding block, a difference calculation between any one of the first image component adjacent reference values and the first image component reconstruction value corresponding to that pixel point;
the obtaining part 901 is further configured to obtain a first matching pixel point and a second matching pixel point of each pixel point from the adjacent reference pixels of the coding block according to the result of the difference calculation; the first matching pixel points represent adjacent pixel points corresponding to a first image component adjacent reference value which is larger than the first image component reconstruction value and has the smallest difference value in the first image component adjacent reference values, and the second matching pixel points represent adjacent pixel points corresponding to a first image component adjacent reference value which is smaller than the first image component reconstruction value and has the smallest difference value in the first image component adjacent reference values; and performing interpolation operation according to the second image component adjacent reference value corresponding to the first matched pixel point and the second image component adjacent reference value corresponding to the second matched pixel point to obtain a second image component temporary value corresponding to each pixel point.
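To make the two matching strategies performed by the calculation section 904 and the acquisition section 901 concrete, here is a small Python sketch, under the assumption that the adjacent reference pixels are given as pairs of (first image component reference, second image component reference); the helper names are illustrative:

```python
def tmp_by_closest_match(rec_l, neighbours):
    """Closest-pixel matching: take the second-component reference of the
    neighbour whose first-component reference is nearest to rec_l."""
    best = min(neighbours, key=lambda n: abs(n[0] - rec_l))
    return best[1]

def tmp_by_interpolation(rec_l, neighbours):
    """Interpolation: combine the first matching pixel point (smallest
    reference not below rec_l) and the second (largest reference not above)."""
    above = [n for n in neighbours if n[0] >= rec_l]
    below = [n for n in neighbours if n[0] <= rec_l]
    if not above or not below:
        # rec_l lies outside the reference range: fall back to closest match
        return tmp_by_closest_match(rec_l, neighbours)
    l1, c1 = min(above, key=lambda n: n[0] - rec_l)
    l2, c2 = min(below, key=lambda n: rec_l - n[0])
    if l1 == l2:
        return c1
    w = (rec_l - l2) / (l1 - l2)  # linear weight between the two matches
    return w * c1 + (1 - w) * c2
```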
In the above aspect, the model parameters include a first model parameter and a second model parameter, and the obtaining portion 901 is further configured to input the first image component reconstruction value and the second image component temporary value into a first preset factor calculation model, to obtain the first model parameter;
the obtaining portion 901 is further configured to input the first model parameter, the first image component reconstruction value, and the second image component temporary value into a second preset factor calculation model, and obtain the second model parameter.
In the above-described scheme, referring to fig. 11, the prediction apparatus 90 for video image components further includes an establishing section 905, wherein:
the establishing part 905 is configured to establish a preset model based on the first model parameter and the second model parameter; the preset model is used for representing a calculation relation between a first image component reconstruction value and a second image component prediction value corresponding to each pixel point in the coding block;
the prediction portion 903 is further configured to obtain a second image component predicted value corresponding to each pixel point in the coding block according to the preset model and the first image component reconstructed value corresponding to each pixel point in the coding block.
In the above aspect, the obtaining portion 901 is further configured to obtain a second image component reconstruction value and a third image component neighboring reference value corresponding to the encoding block; the second image component reconstruction value represents a second image component reconstruction value corresponding to at least one pixel point of the coding block, and the third image component adjacent reference value represents a third image component reference value corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
the determining part 902 is further configured to determine a submodel parameter according to the acquired second image component reconstruction value, second image component neighboring reference value and third image component neighboring reference value;
the prediction portion 903 is further configured to obtain a third image component predicted value corresponding to each pixel in the coding block according to the sub-model parameter.
In the above aspect, the acquiring section 901 is configured to acquire a third image component temporary value according to the second image component reconstruction value, the second image component neighboring reference value, and the third image component neighboring reference value; the third image component temporary value is obtained based on a third image component adjacent reference value corresponding to a matched pixel point of at least one pixel point of the coding block in the coding block adjacent reference pixel;
The determining part 902 is configured to determine sub-model parameters based on the second image component reconstruction value and the acquired third image component temporary value.
In the above aspect, the obtaining portion 901 is further configured to obtain at least one threshold according to a first image component reconstruction value corresponding to at least one pixel point of the coding block; grouping according to the comparison result of the first image component reconstruction value and the at least one threshold value to obtain at least two groups of first image component reconstruction values and second image component temporary values; and determining model parameters according to each of the at least two sets of first image component reconstruction values and the second image component temporary values, and obtaining at least two sets of model parameters.
In the above aspect, the establishing portion 905 is further configured to establish at least two preset models based on the obtained at least two sets of model parameters; wherein the at least two preset models have a corresponding relationship with the at least two sets of model parameters;
the predicting part 903 is further configured to select a preset model corresponding to each pixel point in the coding block from the at least two preset models according to a comparison result of the first image component reconstruction value and the at least one threshold; and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the preset model corresponding to each pixel point in the coding block and the first image component reconstructed value.
It will be appreciated that in this embodiment, a "part" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a unit, and it may be modular or non-modular.
In addition, each component in the present embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional modules.
If implemented in the form of a software functional module and not sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present embodiment may be embodied essentially or in part in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method described in the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiment provides a computer storage medium storing a prediction program of a video image component, which when executed by at least one processor implements the steps of the method described in the technical solution shown in fig. 8 above.
Based on the composition of the above video image component prediction apparatus 90 and the computer storage medium, referring to fig. 12, which shows a specific hardware structure of the video image component prediction apparatus 90 provided in the embodiment of the present application, the apparatus may include a network interface 1201, a memory 1202, and a processor 1203, with the various components coupled together by a bus system 1204. It will be appreciated that the bus system 1204 is used to enable connection and communication between these components; in addition to a data bus, the bus system 1204 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are labeled as the bus system 1204 in fig. 12. The network interface 1201 is configured to receive and send signals in the process of exchanging information with other external network elements;
a memory 1202 for storing a computer program capable of running on the processor 1203;
A processor 1203 configured to, when executing the computer program, perform:
acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of a first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value represent a reference value of a first image component and a reference value of a second image component corresponding to each adjacent pixel point in the coding block adjacent reference pixels respectively;
determining model parameters according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the model parameters.
It is to be appreciated that the memory 1202 in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 1202 of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
The processor 1203 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by software instructions in the processor 1203. The processor 1203 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as being performed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the memory 1202, and the processor 1203 reads the information in the memory 1202 and performs the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 1203 is further configured to execute the steps of the method for predicting a video image component in the technical solution shown in fig. 8 described above when executing the computer program.
It should be noted that: the technical solutions described in the embodiments of the present application may be arbitrarily combined without any conflict.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial applicability
In the embodiment of the application, a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to a coding block are obtained; the first image component reconstruction value represents the reconstruction value of the first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value represent the reference values of the first image component and the second image component corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block, respectively; model parameters are determined according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value; and a second image component predicted value corresponding to each pixel point in the coding block is acquired according to the model parameters. Therefore, the prediction accuracy of the video image component can be effectively improved, the video image component predicted value is brought closer to the video image component original value, and coding rate is further saved.
Claims (14)
1. A method of predicting a video image component, the method comprising:
acquiring a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of a first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value represent a reference value of a first image component and a reference value of a second image component corresponding to each adjacent pixel point in the coding block adjacent reference pixels respectively;
determining model parameters according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
acquiring a second image component predicted value corresponding to each pixel point in the coding block according to the model parameters;
wherein the determining a model parameter according to the acquired first image component reconstruction value, the first image component neighboring reference value and the second image component neighboring reference value comprises:
acquiring a second image component temporary value according to the first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value; wherein the second image component temporary value characterizes a temporary value of the second image component corresponding to at least one pixel point of the encoded block;
And determining model parameters according to the first image component reconstruction value and the acquired second image component temporary value.
2. The method of claim 1, wherein the obtaining a second image component temporary value from the first image component reconstruction value, the first image component neighbor reference value, and the second image component neighbor reference value comprises:
for each pixel point of the coding block, performing difference value calculation on any one of the adjacent reference values of the first image component and a first image component reconstruction value corresponding to each pixel point;
obtaining a matched pixel point of each pixel point from adjacent reference pixels of the coding block according to the difference value calculation result;
and taking the second image component adjacent reference value corresponding to the matched pixel point as a second image component temporary value corresponding to each pixel point.
3. The method according to claim 2, wherein the obtaining the matching pixel point of each pixel point from the adjacent reference pixels of the coding block according to the result of the difference calculation includes:
acquiring adjacent pixel points corresponding to the adjacent reference values of the first image components with the minimum difference according to the difference calculation result;
And taking the adjacent pixel points as matched pixel points of each pixel point.
4. The method according to claim 2, wherein the obtaining the matching pixel point of each pixel point from the adjacent reference pixels of the coding block according to the result of the difference calculation includes:
acquiring a neighboring pixel point set corresponding to a first image component neighboring reference value with the minimum difference value according to the difference value calculation result;
and calculating a distance value between each adjacent pixel point in the adjacent pixel point set and each pixel point, and selecting the adjacent pixel point with the minimum distance value as a matched pixel point of each pixel point.
5. The method of claim 1, wherein the obtaining a second image component temporary value from the first image component reconstruction value, the first image component neighbor reference value, and the second image component neighbor reference value comprises:
for each pixel point of the coding block, performing difference value calculation on any one of the adjacent reference values of the first image component and a first image component reconstruction value corresponding to each pixel point;
according to the result of the difference calculation, a first matched pixel point and a second matched pixel point of each pixel point are obtained from adjacent reference pixels of the coding block; the first matching pixel points represent adjacent pixel points corresponding to a first image component adjacent reference value which is larger than the first image component reconstruction value and has the smallest difference value in the first image component adjacent reference values, and the second matching pixel points represent adjacent pixel points corresponding to a first image component adjacent reference value which is smaller than the first image component reconstruction value and has the smallest difference value in the first image component adjacent reference values;
And carrying out interpolation operation according to the second image component adjacent reference value corresponding to the first matched pixel point and the second image component adjacent reference value corresponding to the second matched pixel point to obtain a second image component temporary value corresponding to each pixel point.
6. The method of claim 1, wherein the model parameters include a first model parameter and a second model parameter, the determining model parameters from the acquired first image component reconstruction values and the acquired second image component temporary values comprising:
inputting the first image component reconstruction value and the second image component temporary value into a first preset factor calculation model to obtain the first model parameter;
and inputting the first model parameter, the first image component reconstruction value and the second image component temporary value into a second preset factor calculation model to obtain the second model parameter.
7. The method according to claim 6, wherein the obtaining, according to the model parameter, the second image component predicted value corresponding to each pixel point in the coding block includes:
establishing a preset model based on the first model parameter and the second model parameter; the preset model is used for representing a calculation relation between a first image component reconstruction value and a second image component prediction value corresponding to each pixel point in the coding block;
And obtaining a second image component predicted value corresponding to each pixel point in the coding block according to the preset model and the first image component reconstructed value corresponding to each pixel point in the coding block.
8. The method of any one of claims 1 to 7, wherein the method further comprises:
acquiring a second image component reconstruction value and a third image component adjacent reference value corresponding to the coding block; the second image component reconstruction value represents a second image component reconstruction value corresponding to at least one pixel point of the coding block, and the third image component adjacent reference value represents a third image component reference value corresponding to each adjacent pixel point in the adjacent reference pixels of the coding block;
determining sub-model parameters according to the acquired second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value;
and acquiring a third image component predicted value corresponding to each pixel point in the coding block according to the sub-model parameters.
9. The method of claim 8, wherein the determining sub-model parameters from the acquired second image component reconstruction value, second image component neighboring reference value, and third image component neighboring reference value comprises:
Acquiring a third image component temporary value according to the second image component reconstruction value, the second image component adjacent reference value and the third image component adjacent reference value; the third image component temporary value is obtained based on a third image component adjacent reference value corresponding to a matched pixel point of at least one pixel point of the coding block in the coding block adjacent reference pixel;
determining sub-model parameters according to the second image component reconstruction value and the acquired third image component temporary value.
10. The method of any one of claims 1 to 7, wherein the method further comprises:
obtaining at least one threshold according to a first image component reconstruction value corresponding to at least one pixel point of the coding block;
grouping according to the comparison result of the first image component reconstruction value and the at least one threshold value to obtain at least two groups of first image component reconstruction values and second image component temporary values;
and determining model parameters according to each of the at least two groups of first image component reconstruction values and the second image component temporary values, and acquiring at least two groups of model parameters.
11. The method according to claim 10, wherein the obtaining, according to the model parameter, a second image component predicted value corresponding to each pixel point in the coding block includes:
Establishing at least two preset models based on the acquired at least two groups of model parameters; wherein the at least two preset models have a corresponding relationship with the at least two sets of model parameters;
selecting a preset model corresponding to each pixel point in the coding block from the at least two preset models according to a comparison result of the first image component reconstruction value and the at least one threshold;
and obtaining a second image component predicted value corresponding to each pixel point in the coding block according to a preset model corresponding to each pixel point in the coding block and the first image component reconstructed value.
12. A prediction apparatus of a video image component, the prediction apparatus of a video image component comprising: an acquisition section, a determination section, and a prediction section;
the acquisition part is configured to acquire a first image component reconstruction value, a first image component adjacent reference value and a second image component adjacent reference value corresponding to the coding block; the first image component reconstruction value represents a reconstruction value of a first image component corresponding to at least one pixel point of the coding block, and the first image component adjacent reference value and the second image component adjacent reference value represent a reference value of a first image component and a reference value of a second image component corresponding to each adjacent pixel point in the coding block adjacent reference pixels respectively;
The determining part is configured to determine model parameters according to the acquired first image component reconstruction value, the first image component adjacent reference value and the second image component adjacent reference value;
the prediction part is configured to obtain a second image component predicted value corresponding to each pixel point of the coding block according to the model parameters;
wherein the obtaining section is further configured to obtain a second image component temporary value from the first image component reconstruction value, the first image component neighboring reference value, and the second image component neighboring reference value; wherein the second image component temporary value characterizes a temporary value of the second image component corresponding to at least one pixel point of the encoded block;
the determining section is further configured to determine model parameters based on the first image component reconstruction value and the acquired second image component temporary value.
13. A prediction apparatus of a video image component, wherein the prediction apparatus of a video image component comprises: a memory and a processor;
the memory is used for storing a computer program capable of running on the processor;
the processor being adapted to perform the steps of the method of any of claims 1 to 11 when the computer program is run.
14. A computer storage medium storing a prediction program for a video image component, which when executed by at least one processor implements the steps of the method of any one of claims 1 to 11.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2018/107109 WO2020056767A1 (en) | 2018-09-21 | 2018-09-21 | Video image component prediction method and apparatus, and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112313950A CN112313950A (en) | 2021-02-02 |
CN112313950B true CN112313950B (en) | 2023-06-02 |
Family
ID=69888083
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880094931.9A Active CN112313950B (en) | 2018-09-21 | 2018-09-21 | Video image component prediction method, device and computer storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112313950B (en) |
WO (1) | WO2020056767A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112784830B (en) * | 2021-01-28 | 2024-08-27 | 联想(北京)有限公司 | Character recognition method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103379321A (en) * | 2012-04-16 | 2013-10-30 | 华为技术有限公司 | Prediction method and prediction device for video image component |
CN103533374A (en) * | 2012-07-06 | 2014-01-22 | 乐金电子(中国)研究开发中心有限公司 | Method and device for video encoding and decoding |
CN105306944A (en) * | 2015-11-30 | 2016-02-03 | 哈尔滨工业大学 | Chrominance component prediction method in hybrid video coding standard |
JP6005572B2 (en) * | 2013-03-28 | 2016-10-12 | Kddi株式会社 | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, and program |
CN106688237A (en) * | 2014-09-30 | 2017-05-17 | 凯迪迪爱通信技术有限公司 | Video coding device, video decoding device, video compression transmission system, video coding method, video decoding method and program |
CN107580222A (en) * | 2017-08-01 | 2018-01-12 | 北京交通大学 | An Image or Video Coding Method Based on Linear Model Prediction |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2495941B (en) * | 2011-10-25 | 2015-07-08 | Canon Kk | Method and apparatus for processing components of an image |
EP3198874A4 (en) * | 2014-10-28 | 2018-04-04 | MediaTek Singapore Pte Ltd. | Method of guided cross-component prediction for video coding |
CN107211121B (en) * | 2015-01-22 | 2020-10-23 | 联发科技(新加坡)私人有限公司 | Video encoding method and video decoding method |
US10652575B2 (en) * | 2016-09-15 | 2020-05-12 | Qualcomm Incorporated | Linear model chroma intra prediction for video coding |
- 2018-09-21: CN application CN201880094931.9A, patent CN112313950B (Active)
- 2018-09-21: WO application PCT/CN2018/107109, publication WO2020056767A1 (Application Filing)
Non-Patent Citations (2)
Title |
---|
Algorithm Description of Joint Exploration Test Model 5; Jianle Chen; Joint Video Exploration Team (JVET); 2017-01-20; full text *
Research on Efficient Intra Coding Techniques in Video Compression; Zhang Tao; China Doctoral Dissertations Full-text Database; 2018-01-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112313950A (en) | 2021-02-02 |
WO2020056767A1 (en) | 2020-03-26 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |