
CN114866777B - A decoding and encoding method and device thereof

Info

Publication number: CN114866777B
Application number: CN202210351116.4A
Authority: CN (China)
Prior art keywords: image block, motion information, current image, block, motion
Legal status: Active (granted)
Other versions: CN114866777A
Inventor: 陈方栋
Assignee (original and current): Hangzhou Hikvision Digital Technology Co Ltd
Priority: CN202210351116.4A
Classifications

All under H04N 19/00 (methods or arrangements for coding, decoding, compressing or decompressing digital video signals):

    • H04N 19/176 — adaptive coding where the coding unit is an image region that is a block, e.g. a macroblock
    • H04N 19/33 — hierarchical techniques, e.g. scalability, in the spatial domain
    • H04N 19/503 — predictive coding involving temporal prediction
    • H04N 19/513 — motion estimation or motion compensation: processing of motion vectors
    • H04N 19/52 — processing of motion vectors by predictive encoding

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Compression Or Coding Systems Of Tv Signals

Abstract


The present application provides a decoding and encoding method and device thereof, the method comprising: obtaining a motion model of a current image block; establishing a motion information list of the current image block according to the motion model; selecting candidate motion information from the motion information list; determining original motion information of the current image block according to the selected candidate motion information and difference information of the current image block; determining target motion information of the current image block according to the original motion information; and decoding the encoded bit stream according to the original motion information or the target motion information. The technical solution of the present application solves problems such as low prediction quality and prediction errors, and can improve decoding efficiency, reduce decoding delay, and improve encoding and decoding performance.

Description

Decoding and encoding method and equipment thereof
Technical Field
The present application relates to the field of video encoding and decoding technologies, and in particular, to a decoding and encoding method and apparatus thereof.
Background
To save space, video images are encoded before transmission. A complete video encoding method may include processes such as prediction, transform, quantization, entropy encoding, and filtering. Predictive coding includes intra-frame coding and inter-frame coding; inter-frame coding exploits the temporal correlation of video, using pixels of adjacent encoded images to predict the pixels of the current image, so as to effectively remove temporal redundancy in the video.
In inter-frame coding, a Motion Vector (MV) may be used to represent the relative displacement between the current image block of the current frame and a reference image block of a reference frame. For example, when there is a strong temporal correlation between video image A of the current frame and video image B of the reference frame, and image block A1 (the current image block) of video image A needs to be transmitted, a motion search may be performed in video image B to find the image block B1 (the reference image block) that best matches image block A1, and the relative displacement between image block A1 and image block B1, that is, the motion vector of image block A1, may be determined.
The encoding end may send the motion vector to the decoding end instead of sending the image block A1 to the decoding end, and the decoding end may obtain the image block A1 according to the motion vector and the image block B1. Obviously, since the number of bits occupied by the motion vector is smaller than that occupied by the image block A1, the above manner can save a large number of bits.
However, if video image A is divided into a large number of image blocks, transmitting the motion vector of each image block still occupies a relatively large number of bits. To further save bits, the motion vector of image block A1 may also be predicted using the spatial correlation between neighboring image blocks. For example, the motion vector of image block A2, which is adjacent to image block A1, may be determined as the motion vector of image block A1. Based on this, the encoding end may transmit the index value of image block A2 to the decoding end, and the decoding end may determine the motion vector of image block A2, that is, the motion vector of image block A1, based on the index value. Since the index value of image block A2 occupies fewer bits than the motion vector, this manner can further save bits.
However, since there may be a difference between the motion of image block A1 and the motion of image block A2, that is, the motion vector of image block A2 may not coincide with the motion vector of image block A1, determining the motion vector of image block A2 as the motion vector of image block A1 suffers from problems such as low prediction quality and erroneous prediction.
Disclosure of Invention
The application provides a decoding and encoding method and equipment thereof, which solve the problems of low prediction quality, prediction error and the like, can improve the decoding efficiency, reduce the decoding time delay and improve the encoding and decoding performance.
The application provides a decoding method, which is applied to a decoding end, and comprises the following steps:
Acquiring a motion model of a current image block;
establishing a motion information list of the current image block according to the motion model;
selecting alternative motion information from the motion information list; determining original motion information of the current image block according to the selected alternative motion information and the difference information of the current image block;
determining target motion information of the current image block according to the original motion information;
and decoding the coded bit stream according to the original motion information or the target motion information.
The application provides a coding method, which is applied to a coding end, and comprises the following steps:
Acquiring a motion model of a current image block;
establishing a motion information list of the current image block according to the motion model;
selecting alternative motion information from the motion information list; determining original motion information of the current image block according to the selected alternative motion information and the difference information of the current image block;
determining target motion information of the current image block according to the original motion information;
and encoding the current image block according to the original motion information or the target motion information.
The application provides a decoding end device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to perform the method steps described above.
The application provides a coding end device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to perform the method steps described above.
As can be seen from the above technical solutions, in the embodiments of the present application, the target motion information of the current image block may be determined according to the original motion information, and encoding/decoding may be performed according to the original motion information or the target motion information, rather than directly according to the original motion information alone. This improves the accuracy of the motion information, solves problems such as low prediction quality and prediction errors, and improves encoding/decoding performance. In addition, this method can improve encoding/decoding efficiency and reduce encoding/decoding delay.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from these drawings.
FIG. 1 is a flowchart of a decoding method in one embodiment of the application;
FIGS. 2A-2C are schematic diagrams of candidate image blocks in one embodiment of the application;
FIGS. 3A-3E are schematic diagrams of templates in one embodiment of the application;
FIG. 4 is a flowchart of a decoding method in one embodiment of the application;
FIG. 5 is a flowchart of an encoding method in one embodiment of the application;
FIG. 6 is a schematic diagram of a video encoding framework in one embodiment of the application;
FIG. 7 is a block diagram of a decoding apparatus in one embodiment of the application;
FIG. 8 is a block diagram of an encoding apparatus in one embodiment of the application;
FIG. 9 is a hardware configuration diagram of a decoding-side device in one embodiment of the application;
FIG. 10 is a hardware configuration diagram of an encoding-side device in one embodiment of the application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
The embodiment of the application provides a decoding and encoding method, which can relate to the following concepts:
Motion Vector (MV): in inter-frame coding, a motion vector is used to represent the relative displacement between the current image block of the current frame and a reference image block of a reference frame. For example, if there is a strong temporal correlation between video image A of the current frame and video image B of the reference frame, then when transmitting image block A1 (the current image block) of video image A, a motion search may be performed in video image B to find the image block B1 (the reference image block) that best matches image block A1, and the relative displacement between image block A1 and image block B1, that is, the motion vector of image block A1, is determined. Each divided image block has a corresponding motion vector to be transmitted to the decoding side; if the motion vector of each image block were independently encoded and transmitted, a considerable number of bits would be consumed, especially when the image is divided into a large number of small image blocks. To reduce the number of bits used for encoding motion vectors, the spatial correlation between neighboring image blocks may be exploited: the motion vector of the current image block to be encoded is predicted from the motion vectors of neighboring encoded image blocks, and then the prediction difference is encoded, which effectively reduces the number of bits representing the motion vector.
Motion information (Motion Information): since the motion vector indicates the positional offset of the current image block relative to a certain reference image block, index information of the reference frame image is required in addition to the motion vector in order to accurately identify the image block pointed to, i.e., to indicate which reference frame image is used. In video coding technology, a reference frame image list is generally established for the current frame image, and the reference frame image index information indicates which reference frame image in the list is adopted by the current image block. In addition, many coding techniques support multiple reference image lists, so an index value, which may be referred to as a reference direction, may also be used to indicate which reference image list is used. In video coding technology, motion-related information such as the motion vector, the reference frame index, and the reference direction may be collectively referred to as motion information.
Template (Template): in video coding techniques, the coding process is performed on a block-by-block basis, and the reconstruction information of surrounding coded blocks is available when coding the current block. The template refers to decoding information of a fixed shape around the current image block (adjacent region of the time domain or the space domain). At the encoding end and decoding end, the templates are identical, so that some operations performed at the encoding end using the templates can obtain completely consistent results at the decoding end, that is, information derived by the encoding end based on the templates can be recovered losslessly at the decoding end without transferring additional information, thereby further reducing the number of transmission bits.
Rate distortion principle (Rate-Distortion Optimized): there are two major indicators for evaluating coding efficiency: bit rate and PSNR (Peak Signal-to-Noise Ratio). The smaller the bit stream, the higher the compression rate; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a comprehensive evaluation of the two. For example, the cost corresponding to a mode is: J(mode) = D + λ × R, where D represents Distortion, usually measured by the SSE index, i.e., the sum of squared differences between the reconstructed image block and the source image block; λ is the Lagrangian multiplier; and R is the actual number of bits required for encoding the image block in this mode, including the bits required for encoding the mode information, motion information, residual, and so on.
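To make the cost formula concrete, the following is a minimal illustrative sketch (not part of the patent text) of rate-distortion-based mode selection; the function names and candidate representation are assumptions:

```python
# Sketch of rate-distortion-based mode selection: each candidate mode is
# pre-encoded to obtain its distortion D (SSE between the reconstructed
# block and the source block) and actual bit cost R, and the mode with
# the smallest cost J = D + lambda * R wins. Names are illustrative.

def rd_cost(sse, bits, lam):
    return sse + lam * bits  # J(mode) = D + lambda * R

def best_mode(candidates, lam):
    # candidates: iterable of (mode, sse, bits) tuples from pre-encoding
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]
```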
Intra prediction and inter prediction techniques: intra prediction refers to predictive coding using the reconstructed pixel values of spatially neighboring image blocks of the current image block (i.e., in the same frame as the current image block), while inter prediction refers to predictive coding using the reconstructed pixel values of temporally neighboring image blocks of the current image block (i.e., in a different frame from the current image block).
A CTU (Coding Tree Unit) is the maximum coding unit supported by the encoding end and the maximum decoding unit supported by the decoding end. Further, a frame of image may be divided into several disjoint CTUs, and each CTU may determine, based on the actual situation, whether to be further divided into smaller blocks.
The above-described decoding method and encoding method will be described in detail below with reference to several specific embodiments.
Example 1:
Referring to fig. 1, a flow chart of a decoding method may be applied to a decoding end, and may include:
Step 101, a motion model of the current image block is obtained. The motion model may include, but is not limited to: a 2-parameter motion model (e.g., a 2-parameter motion vector), a 4-parameter motion model (e.g., a 4-parameter affine model), a 6-parameter motion model (e.g., an affine model), and an 8-parameter motion model (e.g., a projection model).
And 102, establishing a motion information list of the current image block according to the motion model.
Step 103, selecting alternative motion information from the motion information list.
And step 104, determining the original motion information of the current image block according to the selected alternative motion information and the difference information of the current image block. Specifically, before step 104, the difference information of the current image block may be acquired, and then the original motion information of the current image block may be determined according to the alternative motion information and the difference information.
The difference information may be, for example, a motion information difference; the form of the difference information is not limited.
And step 105, determining target motion information of the current image block according to the original motion information. The target motion information may be motion information different from the original motion information.
And step 106, decoding the coded bit stream according to the original motion information or the target motion information.
As can be seen from the above technical solutions, in the embodiments of the present application, the target motion information of the current image block may be determined according to the original motion information, and decoding may be performed according to the original motion information or the target motion information, rather than directly according to the original motion information alone. This improves the accuracy of the motion information, solves problems such as low prediction quality and prediction errors, improves decoding performance and decoding efficiency, and reduces decoding delay.
Example 2:
For step 101, acquiring the motion model of the current image block may include, but is not limited to: acquiring the motion model of the current image block according to the mode information of the current image block. Specifically, the mode information may include size information of the current image block. Based on this, if the size information of the current image block is smaller than a preset size, the motion model of the current image block may be determined to be a 2-parameter motion model. If the size information of the current image block is not smaller than the preset size, the motion model of the current image block may be determined to be a 4-parameter motion model or a 6-parameter motion model (or, of course, a motion model with other parameters); alternatively, if the size information of the current image block is not smaller than the preset size, other manners may be adopted to determine the motion model of the current image block, for example, obtaining the motion model of the current image block through the first indication information in the encoded bitstream.
In another example, the motion model of the current image block is obtained, which may include, but is not limited to: acquiring first indication information from the coded bit stream, wherein the first indication information is used for indicating a motion model; then, a motion model of the current image block may be acquired according to the first indication information. For example, if the coding end adopts a 2-parameter motion model, the first indication information added by the coding end in the coded bit stream is a first identifier, if the coding end adopts a 4-parameter motion model, the first indication information added by the coding end in the coded bit stream is a second identifier, and so on. Based on the above, after the decoding end obtains the first indication information from the encoded bitstream, if the first indication information is the first identifier, it is determined that the motion model is a 2-parameter motion model, if the first indication information is the second identifier, it is determined that the motion model is a 4-parameter motion model, and so on.
For the encoding end, in order to determine the motion model of the current image block, the encoding end may obtain the motion model of the current image block according to the mode information of the current image block, in the same manner as described for the decoding end, which is not repeated here. In another example, the encoding end may pre-encode the bitstream sequentially using each motion model (e.g., the 2-parameter, 4-parameter, and 6-parameter motion models), determine the encoding performance of each motion model based on the rate-distortion principle, and then determine the motion model with the best encoding performance as the motion model of the current image block. Of course, the above are only two examples of determining the motion model; the encoding end may also determine the motion model in other manners, which will not be detailed here.
Further, after the encoding end determines the motion model of the current image block, the first indication information may be added to the encoded bitstream, where the first indication information is used to indicate the motion model of the current image block.
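As an illustration of Example 2, the following sketch combines the size rule and the first indication information to obtain the motion model; the preset size, identifier values, and fallback are hypothetical, not fixed by the patent:

```python
# Sketch of obtaining the motion model of the current image block from its
# size information and, when available, the first indication information
# parsed from the encoded bitstream. All constants are assumptions.

PRESET_SIZE = 16  # hypothetical preset size

def get_motion_model(width, height, first_indication=None):
    if width < PRESET_SIZE and height < PRESET_SIZE:
        return "2-parameter"  # small blocks: translational motion model
    if first_indication is not None:
        return {0: "2-parameter", 1: "4-parameter",
                2: "6-parameter", 3: "8-parameter"}.get(first_indication,
                                                        "4-parameter")
    return "4-parameter"  # hypothetical default for larger blocks
```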
Example 3:
Step 102, establishing a motion information list of the current image block according to the motion model may include:
Step 1021, determining a candidate image block corresponding to the current image block according to the motion model.
Step 1022, determining the alternative motion information of the current image block based on the motion information of the candidate image block. The motion information of the candidate image block includes a motion vector corresponding to a fixed reference frame, a motion direction, and the like.
Step 1023, adding the alternative motion information of the current image block to the motion information list.
Wherein a plurality of candidate image blocks corresponding to the current image block may be determined according to the motion model, and the candidate motion information of the current image block is obtained based on each candidate image block, such that the plurality of candidate motion information may be added to the motion information list, i.e., the motion information list may include a plurality of candidate motion information.
Example 4: when the motion model is a 2-parameter motion model, for step 1021, the candidate image block includes one or any combination of the following: an image block adjacent to the current image block in the current frame where the current image block is located; an image block not adjacent to the current image block in the current frame where the current image block is located; an image block in an adjacent frame of the current frame where the current image block is located, for example, the reference image block at the same position as the current image block in an adjacent frame of the current frame, and the image blocks adjacent to that reference image block.
For step 1022, the motion information of the candidate image block may be determined as the candidate motion information of the current image block, e.g., the motion information (decoded motion information) of the image block adjacent to the current image block in the current frame where the current image block is located is determined as the candidate motion information of the current image block; determining the motion information (decoded motion information) of an image block which is not adjacent to the current image block in the current frame of the current image block as the alternative motion information of the current image block; and determining the motion information (decoded motion information) of the image block in the adjacent frame of the current frame where the current image block is located as the alternative motion information of the current image block.
In addition, fine-tuning may be performed on the motion information of the candidate image block (for example, adding 1 to or subtracting 1 from the x component of the motion vector, or adding 1 to or subtracting 1 from the y component of the motion vector, without limitation) to obtain new motion information, and this new motion information may be determined as alternative motion information of the current image block. Default motion information may also be determined as alternative motion information of the current image block, such as a zero vector, a zero reference frame index, and a unidirectional motion direction.
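A minimal sketch of this fine-tuning, assuming motion vectors are (x, y) integer pairs; the candidate structure and default values are illustrative:

```python
# Sketch of the fine-tuning above: perturb a candidate's motion vector by
# +/-1 on each component to form new alternative motion information, plus
# a default zero-motion candidate. Representation is an assumption.

def fine_tuned_candidates(mv):
    x, y = mv
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

DEFAULT_MOTION_INFO = {"mv": (0, 0), "ref_idx": 0, "direction": "uni"}
```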
In one example, the motion information of the candidate image block may be original motion information of the candidate image block or target motion information of the candidate image block. For example, if the candidate image block is in the same decoding unit as the current image block, the candidate motion information of the current image block may be determined according to the original motion information of the candidate image block; and/or if the candidate image block is in the previous decoding unit of the current image block, determining the alternative motion information of the current image block according to the original motion information of the candidate image block.
For another example, when decoding the current image block, the target motion information of the candidate image block may be acquired, and the candidate motion information of the current image block may be determined according to the target motion information of the candidate image block. Specifically, if the target motion information of the candidate image block already exists, the target motion information of the candidate image block may be obtained, and the candidate motion information of the current image block may be determined according to the target motion information of the candidate image block (the determination mode of the target motion information of the candidate image block may be the same as the determination mode of the target motion information of the current image block, which is not described herein in detail); if the target motion information of the candidate image block does not exist, the original motion information of the candidate image block can be acquired, and the alternative motion information of the current image block is determined according to the original motion information of the candidate image block.
When the current image block is decoded, if the candidate image block adopts the motion information adjustment method (i.e., target motion information is obtained from original motion information) and the decoding process of the candidate image block has finished, the target motion information of the candidate image block can be obtained, and the target motion information of the candidate image block is determined as the alternative motion information of the current image block. If the decoding process of the candidate image block has not finished (for example, when the candidate image block is in the same decoding unit as the current image block, or in the previous decoding unit of the current image block, its decoding process may not have finished), the target motion information of the candidate image block cannot be obtained; therefore, the original motion information of the candidate image block is determined as the alternative motion information of the current image block, rather than the target motion information.
The decoding unit may be a maximum allowable decoding block, such as a maximum decoding unit (CTU).
In one example, when motion information of a candidate image block is finely tuned to obtain new motion information, original motion information of the candidate image block may be adjusted, and alternative motion information of the current image block may be determined according to the adjusted motion information (e.g., the adjusted motion information is determined as alternative motion information); and/or adjusting the target motion information of the candidate image block, and determining the alternative motion information of the current image block according to the adjusted motion information (such as determining the adjusted motion information as the alternative motion information).
The fine adjustment of the motion information of the candidate image block may include: subjecting the motion information of the candidate image block to transformation processing such as expansion/contraction in a predetermined direction, weighting, and the like; this is not limited.
For step 1023, an adding sequence of the alternative motion information of the current image block corresponding to the motion information of each candidate image block may be determined according to the positional relationship between each candidate image block and the current image block; further, each piece of alternative motion information may be added to the motion information list in turn according to the adding sequence until the number of pieces of alternative motion information in the motion information list reaches a preset number.
For example, the order of addition of the alternative motion information may be, in order: in the current frame of the current image block, the alternative motion information corresponding to the motion information of the image block adjacent to the current image block; in the current frame of the current image block, the alternative motion information corresponding to the motion information of the image block which is not adjacent to the current image block; candidate motion information corresponding to the motion information of the image block in the adjacent frame of the current frame where the current image block is located; motion information obtained after fine adjustment is performed based on the motion information of the candidate image block; default motion information.
Based on the above adding sequence, each piece of alternative motion information may be sequentially added to the motion information list. When a piece of alternative motion information is to be added, it may be compared with the alternative motion information already in the list: if it is the same as an existing entry, it is not added; if it is different from all existing entries, it may be added to the list, so as to prevent duplicate alternative motion information from occurring in the list as much as possible. This continues until the number of pieces of alternative motion information in the list reaches the preset number N, after which no more alternative motion information is added.
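The list construction just described can be sketched as follows, assuming the candidates arrive already in the specified adding sequence; names are illustrative:

```python
# Sketch of building the motion information list: duplicates are skipped
# and the list is capped at a preset number N.

def build_motion_info_list(ordered_candidates, n_max):
    motion_list = []
    for info in ordered_candidates:
        if len(motion_list) >= n_max:
            break
        if info not in motion_list:  # skip duplicate alternative motion info
            motion_list.append(info)
    return motion_list
```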
Example 5: the motion model is a non-2-parameter motion model, such as a motion model with 4 parameters, or a motion model with 6 parameters, or a motion model with 8 parameters. Then
For step 1021, candidate image blocks may include, but are not limited to: in the current frame of the current image block, the image block adjacent to the current image block; the motion model of the candidate image block may be the same as that of the current image block, and the candidate image block includes a plurality of sub-blocks, and different sub-blocks correspond to the same or different motion information. For example, if the motion model of the current image block is a 4-parameter motion model, an image block of the 4-parameter motion model may be selected from the neighboring image blocks, and the selected image block may be used as a candidate image block.
For step 1022, a number of pieces of motion information matching the parameters of the motion model may be selected from the motion information of the plurality of sub-blocks of the candidate image block, and the alternative motion information of the current image block may be determined according to the selected motion information. If the motion model is a 4-parameter motion model, the matching number is 2; if the motion model is a 6-parameter motion model, the matching number is 3; if the motion model is an 8-parameter motion model, the matching number is 4; and so on.
In one example, if the motion model is a motion model with 4 parameters, selecting the motion information matching the parameters of the motion model from the motion information of the plurality of sub-blocks of the candidate image block may include:
Mode one: select the motion information of an upper-left sub-block (e.g., the upper-left corner sub-block) and the motion information of an upper-right sub-block (e.g., the upper-right corner sub-block) of the candidate image block. Referring to FIG. 2A, image block A0, image block A1, image block B0, image block B1, and image block B2 are candidate image blocks, and the motion information of the upper-left sub-block and the motion information of the upper-right sub-block may be selected from the motion information of all sub-blocks of each candidate image block.
Mode two: for a candidate image block located on the upper side of the current image block, select the motion information of a lower-left sub-block (e.g., the lower-left corner sub-block) and the motion information of a lower-right sub-block (e.g., the lower-right corner sub-block) of the candidate image block; for a candidate image block not located on the upper side of the current image block, select the motion information of an upper-left sub-block (e.g., the upper-left corner sub-block) and the motion information of an upper-right sub-block (e.g., the upper-right corner sub-block) of the candidate image block.
Referring to fig. 2B, the image block B0, the image block B1, and the image block B2 are candidate image blocks located at the upper side of the current image block, and the motion information of the lower left sub-block and the motion information of the lower right sub-block may be selected from the motion information of all sub-blocks of the candidate image blocks. Referring to fig. 2B, the image block A0 and the image block A1 are candidate image blocks not located at the upper side of the current image block, and the motion information of the upper left sub-block and the motion information of the upper right sub-block may be selected from the motion information of all sub-blocks of the candidate image block. Wherein the above-described manner may be adopted when the candidate image block and the current image block are not in the same decoding unit (CTU).
Mode three: for a candidate image block located on the upper side of the current image block, select the motion information of a lower-left sub-block (e.g., the lower-left corner sub-block) and the motion information of a lower-right sub-block (e.g., the lower-right corner sub-block) of the candidate image block; for a candidate image block not located on the upper side of the current image block, select the motion information of an upper-right sub-block (e.g., the upper-right corner sub-block) and the motion information of a lower-right sub-block (e.g., the lower-right corner sub-block) of the candidate image block.
Referring to fig. 2C, the image block B0, the image block B1, and the image block B2 are candidate image blocks located at the upper side of the current image block, and the motion information of the lower left sub-block and the motion information of the lower right sub-block may be selected from the motion information of all sub-blocks of the candidate image blocks. Referring to fig. 2C, the image block A0 and the image block A1 are candidate image blocks not located at the upper side of the current image block, and the motion information of the upper right sub-block and the motion information of the lower right sub-block may be selected from the motion information of all sub-blocks of the candidate image block. Wherein the above-described manner may be adopted when the candidate image block and the current image block are not in the same decoding unit (CTU).
Of course, the selection of sub-blocks may also be performed in any combination of the above three ways.
In one example, if the motion model is a 6-parameter motion model, selecting motion information matching the parameters of the motion model from the motion information of the plurality of sub-blocks of the candidate image block may include: selecting the motion information of an upper-left sub-block (e.g., the upper-left corner sub-block), the motion information of an upper-right sub-block (e.g., the upper-right corner sub-block), and the motion information of a lower-left sub-block (e.g., the lower-left corner sub-block) of the candidate image block. If the motion model is an 8-parameter motion model, selecting motion information matching the parameters of the motion model may include: selecting the motion information of an upper-left sub-block (e.g., the upper-left corner sub-block), the motion information of an upper-right sub-block (e.g., the upper-right corner sub-block), the motion information of a lower-left sub-block (e.g., the lower-left corner sub-block), and the motion information of a lower-right sub-block (e.g., the lower-right corner sub-block) of the candidate image block. Of course, the above selections are only examples and are not limiting.
In one example, if the motion model is a 4-parameter motion model, determining the alternative motion information of the current image block according to the selected motion information may include: the 4-parameter affine transformation from a point (x, y) to its motion vector (v_x, v_y) can be expressed as: v_x = a·x − b·y + e; v_y = b·x + a·y + f, where (a, b, e, f) are the affine model parameters; these are the 4 parameters of the 4-parameter motion model in the above embodiment. Based on this, the four parameter values (a, b, e, f) may be determined using the motion information (i.e., motion vectors) of two sub-blocks of the candidate image block, and then used as the parameters of the motion model of the current image block.
Specifically, based on two sub-blocks of the candidate image block (taking image block B2 as an example), the 4 parameters of the current image block can be obtained as follows. For image block B2, let the motion information of the lower-left corner sub-block be (x0, y0) and the motion information of the lower-right corner sub-block be (x1, y1). If the width of image block B2 is W and the coordinates of the lower-left corner of image block B2 are (0, 0), then the coordinates of the lower-right corner of image block B2 are (W, 0).
Thus, the lower-left sub-block is moved from (0, 0) to (x0, y0), while the lower-right sub-block is moved from (W, 0) to (W + x1, y1), that is:
a·0 − b·0 + e = x0, b·0 + a·0 + f = y0;
a·W − b·0 + e = x1, b·W + a·0 + f = y1.
By solving the above equations, the four parameter values (a, b, e, f) can be obtained:
e = x0, f = y0, a = (x1 − x0)/W, b = (y1 − y0)/W.
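The derivation above can be expressed as a short sketch; the function names are illustrative, and the formulas follow directly from the solved equations:

```python
# Sketch of deriving the 4-parameter affine model (a, b, e, f) from the
# motion vectors of two sub-blocks located at (0, 0) and (W, 0).

def affine_4param(mv_lower_left, mv_lower_right, width):
    x0, y0 = mv_lower_left    # motion vector at (0, 0)
    x1, y1 = mv_lower_right   # motion vector at (W, 0)
    e, f = x0, y0
    a = (x1 - x0) / width
    b = (y1 - y0) / width
    return a, b, e, f

def motion_vector_at(params, x, y):
    # v_x = a*x - b*y + e ; v_y = b*x + a*y + f
    a, b, e, f = params
    return (a * x - b * y + e, b * x + a * y + f)
```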
Example 6: for step 103, selecting alternative motion information from the motion information list may include: selecting alternative motion information from the motion information list according to the motion information index; wherein the motion information index may be obtained from the encoded bitstream or the motion information index may be a default index (e.g., 1). For example, if the motion information index is 1, the first alternative motion information is selected from the motion information list.
In one example, the default index may be determined directly as the motion information index. In another example, second indication information may be obtained from the encoded bitstream, where the second indication information is used to indicate the motion information index. For example, if the encoding end adopts the 2nd piece of alternative motion information in the motion information list, the second indication information added by the encoding end in the encoded bitstream is 2, indicating the 2nd piece of alternative motion information in the motion information list, and so on. Based on this, after the decoding end obtains the second indication information from the encoded bitstream, if the second indication information is 2, the decoding end determines that the motion information index is 2, indicating the 2nd piece of alternative motion information.
For the encoding end, in order to determine the motion information index, the encoding end may directly determine the default index as the motion information index; or the coding end can pre-code the bit stream by using each piece of alternative motion information in the motion information list in turn, then determine the coding performance of each piece of alternative motion information based on the rate distortion principle, and then determine the position of the alternative motion information with the optimal coding performance in the motion information list as a motion information index. Of course, the above manner is merely two examples, and is not limited thereto.
Example 7: before step 104, the method may further include: obtaining the difference information of the current image block, specifically, selecting the difference information from a difference list according to a difference information index, where the difference list is configured with a plurality of pieces of difference information, and the difference information index is obtained from the encoded bitstream or is a default difference index. For example, if the difference information index is 1, the first piece of difference information is selected from the difference list.
The difference list may be empirically configured at the encoding end/decoding end, where the difference list is configured with a plurality of difference information, and the difference information may be 0, i.e. no difference, or may not be 0, which is not limited.
In one example, the default difference index may be determined directly as the difference information index. In another example, third indication information may be obtained from the encoded bitstream, the third indication information being used to indicate the difference information index. For example, if the encoding end adopts the 1 st difference information in the difference list, the third indication information added by the encoding end in the encoded bitstream is 1, which indicates the 1 st difference information in the difference list, and so on. Based on this, after the decoding end obtains the third indication information from the encoded bitstream, if the third indication information is 1, it is determined that the difference information index is 1, which indicates the 1 st difference information in the difference list.
For the encoding end, in order to determine the difference information index, the encoding end may directly determine the default difference index as the difference information index; or the encoding end can use each difference information in the difference list in turn to pre-encode the bit stream, then determine the encoding performance of each difference information based on the rate distortion principle, and then determine the position of the difference information with the optimal encoding performance in the difference list as the difference information index. Of course, the above-described determination method of the difference information index is merely two examples, and is not limited thereto.
Example 8: for step 104, determining the original motion information of the current image block according to the selected alternative motion information and the difference information of the current image block may include: determining the original motion information of the current image block according to the alternative motion information, a first weight, the difference information, and a second weight. Specifically, the original motion information of the current image block is determined by linearly weighting the alternative motion information and the difference information, for example: original motion information = a × alternative motion information + b × difference information, where a is the first weight and b is the second weight.
The values of a and b may be adjusted in real time or may be fixed values, and may be obtained through empirical configuration or training. Further, the values of a and b may be equal or unequal.
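A minimal sketch of this linear weighting, assuming the motion information is represented by its motion vector components; the weights and representation are assumptions:

```python
# Sketch of the linear weighting above: original motion information is a
# weighted combination of the selected alternative motion information and
# the difference information; a and b come from configuration or training.

def original_motion_info(candidate_mv, diff_mv, a=1.0, b=1.0):
    return (a * candidate_mv[0] + b * diff_mv[0],
            a * candidate_mv[1] + b * diff_mv[1])
```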
Example 9: for step 105, determining target motion information for the current image block from the original motion information may include, but is not limited to: step 1051, obtaining a template of a current image block; step 1052, searching for target motion information centered on the original motion information based on the template of the current image block.
In this embodiment, after the original motion information is obtained, it may not be used directly as the final motion information of the current image block; instead, target motion information different from the original motion information may be obtained according to the original motion information. Since the target motion information is closer to the actual motion of the current image block, it may be used as the final motion information of the current image block. Obviously, compared with using the original motion information as the final motion information, using the target motion information as the final motion information can alleviate problems such as low prediction quality and prediction errors.
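The patent does not fix a particular search pattern or matching cost for step 1052; the following sketch assumes a SAD (sum of absolute differences) cost over a small square window centered on the original motion vector, with all names illustrative:

```python
# Sketch of step 1052: search for the target motion vector in a small
# window centered on the original motion vector, scoring each candidate
# by the SAD between the current block's template and the corresponding
# template in the reference frame. Cost and range are assumptions.

def search_target_mv(template, ref_template_at, original_mv, search_range=2):
    def cost(mv):
        ref = ref_template_at(mv)  # reference-side template at offset mv
        return sum(abs(p - q) for p, q in zip(template, ref))  # SAD

    ox, oy = original_mv
    best_mv, best_cost = original_mv, cost(original_mv)
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            mv = (ox + dx, oy + dy)
            c = cost(mv)
            if c < best_cost:
                best_mv, best_cost = mv, c
    return best_mv
```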
Example 10: for step 1051, obtaining a template for a current image block may include:
In the first mode, the template of the current image block is obtained using the original motion information of the current image block. Specifically, the prediction value of the current image block may be determined using the original motion information (e.g., decoded motion information) of the current image block, and the template of the current image block may be obtained according to this prediction value, for example, by determining the prediction value of the current image block as the template of the current image block. In the decoding process, the image blocks are decoded one by one, so the decoded motion information of the current image block can be used to determine its prediction value. The prediction value may be the reconstruction information and/or the prediction information of the current image block: the reconstruction information may include a luminance value, a chrominance value, and the like, and the prediction information may be an intermediate value from which the reconstruction information can be obtained; for example, if the luminance value can be obtained using an intermediate value A, the intermediate value A is the prediction information. This is not limited.
In the second mode, the motion information of surrounding image blocks (spatial-domain or temporal-domain surrounding image blocks of the current image block, without limitation) is acquired, and the template of the current image block is obtained using the motion information of the surrounding image blocks. Specifically, the prediction values of the surrounding image blocks are determined using their motion information (e.g., decoded motion information), and the template of the current image block is obtained according to the prediction values of the surrounding image blocks, for example, by determining the prediction values of the surrounding image blocks as the template of the current image block.
When the current image block is decoded, the surrounding image blocks of the current image block have already been decoded, that is, the motion information of the surrounding image blocks is known. Therefore, the motion information of the surrounding image blocks, such as the motion vector and the reference frame index, may be obtained directly, and the prediction values of the surrounding image blocks may be determined using this motion information. The prediction value may be the reconstruction information and/or the prediction information of the surrounding image block: the reconstruction information may include a luminance value, a chrominance value, and the like, and the prediction information may be an intermediate value from which the reconstruction information can be obtained (for example, if the luminance value can be obtained using an intermediate value A, the intermediate value A is the prediction information; this is not limited). Then, the prediction values of the surrounding image blocks are determined as the template of the current image block.
If the surrounding image block adopts the motion information adjustment method (i.e. the target motion information is obtained through the original motion information), the motion information of the surrounding image block is the original motion information of the surrounding image block.
In the third mode, the motion information of the surrounding image blocks of the current image block is acquired, and the template of the current image block is obtained using this motion information. Specifically, when the motion information includes the motion vector and the reference frame index of a surrounding image block, the reference frame image corresponding to the surrounding image block is determined according to the reference frame index; the reference image block corresponding to the surrounding image block is then obtained from the reference frame image according to the motion vector, and the template of the current image block is obtained according to the reference image block.
If the surrounding image block adopts the motion information adjustment method (i.e. the target motion information is obtained through the original motion information), the motion information of the surrounding image block is the original motion information of the surrounding image block.
In one example, the surrounding image blocks may include M first surrounding image blocks and N second surrounding image blocks, M being a natural number greater than or equal to 1, N being a natural number greater than or equal to 0, or M being a natural number greater than or equal to 0, N being a natural number greater than or equal to 1; the first surrounding image block is a surrounding image block on the upper side of the current image block, and the second surrounding image block is a surrounding image block on the left side of the current image block.
The obtaining the template of the current image block using the motion information of surrounding image blocks of the current image block may include: determining a first template according to motion vector prediction modes and motion information of M first peripheral image blocks; determining a second template according to the motion vector prediction modes and the motion information of the N second surrounding image blocks; determining the first template as the template of the current image block; or determining the second template as the template of the current image block; or the first template and the second template are spliced and then are determined to be the template of the current image block.
For example, when M is a natural number greater than or equal to 1 and N is 0, the first template may be determined according to the motion vector prediction modes and motion information of the M first surrounding image blocks, and the first template may be determined as the template of the current image block. When N is a natural number greater than or equal to 1 and M is 0, the second template may be determined according to the motion vector prediction modes and motion information of the N second surrounding image blocks, and the second template may be determined as the template of the current image block. When M and N are both natural numbers greater than or equal to 1, the first template is determined according to the motion vector prediction modes and motion information of the M first surrounding image blocks, the second template is determined according to the motion vector prediction modes and motion information of the N second surrounding image blocks, and then the first template is determined as the template of the current image block; or the second template is determined as the template of the current image block; or the first template and the second template are spliced, and the result is determined as the template of the current image block.
The first surrounding image block includes adjacent image blocks and/or sub-adjacent image blocks on the upper side of the current image block; the prediction mode of an adjacent image block is an inter mode or an intra mode, and the prediction mode of a sub-adjacent image block is an inter mode. For example, the first surrounding image block may include at least one adjacent image block whose prediction mode is the inter mode, e.g., all adjacent image blocks on the upper side of the current image block, the first adjacent image block on the upper side, or any one or more adjacent image blocks on the upper side. In addition, if the adjacent image blocks on the upper side of the current image block are all in intra mode, the first surrounding image block may further include at least one sub-adjacent image block whose prediction mode is the inter mode, e.g., all sub-adjacent image blocks on the upper side of the current image block, the first sub-adjacent image block on the upper side, or any one or more sub-adjacent image blocks on the upper side. Further, if there is an intra-mode adjacent image block on the upper side of the current image block, the first surrounding image block may also include intra-mode adjacent image blocks, e.g., the first intra-mode adjacent image block on the upper side, all intra-mode adjacent image blocks on the upper side, and so on. Of course, the above is only an example of the first surrounding image block, and is not a limitation.
The second surrounding image block includes adjacent image blocks and/or sub-adjacent image blocks on the left side of the current image block; the prediction mode of an adjacent image block is an inter mode or an intra mode, and the prediction mode of a sub-adjacent image block is an inter mode. For example, the second surrounding image block may include at least one adjacent image block whose prediction mode is the inter mode, e.g., all adjacent image blocks on the left side of the current image block, the first adjacent image block on the left side, or any one or more adjacent image blocks on the left side. In addition, if the adjacent image blocks on the left side of the current image block are all in intra mode, the second surrounding image block may further include at least one sub-adjacent image block whose prediction mode is the inter mode, e.g., all sub-adjacent image blocks on the left side of the current image block, the first sub-adjacent image block on the left side, or any one or more sub-adjacent image blocks on the left side. Further, if there is an intra-mode adjacent image block on the left side of the current image block, the second surrounding image block may also include intra-mode adjacent image blocks, e.g., the first intra-mode adjacent image block on the left side, all intra-mode adjacent image blocks on the left side, and so on. Of course, the above is merely an example of the second surrounding image block, and is not limited thereto.
Wherein the adjacent image blocks of the current image block include, but are not limited to: spatially adjacent image blocks of the current image block (i.e., adjacent image blocks in the same frame of the video); or temporally adjacent image blocks of the current image block (i.e., adjacent image blocks in a different video frame). The sub-adjacent image blocks of the current image block include, but are not limited to: spatially sub-adjacent image blocks of the current image block (i.e., sub-adjacent image blocks in the same frame of the video); or temporally sub-adjacent image blocks of the current image block (i.e., sub-adjacent image blocks in a different video frame).
In one example, when M is greater than 1, the first template may include M sub-templates or P sub-templates and is formed by splicing the M or P sub-templates, where P may be the number of first surrounding image blocks in inter mode, P being less than or equal to M. For example, when the M first surrounding image blocks are all in inter mode, the first template includes M sub-templates and is spliced from the M sub-templates. When the M first surrounding image blocks include P surrounding image blocks in inter mode and M-P surrounding image blocks in intra mode, the first template may include M sub-templates (i.e., each surrounding image block corresponds to one sub-template) and be spliced from the M sub-templates, or the first template may include P sub-templates (i.e., the P sub-templates corresponding to the P inter-mode surrounding image blocks) and be spliced from the P sub-templates.
Further, when M is equal to 1, then the first template may include a first sub-template, which may be determined according to a motion vector prediction mode and motion information of a first surrounding image block on an upper side of the current image block; or the first sub-template may be determined according to the motion vector prediction mode and the motion information of any one of the surrounding image blocks on the upper side of the current image block. Wherein, since the first peripheral image block includes at least one adjacent image block or a sub-adjacent image block whose prediction mode is the inter mode, when M is equal to 1, the first template includes a first sub-template corresponding to the adjacent image block or the sub-adjacent image block of the inter mode.
The motion information may include the motion vectors and reference frame indexes of the first surrounding image blocks, and determining the first template according to the motion vector prediction modes and the motion information of the M first surrounding image blocks may include the following cases, illustrated by the sketch that follows the size description below:
In the first case, for the ith surrounding image block in the M first surrounding image blocks, when determining that the motion vector prediction mode of the ith surrounding image block is the inter mode, determining a reference frame image corresponding to the ith surrounding image block according to the reference frame index; further, a reference image block corresponding to the ith surrounding image block can be determined from the reference frame image according to the motion vector of the ith surrounding image block; wherein, the relative position offset of the reference image block and the ith surrounding image block can be matched with the motion vector of the ith surrounding image block; further, an image block with a size of a first transverse length and a first longitudinal length can be obtained as an ith sub-template included in the first template according to the determined reference image block.
In the second case, for the i-th peripheral image block of the M first peripheral image blocks, when it is determined that the motion vector prediction mode of the i-th peripheral image block is the intra mode, the i-th peripheral image block may be filled with a default value (for example, a default pixel value, which may be an empirically preconfigured luminance value). Further, based on the image blocks filled by the default values, the image blocks with the dimensions of the first transverse length and the first longitudinal length can be obtained as an ith sub-template included in the first template.
In the third case, for the ith surrounding image block in the M first surrounding image blocks, when determining that the motion vector prediction mode of the ith surrounding image block is the intra mode, determining a reference frame image corresponding to the ith surrounding image block according to the reference frame index corresponding to the ith surrounding image block; further, a reference image block corresponding to the ith surrounding image block can be determined from the reference frame image according to a motion vector corresponding to the ith surrounding image block, wherein the relative position offset of the reference image block and the ith surrounding image block is matched with (including equal or approximately equal to) the motion vector corresponding to the ith surrounding image block; further, an image block with a first transverse length and a first longitudinal length can be obtained as an ith sub-template included in the first template according to the determined reference image block; the reference frame index and the motion vector corresponding to the ith surrounding image block are the reference frame index and the motion vector of the adjacent image block of the ith surrounding image block.
The first lateral length and the lateral length of the first surrounding image block satisfy a first proportional relationship (e.g., 1:1, 1:2, 2:1, etc., without limitation), or the first lateral length and the lateral length of the current image block satisfy a second proportional relationship (e.g., 1:1, 1:2, 2:1, etc.), or the first lateral length is equal to a first preset length (configured empirically). The first longitudinal length and the longitudinal length of the first surrounding image block satisfy a third proportional relationship (e.g., 1:1, 1:2, 2:1, etc.), or the first longitudinal length and the longitudinal length of the current image block satisfy a fourth proportional relationship (e.g., 1:1, 1:2, 2:1, etc.), or the first longitudinal length is equal to a second preset length (i.e., a length configured empirically). Further, the first, second, third, and fourth proportional relationships may be the same or different, and the first preset length and the second preset length may be set to be the same or different.
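To make the three cases above concrete, the following is a minimal sketch of building the ith sub-template of the upper template. It is illustrative only: the frame layout, the names ref_frames, block and neighbor, the default fill value 128, and the choice of cropping region are assumptions for demonstration, not the patent's method.

```python
import numpy as np

DEFAULT_PIXEL = 128  # assumed empirically preconfigured luminance value (case two)

def build_upper_sub_template(ref_frames, block, tmpl_w, tmpl_h):
    """Build one (tmpl_h, tmpl_w) luminance sub-template of the first template.

    ref_frames: list of 2-D numpy arrays (reference frame images).
    block: dict with 'mode' ('inter'/'intra'), top-left 'x'/'y', 'mv' (dx, dy),
           'ref_idx', and optionally 'neighbor' (an adjacent inter block whose
           motion information is borrowed in case three).
    """
    def crop_displaced(ref_idx, mv):
        ref = ref_frames[ref_idx]
        x, y = block['x'] + mv[0], block['y'] + mv[1]
        # Crop a region of the displaced reference block; assumes coordinates
        # stay in bounds, and the exact region is an assumption of this sketch.
        return ref[y:y + tmpl_h, x:x + tmpl_w].copy()

    if block['mode'] == 'inter':
        # Case one: use the surrounding block's own motion vector / ref index.
        return crop_displaced(block['ref_idx'], block['mv'])
    if block.get('neighbor') is not None:
        # Case three: intra block, borrow the motion information of a neighbor.
        nb = block['neighbor']
        return crop_displaced(nb['ref_idx'], nb['mv'])
    # Case two: intra block without usable motion information, default fill.
    return np.full((tmpl_h, tmpl_w), DEFAULT_PIXEL, dtype=np.uint8)
```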
In one example, when N is greater than 1, the second template may include N sub-templates or R sub-templates, and is formed by stitching N sub-templates or R sub-templates, where R may be the number of second surrounding image blocks in the inter mode, and R is less than or equal to N. For example, when the N second surrounding image blocks are all surrounding image blocks in the inter mode, the second template includes N sub-templates, which are spliced by the N sub-templates. When the N second surrounding image blocks include R surrounding image blocks of inter mode and include N-R surrounding image blocks of intra mode, the second template may include N sub-templates (i.e., each surrounding image block corresponds to one sub-template) and be spliced by the N sub-templates, or the second template may include R sub-templates (i.e., R sub-templates corresponding to R surrounding image blocks of inter mode) and be spliced by the R sub-templates.
Further, when N is equal to 1, then the second template may include a second sub-template, which may be determined according to the motion vector prediction mode and the motion information of the first surrounding image block to the left of the current image block; or the second sub-template may be determined based on the motion vector prediction mode and motion information of any one of the surrounding image blocks to the left of the current image block. Wherein, since the second surrounding image block includes at least one neighboring image block or sub-neighboring image block whose prediction mode is the inter mode, when N is equal to 1, the second template includes a second sub-template corresponding to the neighboring image block or sub-neighboring image block of the inter mode.
The motion information may include a motion vector and a reference frame index of the second surrounding image block, and determining the second template according to the motion vector prediction modes and the motion information of the N second surrounding image blocks may include:
In the first case, for the ith surrounding image block in the N second surrounding image blocks, when determining that the motion vector prediction mode of the ith surrounding image block is the inter mode, determining a reference frame image corresponding to the ith surrounding image block according to the reference frame index; further, a reference image block corresponding to the ith surrounding image block can be determined from the reference frame image according to the motion vector of the ith surrounding image block; wherein, the relative position offset of the reference image block and the ith surrounding image block can be matched with the motion vector of the ith surrounding image block; further, an image block with a second transverse length and a second longitudinal length can be obtained as an ith sub-template included in the second template according to the determined reference image block.
In case two, for the i-th peripheral image block of the N second peripheral image blocks, when it is determined that the motion vector prediction mode of the i-th peripheral image block is the intra mode, the i-th peripheral image block may be filled with a default value (for example, a default pixel value, which may be an empirically preconfigured luminance value). Further, based on the image blocks filled by the default values, the image blocks with the second transverse length and the second longitudinal length can be obtained as an ith sub-template included in the second template.
In the third case, for the ith surrounding image block among the N second surrounding image blocks, when it is determined that the motion vector prediction mode of the ith surrounding image block is the intra mode, the reference frame image corresponding to the ith surrounding image block is determined according to the reference frame index corresponding to the ith surrounding image block; further, the reference image block corresponding to the ith surrounding image block can be determined from the reference frame image according to the motion vector corresponding to the ith surrounding image block, where the relative position offset between the reference image block and the ith surrounding image block matches (including being equal or approximately equal to) the motion vector corresponding to the ith surrounding image block; further, an image block with the second transverse length and the second longitudinal length can be obtained as the ith sub-template included in the second template according to the determined reference image block. The reference frame index and motion vector corresponding to the ith surrounding image block are the reference frame index and motion vector of an adjacent image block of the ith surrounding image block.
The second lateral length and the lateral length of the second surrounding image block satisfy a fifth proportional relationship (e.g., 1:1, 1:2, 2:1, etc., without limitation), or the second lateral length and the lateral length of the current image block satisfy a sixth proportional relationship (e.g., 1:1, 1:2, 2:1, etc.), or the second lateral length is equal to a third preset length (configured empirically). The second longitudinal length and the longitudinal length of the second surrounding image block satisfy a seventh proportional relationship (e.g., 1:1, 1:2, 2:1, etc.), or the second longitudinal length and the longitudinal length of the current image block satisfy an eighth proportional relationship (e.g., 1:1, 1:2, 2:1, etc.), or the second longitudinal length is equal to a fourth preset length (i.e., a length configured empirically). Further, the fifth, sixth, seventh, and eighth proportional relationships may be the same or different, and the third preset length and the fourth preset length may be set to be the same or different.
The template of the current image block is described in detail below in connection with several specific application scenarios.
Application scenario 1: referring to fig. 3A, the surrounding image blocks may include all inter-mode neighboring image blocks on the upper side of the current image block, and all inter-mode neighboring image blocks on the left side of the current image block.
For the current image block A1, if there is an adjacent image block of the inter mode, for example, the image block A3 and the image block A4, on the left side, the image block A3 and the image block A4 of the inter mode may be determined as surrounding image blocks of the current image block A1. Similarly, if there is an adjacent image block of the inter mode, for example, image block A2, on the upper side, the image block A2 of the inter mode may be determined as a surrounding image block of the current image block A1.
When the surrounding image blocks are image blocks A2, A3 and A4, a template of the current image block A1 is acquired according to the motion information of the image block A2, the motion information of the image block A3 and the motion information of the image block A4. For the image block A2, determining a reference frame image corresponding to the image block A2 according to the reference frame index, selecting an image block B2 corresponding to the image block A2 from the reference frame images, moving the image block B2 according to the motion vector of the image block A2 to obtain a reference image block B2' corresponding to the image block A2, and obtaining a reference image block B3' corresponding to the image block A3 and a reference image block B4' corresponding to the image block A4, as shown in fig. 3B. A template of the current image block A1 is obtained from the reference image block B2', the reference image block B3' and the reference image block B4 '.
In one example, assuming that the lateral length of the upper template of the current image block A1 is W and the longitudinal length is S, the value of W may be empirically configured, the value of S may be empirically configured, and neither the values of W nor S are limited. For example, W may be the lateral length of the current image block A1, the lateral length of the surrounding image block A2, 2 times the lateral length of the current image block A1, etc., S may be the longitudinal length of the surrounding image block A2, 1/3 of the longitudinal length of the surrounding image block A2, etc. On this basis, referring to fig. 3C, a template diagram corresponding to the reference image block B2' is shown. In fig. 3C, taking W as an example of the lateral length of the surrounding image block A2, that is, W is the lateral length of the reference image block B2'; taking S as an example 1/3 of the longitudinal length of the surrounding image block A2, i.e. S is 1/3 of the longitudinal length of the reference image block B2'.
Assuming that the transverse length of the left template of the current image block A1 is R, the longitudinal length is H, the value of R can be configured empirically, the value of H can be configured empirically, and the values of R and H are not limited. For example, H may be the longitudinal length of the current image block A1, the longitudinal length of the surrounding image block A3, R may be the lateral length of the surrounding image block A3, 1/3 of the lateral length of the surrounding image block A3, and so on. On this basis, referring to fig. 3C, a template diagram corresponding to the reference image block B3' is shown, where H is taken as an example of the longitudinal length of the surrounding image block A3, and R is taken as an example of 1/3 of the lateral length of the surrounding image block A3.
Similarly, the template diagram corresponding to the reference image block B4' may be shown in fig. 3C, and will not be described again here.
In one example, assuming that there are M surrounding image blocks of different modes above the current image block, for the ith surrounding image block, whose lateral length is denoted w_i, the prediction mode of that surrounding image block is first determined.

If it is in intra mode, no sub-template is generated, or a sub-template is filled with a default value (such as a default pixel value, which may be an empirically preconfigured luminance value) and used as the ith sub-template of the upper template.

If it is in inter mode, the motion information (such as the motion vector and reference frame index) of the ith surrounding image block can be obtained, and a template with lateral length w_i and longitudinal length S is generated based on the motion vector and the reference frame index and used as the ith sub-template of the upper template. Specifically, if the motion vector is MV and the reference frame index is idx, a rectangular block with lateral length w_i and longitudinal length S, whose relative position offset is MV, can be found in the idx-th reference image of the current frame and used as the ith sub-template of the upper template.
Assuming that there are N surrounding image blocks of different modes on the left side of the current image block, for the ith surrounding image block, whose longitudinal length is denoted h_i and whose lateral length is R, the prediction mode of that surrounding image block is first determined.

If it is in intra mode, no sub-template is generated, or a sub-template is filled with a default value (such as a default pixel value, which may be an empirically preconfigured luminance value) and used as the ith sub-template of the left template.

If it is in inter mode, the motion information (such as the motion vector and reference frame index) of the ith surrounding image block can be obtained, and a template with lateral length R and longitudinal length h_i is generated based on the motion vector and the reference frame index and used as the ith sub-template of the left template. Specifically, if the motion vector is MV and the reference frame index is idx, a rectangular block with lateral length R and longitudinal length h_i, whose relative position offset is MV, can be found in the idx-th reference image of the current frame and used as the ith sub-template of the left template.
Further, the upper template can be formed by splicing all the sub-templates on the upper side, the left template can be formed by splicing all the sub-templates on the left side, and the upper template and the left template are spliced into the template of the current image block.
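As a rough sketch of this splicing step, assuming each sub-template has already been generated as a numpy array (the function name and the exact geometric arrangement are illustrative assumptions):

```python
import numpy as np

def splice_templates(upper_sub_templates, left_sub_templates):
    """Splice per-block sub-templates into the upper and left templates.

    upper_sub_templates: list of (S, w_i) arrays, one per surrounding block
    above; they sit side by side, so they are concatenated horizontally.
    left_sub_templates: list of (h_i, R) arrays, one per surrounding block on
    the left; they are stacked top to bottom, so concatenated vertically.
    The current block's template is the pair (upper, left), whose union forms
    an L-shaped region around the current image block.
    """
    upper = np.concatenate(upper_sub_templates, axis=1) if upper_sub_templates else None
    left = np.concatenate(left_sub_templates, axis=0) if left_sub_templates else None
    return upper, left
```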
Application scenario 2: referring to fig. 3D, the surrounding image blocks may include adjacent image blocks of the first inter mode on the upper side of the current image block, and adjacent image blocks of the first inter mode on the left side of the current image block.
For the current image block A1, if the first image block A3 on the left is the inter mode, the image block A3 may be determined as a surrounding image block of the current image block A1. If the first image block A2 on the upper side is the inter mode, the image block A2 may be determined as a surrounding image block of the current image block A1. When the surrounding image blocks are the image block A2 and the image block A3, the template of the current image block A1 is obtained according to the motion information of the image block A2 and the motion information of the image block A3, and the obtaining mode refers to the application scene 1, which is not described herein.
In one example, assuming that the lateral length of the upper template of the current image block A1 is W and the longitudinal length is S, the value of W may be empirically configured, the value of S may be empirically configured, and neither the values of W nor S are limited. For example, W may be the lateral length of the current image block A1, the lateral length of the surrounding image block A2, S may be the longitudinal length of the surrounding image block A2, 1/3 of the longitudinal length of the surrounding image block A2, and so on. Referring to fig. 3E, a template diagram corresponding to the reference image block B2' is shown.
Assuming that the transverse length of the left template of the current image block A1 is R, the longitudinal length is H, the value of R can be configured empirically, the value of H can be configured empirically, and the values of R and H are not limited. For example, H may be the longitudinal length of the current image block A1, the longitudinal length of the surrounding image block A3, etc., R may be the lateral length of the surrounding image block A3, 1/3 of the lateral length of the surrounding image block A3, etc. On this basis, referring to fig. 3E, a template diagram corresponding to the reference image block B3' is shown.
Of course, the above application scenario 1 and application scenario 2 are merely two examples, and no limitation is imposed. For example, the surrounding image blocks may include inter-mode adjacent image blocks on the upper side of the current image block, inter-mode sub-adjacent image blocks on the upper side of the current image block (i.e., when an adjacent image block is in intra mode, the sub-adjacent image block at the corresponding position is selected instead), inter-mode adjacent image blocks on the left side of the current image block, and sub-adjacent image blocks on the left side of the current image block. For another example, if the first adjacent image block on the upper side of the current image block is in intra mode and the image block above that adjacent image block is in inter mode, the surrounding image blocks may include the inter-mode sub-adjacent image block on the upper side of the current image block; if the first adjacent image block on the left side of the current image block is in intra mode and the image block to the left of that adjacent image block is in inter mode, the surrounding image blocks may include the inter-mode sub-adjacent image block on the left side of the current image block.
Example 11: for step 1052, in one embodiment, searching for target motion information centered on the original motion information based on the template of the current image block may include, but is not limited to, the following steps:
Step 1052A1, the original motion information is determined as center motion information.
In step 1052A2, edge motion information corresponding to the center motion information is determined.
Wherein the edge motion information may be different from the center motion information.
In one example, the raw motion information may include raw motion vectors, the center motion information may include center motion vectors, and the edge motion information may include edge motion vectors.
Wherein determining the edge motion vector corresponding to the center motion vector may include: the center motion vector (x, y) is shifted by S in different directions, so that edge motion vectors (x-S, y), (x+S, y), (x, y+S), and (x, y-S) in different directions can be obtained. For example, in the horizontal direction, the center motion vector (x, y) may be shifted to the left by S, resulting in the edge motion vector (x-S, y); in the horizontal direction, the center motion vector (x, y) may be shifted to the right by S, resulting in the edge motion vector (x+S, y); in the vertical direction, the center motion vector (x, y) may be shifted upward by S, resulting in the edge motion vector (x, y+S); in the vertical direction, the center motion vector (x, y) may be shifted downward by S, resulting in the edge motion vector (x, y-S).
The initial value of S may be empirically configured, such as 2, 4, 8, 16, etc.
Assuming that the center motion vector is (3, 3) and S is 4, the edge motion vectors are (7, 3), (3, 7), (-1, 3), and (3, -1).
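A one-function sketch of step 1052A2 under these conventions (the function name is illustrative):

```python
def edge_motion_vectors(center, s):
    """Offset the center motion vector (x, y) by s in four directions."""
    x, y = center
    return [(x - s, y), (x + s, y), (x, y + s), (x, y - s)]

# With center (3, 3) and S = 4 this yields (-1, 3), (7, 3), (3, 7) and (3, -1),
# matching the example above.
print(edge_motion_vectors((3, 3), 4))
```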
In the above embodiment, in order to obtain the edge motion information corresponding to the center motion information, searches offset upward, downward, leftward, and rightward are performed centered on the center motion information. In another example, the search need not be restricted to these four directions; for instance, a search may be performed along the direction of the original motion information.
Step 1052A3 obtains the prediction performance of the center motion information (center motion vector) from the template of the current image block, and obtains the prediction performance of the edge motion information (edge motion vector) from the template of the current image block.
In case one, the prediction performance of the center motion vector is obtained according to the template of the current image block, which may include, but is not limited to: and determining the prediction performance of the center motion vector according to the parameter information of the template of the current image block and the parameter information of a first target reference block, wherein the first target reference block can be an image block obtained after the reference image block corresponding to the template is offset based on the center motion vector. Specifically, the prediction performance of the center motion vector may be determined according to the parameter information of the template and the parameter information of the first target reference block.
Wherein the parameter information may be a luminance value, or may be a luminance value and a chrominance value.
Assuming that the parameter information is a luminance value, in order to determine the prediction performance of the center motion vector, the luminance value of the template of the current image block and the luminance value of the first target reference block may be acquired first. For example, after obtaining the template of the current image block, the luminance value of each pixel of the template may be obtained, and the reference image block of the template may be obtained, and assuming that the center motion vector is (3, 3), the reference image block may be moved by using the center motion vector (3, 3), to obtain an image block X corresponding to the reference image block (for example, the reference image block is moved to the right by 3 pixels, moved up by 3 pixels, and the processed image block is referred to as an image block X), where the image block X is the first target reference block, and the luminance value of each pixel of the image block X may be obtained.
Based on the luminance value of each pixel of the template and the luminance value of each pixel of the image block X, the prediction performance of the center motion vector can be determined using the following formula:

SAD = Σ_{i=1}^{M} | TM_i - TMP_i |

where SAD is the sum of absolute differences, used to represent the prediction performance of the center motion vector; TM_i represents the luminance value of the ith pixel of the template, TMP_i represents the luminance value of the ith pixel of the image block X, and M represents the total number of pixels.
Assuming that the parameter information is a luminance value and a chrominance value, the luminance prediction performance SAD of the center motion vector is determined using the formula

SAD = Σ_{i=1}^{M} | TM_i - TMP_i |

and the chrominance prediction performance of the center motion vector is determined using the formula

CSAD = Σ_{i=1}^{M_c} | CTM_i - CTMP_i |

The average of the luminance prediction performance SAD and the chrominance prediction performance CSAD is the prediction performance of the center motion vector. Here CSAD is the sum of absolute differences used to represent the chrominance prediction performance of the center motion vector, CTM_i represents the chrominance value of the ith pixel of the template, CTMP_i represents the chrominance value of the ith pixel of the image block X, and M_c represents the total number of chrominance pixels.
In case two, the prediction performance of the edge motion vector is obtained according to the template of the current image block, which may include but is not limited to: and determining the prediction performance of the edge motion vector according to the parameter information of the template of the current image block and the parameter information of a second target reference block, wherein the second target reference block can be an image block obtained after the reference image block corresponding to the template is offset based on the edge motion vector. Specifically, the prediction performance of the edge motion vector may be determined according to the parameter information of the template and the parameter information of the second target reference block.
Wherein the parameter information may be a luminance value, or may be a luminance value and a chrominance value.
Case two is similar to case one, the difference being that: in case two, the reference image block of the template is moved using the edge motion vector to obtain the second target reference block, and the prediction performance of the edge motion vector is obtained using the second target reference block; whereas in case one, the reference image block of the template is moved using the center motion vector to obtain the first target reference block, and the prediction performance of the center motion vector is obtained using the first target reference block.
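Both cases reduce to the same cost computation. A sketch follows, assuming the template and the target reference blocks are numpy arrays of luminance (and optionally chrominance) samples, with illustrative function names:

```python
import numpy as np

def sad(template, target_ref_block):
    """Sum of absolute differences between the template and a target reference
    block (the template's reference block shifted by the candidate motion
    vector); a lower value means better prediction performance."""
    return int(np.abs(template.astype(np.int32)
                      - target_ref_block.astype(np.int32)).sum())

def prediction_performance(tm_luma, tgt_luma, tm_chroma=None, tgt_chroma=None):
    """Luma-only SAD, or the average of the luma SAD and the chroma CSAD when
    chrominance samples are also used, per the formulas above."""
    cost = sad(tm_luma, tgt_luma)
    if tm_chroma is not None and tgt_chroma is not None:
        cost = (cost + sad(tm_chroma, tgt_chroma)) / 2.0
    return cost
```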
Step 1052A4 determines target motion information (e.g., a target motion vector) from the center motion information and the edge motion information based on the prediction performance of the center motion information and the prediction performance of the edge motion information.
Specifically, a motion vector with optimal prediction performance can be selected from the center motion vector and the edge motion vector; when the motion vector with optimal prediction performance is not the original motion vector, the motion vector with optimal prediction performance can be determined as a target motion vector; when the motion vector with the optimal prediction performance is the original motion vector, a motion vector with the suboptimal prediction performance can be selected from the center motion vector and the edge motion vector, and the motion vector with the suboptimal prediction performance can be determined as the target motion vector.
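The selection rule of step 1052A4 can be sketched as follows (names illustrative; the costs are the prediction performance values, lower being better):

```python
def select_target_mv(candidates, costs, original_mv):
    """Pick the candidate with the best (lowest) cost; if that candidate is
    the original motion vector itself, fall back to the second best."""
    order = sorted(range(len(candidates)), key=lambda i: costs[i])
    best = candidates[order[0]]
    if best == original_mv and len(order) > 1:
        return candidates[order[1]]
    return best

# Example: the center (original) vector is best, so the runner-up is chosen.
print(select_target_mv([(3, 3), (7, 3), (3, 7)], [10, 12, 15], (3, 3)))  # (7, 3)
```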
Example 12: for step 1052, in another embodiment, searching for target motion information centered on the original motion information based on the template of the current image block may include, but is not limited to, the following steps:
Step 1052B1, the original motion information is determined as center motion information.
In step 1052B2, edge motion information corresponding to the center motion information is determined.
Step 1052B3, obtaining the prediction performance of the center motion information according to the template of the current image block, and obtaining the prediction performance of the edge motion information according to the template of the current image block.
Steps 1052B1 to 1052B3 are performed with reference to steps 1052A1 to 1052A3, and are not described in detail here.
Step 1052B4, determine whether the iteration end condition of the target motion information is satisfied.
If so, step 1052B6 may be performed; if not, step 1052B5 may be performed.
The iteration end condition may include, but is not limited to: the number of iterations reaches a number threshold, or the execution time reaches a time threshold, or the parameter S has been modified to a preset value, such as 1.
Of course, the above are just a few examples of iteration end conditions, and the iteration end conditions are not limited thereto.
Step 1052B5, selecting the motion information with the optimal prediction performance from the center motion information and the edge motion information, determining the motion information with the optimal prediction performance as the center motion information, and returning to step 1052B2.
When step 1052B2 is performed for the first time, the value of the parameter S may be an initial value, such as 16. When step 1052B2 is performed again, the value of the parameter S is adjusted, for example, to half of its previous value; the adjustment is not limited to halving, as long as the adjusted value is smaller than the previous one, and halving is taken as the example below. Therefore, when step 1052B2 is performed for the second time, the value of the parameter S is 8; when step 1052B2 is performed for the third time, the value of the parameter S is 4; and so on.
After the value of the parameter S is adjusted, it is determined whether the adjusted parameter S is less than or equal to a preset value, such as 1. If not, step 1052B2 is performed based on the adjusted parameter S, and the process will not be described again. If so, the value of the parameter S is set to 1, and step 1052B2 is executed based on the parameter S (i.e., the value of 1), and when step 1052B4 is executed, the judgment result is that the iteration end condition is satisfied.
Step 1052B6, determining the target motion information (e.g., target motion vector) from the center motion information and the edge motion information according to the prediction performance of the center motion information and the prediction performance of the edge motion information.
The process of step 1052B6 may refer to step 1052A4, and the detailed description is not repeated here.
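Putting Example 12 together, a simplified sketch of the iterative search with the halving strategy; cost_fn is an assumed callable returning the template-based prediction performance, and the final step 1052B6 selection is abbreviated here to recentering on the best candidate:

```python
def iterative_search(original_mv, cost_fn, s_init=16):
    """Iteratively evaluate the center and its four edge candidates,
    recenter on the best one, and halve S until it reaches 1 (the
    iteration end condition used in the text)."""
    center, s = original_mv, s_init
    while True:
        x, y = center
        candidates = [center, (x - s, y), (x + s, y), (x, y + s), (x, y - s)]
        center = min(candidates, key=cost_fn)  # step 1052B5: best becomes center
        if s <= 1:          # iteration end condition: S reached the preset value 1
            break
        s = max(1, s // 2)  # halving adjustment described above
    return center
```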
Example 13: for step 105, the current image block is a bi-directional inter-frame prediction block, and the original motion information includes a first reference frame and a second reference frame corresponding to the current image block, a first motion vector corresponding to the first reference frame, and a second motion vector corresponding to the second reference frame; referring to fig. 4, which is a flowchart of a decoding method, for determining target motion information of a current image block according to original motion information, it may include:
Step 401, a first initial reference block (image block in the first reference frame) corresponding to the current image block is acquired from the first reference frame according to the first motion vector, and a second initial reference block (image block in the second reference frame) corresponding to the current image block is acquired from the second reference frame according to the second motion vector.
Step 402, based on the similarity between the first initial reference block and the second initial reference block, a first target reference block (an image block in the first reference frame) is searched for in the first reference frame among image blocks offset from the first motion vector in a plurality of offset directions and by a plurality of offset amounts, and a second target reference block (an image block in the second reference frame) is searched for in the second reference frame among image blocks offset from the second motion vector in a plurality of offset directions and by a plurality of offset amounts. The similarity between the first target reference block and the second target reference block meets a preset requirement.
In one example, the similarity between the first initial reference block and the second initial reference block may be determined as follows: the predicted value of the first initial reference block and the predicted value of the second initial reference block are determined by bilinear interpolation; a downsampling result of the predicted values of the first and second initial reference blocks is acquired, the SAD (sum of absolute differences) or MRSAD (mean-removed sum of absolute differences) of the first and second initial reference blocks is determined based on the downsampling result, and the similarity between the first and second initial reference blocks is determined based on the SAD or MRSAD; that is, the SAD or MRSAD may serve as the similarity.
In order to search for the first target reference block and the second target reference block, the following manner may be adopted:
Determining the first motion vector as a first center motion vector and the second motion vector as a second center motion vector; then, a first edge motion vector corresponding to the first center motion vector is determined, and a second edge motion vector corresponding to the second center motion vector is determined. Then, a first initial reference block corresponding to the current image block is determined from the first reference frame according to the first center motion vector, and a first predicted value of the first initial reference block is determined. And determining a second initial reference block corresponding to the current image block from a second reference frame according to the second center motion vector, and determining a second predicted value of the second initial reference block. And determining a third initial reference block corresponding to the current image block from the first reference frame according to the first edge motion vector, and determining a third predicted value of the third initial reference block. And determining a fourth initial reference block corresponding to the current image block from the second reference frame according to the second edge motion vector, and determining a fourth predicted value of the fourth initial reference block.
Further, the first similarity may be determined according to the first predicted value and the second predicted value, and the second similarity may be determined according to the third predicted value and the fourth predicted value. Then, the first target reference block and the second target reference block may be determined according to the first similarity and the second similarity. Specifically, if the first similarity is higher than the second similarity, determining the first initial reference block as a first target reference block, and determining the second initial reference block as a second target reference block; if the second similarity is higher than the first similarity, a third initial reference block may be determined as the first target reference block and a fourth initial reference block may be determined as the second target reference block.
In one example, before determining the first target reference block and the second target reference block according to the first similarity and the second similarity, it may also be determined whether an end condition (such as the number of iterations reaching a threshold, or the execution time reaching a time threshold) is met. If so, the step of determining the first target reference block and the second target reference block according to the first similarity and the second similarity is performed. If not, then: if the first similarity is higher than the second similarity, the step of determining a first edge motion vector corresponding to the first center motion vector and a second edge motion vector corresponding to the second center motion vector is performed again; if the second similarity is higher than the first similarity, the first edge motion vector is determined as the first center motion vector, the second edge motion vector is determined as the second center motion vector, and the step of determining a first edge motion vector corresponding to the first center motion vector and a second edge motion vector corresponding to the second center motion vector is performed again.
In the above embodiment, the first similarity may include, but is not limited to, SAD or MRSAD, and the second similarity may include, but is not limited to, SAD or MRSAD.
In the above embodiment, determining the first similarity according to the first predicted value and the second predicted value includes: the first predicted value and the second predicted value can be downsampled, and the first similarity is determined by using the downsampled first predicted value and the downsampled second predicted value; determining the second similarity according to the third predicted value and the fourth predicted value, including: the third predictor and the fourth predictor may be downsampled, and the second similarity may be determined using the downsampled third predictor and the downsampled fourth predictor.
In the above embodiment, an interpolation strategy whose numbers of horizontal and vertical taps are both less than or equal to N may be adopted to determine the first prediction value of the first initial reference block (N is preferably 2, i.e., bilinear interpolation; of course, N may take other values, which is not limited). Likewise, an interpolation strategy with horizontal and vertical tap numbers less than or equal to N may be adopted to determine the second prediction value of the second initial reference block, the third prediction value of the third initial reference block, and the fourth prediction value of the fourth initial reference block.
In the above embodiment, determining the first edge motion vector corresponding to the first center motion vector and the second edge motion vector corresponding to the second center motion vector may include: mode 1: performing offset processing (comprising direction and offset) on the first center motion vector to obtain a first edge motion vector; and performing offset processing on the second center motion vector to obtain a second edge motion vector. Mode 2: performing offset processing on the first center motion vector to obtain a first edge motion vector; deriving a difference value between the second edge motion vector and the second center motion vector according to the difference value between the first edge motion vector and the first center motion vector, and determining the second edge motion vector by using the difference value and the second center motion vector; for example, if the first edge motion vector corresponds to the difference a with the first center motion vector, the second edge motion vector is the difference a with the second center motion vector, and therefore the second edge motion vector is the sum of the second center motion vector and the difference a; for another example, if the first edge motion vector corresponds to the difference a with the first center motion vector, the second edge motion vector is-1 x the difference a with the second center motion vector, and therefore the second edge motion vector is the difference between the second center motion vector and the difference a.
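A brief sketch of mode 2 (mirrored derivation of the second edge motion vector; the function name is illustrative):

```python
def edge_mvs_mode2(center1, center2, offset):
    """Mode 2: offset the first center motion vector directly, then derive the
    second edge motion vector from the same difference, either reused as-is
    (difference A) or negated (-1 * difference A), as described above."""
    dx, dy = offset
    edge1 = (center1[0] + dx, center1[1] + dy)
    edge2_same = (center2[0] + dx, center2[1] + dy)    # second = center2 + A
    edge2_mirror = (center2[0] - dx, center2[1] - dy)  # second = center2 - A
    return edge1, edge2_same, edge2_mirror
```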
In step 403, the motion information of the first target reference block is determined as the target motion information of the current image block, and the motion information of the second target reference block is determined as the target motion information of the current image block.
Example 14: the current image block may have original motion information M, which may include a first motion vector M1 and a second motion vector M2. The first motion vector M1 may correspond to the reference frame A1, and the second motion vector M2 may correspond to the reference frame A2.
The first motion vector M1 is determined as a first center motion vector T1, and the second motion vector M2 is determined as a second center motion vector T2. Shifting the first central motion vector T1 to a certain direction to obtain a first edge motion vector W1, and shifting the second central motion vector T2 to a certain direction to obtain a second edge motion vector W2; of course, the above is merely an example of determining edge motion information, and is not limited in this regard.
Wherein a first initial reference block corresponding to the current image block may be determined from the reference frame A1 according to the first center motion vector T1, and the first predicted value of the first initial reference block may be determined. An interpolation strategy whose numbers of horizontal and vertical taps are both less than or equal to N may be used to determine the first predicted value of the first initial reference block, that is, the first predicted value (such as a pixel value, a luminance value, a chrominance value, etc.) of each pixel point in the first initial reference block. Specifically, if the search is a sub-pixel search, that is, the minimum accuracy of the search is less than 1 pixel, a pixel point in the first initial reference block may not be an integer pixel point; it may lie, for example, at position 0.75. To obtain the pixel value of such a pixel point, an interpolation strategy with horizontal and vertical tap numbers not larger than N may be used; for example, if the horizontal and vertical tap numbers are both 2, the interpolation strategy is a bilinear interpolation strategy, that is, bilinear interpolation may be used to determine the first predicted value (such as the pixel value) of the 0.75 pixel point, for instance by utilizing the four truly existing pixel values around it.
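A minimal bilinear interpolation sketch for this sub-pixel case; it assumes the four surrounding integer pixels are within the frame bounds:

```python
import numpy as np

def bilinear_sample(frame, x, y):
    """Estimate the predicted value at a fractional position (e.g., x = 0.75)
    from the four truly existing neighboring pixels (2 horizontal and 2
    vertical taps)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    p00, p01 = float(frame[y0, x0]), float(frame[y0, x0 + 1])
    p10, p11 = float(frame[y0 + 1, x0]), float(frame[y0 + 1, x0 + 1])
    top = (1 - fx) * p00 + fx * p01        # interpolate along the upper row
    bottom = (1 - fx) * p10 + fx * p11     # interpolate along the lower row
    return (1 - fy) * top + fy * bottom    # interpolate vertically
```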
Wherein, a second initial reference block corresponding to the current image block can be determined from the reference frame A2 according to the second center motion vector T2, and a second predicted value of the second initial reference block is determined. A third initial reference block corresponding to the current image block is determined from the reference frame A1 according to the first edge motion vector W1, and a third predicted value of the third initial reference block is determined. A fourth initial reference block corresponding to the current image block is determined from the reference frame A2 according to the second edge motion vector W2, and a fourth predicted value of the fourth initial reference block is determined.
Further, the similarity 1 may be determined according to the first predicted value and the second predicted value, and the similarity 2 may be determined according to the third predicted value and the fourth predicted value. When determining the similarity 1 and the similarity 2, if the similarity is measured by SAD, then for each pixel point of the first initial reference block (e.g., P1-P100) and each pixel point of the second initial reference block (e.g., Q1-Q100), the similarity 1 is determined as follows:
Firstly, the predicted values of the pixel points P1-P100 may be downsampled, and the predicted values of the pixel points Q1-Q100 may be downsampled, for example, if the downsampling rate is 1, the downsampled result is the predicted values of the pixel points P1-P100 and the predicted values of the pixel points Q1-Q100; if the downsampling rate is 2, the result of downsampling is the predicted values of the pixel points P1-P50 and the predicted values of the pixel points Q1-Q50; if the downsampling rate is 4, the result of downsampling is the predicted values of the pixel points P1-P25 and the predicted values of the pixel points Q1-Q25; and so on.
Assuming that the result of the downsampling is the predicted values of the pixel points P1-P25 and the predicted values of the pixel points Q1-Q25, the absolute value of the difference between the predicted value of P1 and the predicted value of Q1 (denoted X1), the absolute value of the difference between the predicted value of P2 and the predicted value of Q2 (denoted X2), and so on, up to the absolute value of the difference between the predicted value of P25 and the predicted value of Q25 (denoted X25), are calculated; then the average value of X1, X2, ..., X25 may be calculated and determined as the similarity 1. Similarly, the similarity 2 may be determined in the manner described above.
When determining the similarity 1 and the similarity 2, if the similarity is measured by MRSAD, then for each pixel point of the first initial reference block (such as P1-P100) and each pixel point of the second initial reference block (such as Q1-Q100), the similarity 1 is determined as follows: first, the predicted values of the pixel points P1-P100 are downsampled, and the predicted values of the pixel points Q1-Q100 are downsampled; for example, if the downsampling rate is 4, the downsampled result is the predicted values of the pixel points P1-P25 and the predicted values of the pixel points Q1-Q25.
Determining an average value A of the predicted values of P1-P25 and an average value B of the predicted values of Q1-Q25; calculating the difference C1 between the predicted value of P1 and the average value A and the difference D1 between the predicted value of Q1 and the average value B, and calculating the absolute value of the difference between C1 and D1 (denoted Y1); and so on, calculating the difference C25 between the predicted value of P25 and the average value A and the difference D25 between the predicted value of Q25 and the average value B, and calculating the absolute value of the difference between C25 and D25 (denoted Y25); then the average value of Y1, Y2, ..., Y25 may be calculated and determined as the similarity 1. Similarly, the similarity 2 may be determined in the manner described above.
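The SAD and MRSAD similarity computations of this example might be sketched as follows; treating the downsampling as a simple stride over the flattened predicted values is an assumption of this sketch:

```python
import numpy as np

def downsample(values, rate):
    """Keep every rate-th predicted value (rate 1 keeps all, rate 2 keeps
    half, rate 4 keeps a quarter, as in the example above)."""
    return values[::rate]

def sad_similarity(p, q, rate=4):
    """Average absolute difference of the downsampled predictions (the
    X1..X25 averaging above); a smaller value means higher similarity."""
    p = downsample(np.asarray(p, dtype=np.float64), rate)
    q = downsample(np.asarray(q, dtype=np.float64), rate)
    return float(np.abs(p - q).mean())

def mrsad_similarity(p, q, rate=4):
    """MRSAD: subtract each block's mean before taking absolute differences
    (the C/D construction above)."""
    p = downsample(np.asarray(p, dtype=np.float64), rate)
    q = downsample(np.asarray(q, dtype=np.float64), rate)
    return float(np.abs((p - p.mean()) - (q - q.mean())).mean())
```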
After determining the similarity 1 and the similarity 2, whether an ending condition is met is judged. If not: if the similarity 1 is higher than the similarity 2 (the smaller the value of the similarity, the higher the similarity), a first edge motion vector W3 is searched again according to the first center motion vector T1 (W3 being different from the first edge motion vector W1 described above), a second edge motion vector W4 is searched again according to the second center motion vector T2 (W4 being different from the second edge motion vector W2 described above), and the above steps are re-executed based on the first center motion vector T1, the first edge motion vector W3, the second center motion vector T2, and the second edge motion vector W4, which are not described again here. If the similarity 2 is higher than the similarity 1, the first edge motion vector W1 is determined as a new first center motion vector T3, the second edge motion vector W2 is determined as a new second center motion vector T4, a first edge motion vector W5 is searched according to the first center motion vector T3, a second edge motion vector W6 is searched according to the second center motion vector T4, and the above steps are re-executed based on the first center motion vector T3, the first edge motion vector W5, the second center motion vector T4, and the second edge motion vector W6, which are not repeated here.
After judging whether the ending condition is met, if so, determining a first target reference block and a second target reference block according to the first similarity and the second similarity (such as similarity 1 and similarity 2). If the similarity 1 is higher than the similarity 2, the first initial reference block may be determined as a first target reference block and the second initial reference block may be determined as a second target reference block; if the similarity 2 is higher than the similarity 1, the third initial reference block may be determined as the first target reference block and the fourth initial reference block may be determined as the second target reference block.
Example 15: for step 105, the original motion information includes a first original motion vector and a second original motion vector corresponding to the current image block; the determining the target motion information of the current image block according to the original motion information may include, but is not limited to: determining a first target motion vector of the current image block according to the first original motion vector, wherein the specific determination mode is as described in the above embodiment; then, determining a difference value between the first original motion vector and the first target motion vector; determining a second target motion vector from the second original motion vector and the difference value; and determining target motion information according to the first target motion vector and the second target motion vector.
For example, the reference direction of the original motion information may be bi-directional, i.e., the original motion information may comprise a first original motion vector and a second original motion vector. Assume that the first original motion vector is (V0x, V0y) and the second original motion vector is (V1x, V1y). Taking the first original motion vector (V0x, V0y) as an example of a motion information component of the original motion information, the implementations of embodiments 9-14 may be adopted to determine the first target motion vector corresponding to the first original motion vector; the determination process is not limited here. Assume that the first target motion vector is (V0x+offset_x, V0y+offset_y).
On this basis, the difference between the first target motion vector and the first original motion vector is (offset_x, offset_y), and the second target motion vector corresponding to the second original motion vector (V1x, V1y) may be derived directly from (offset_x, offset_y); for example, the second target motion vector may be (V1x-offset_x, V1y-offset_y). Of course, this is merely an example and is not limiting.
Further, the first target motion vector (V0x+offset_x, V0y+offset_y) and the second target motion vector (V1x-offset_x, V1y-offset_y) may be determined as the target motion information.
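A short C++ sketch of this mirrored derivation is given below; the MotionVector type and function name are illustrative assumptions, not the patent's API.

    #include <cstdint>

    struct MotionVector { std::int32_t x = 0, y = 0; };

    // Only the first original motion vector is refined; the second target
    // vector is obtained by applying the opposite offset to the second
    // original vector, as described above.
    MotionVector DeriveSecondTarget(const MotionVector& orig0,
                                    const MotionVector& target0,  // refined vector
                                    const MotionVector& orig1) {
        std::int32_t offsetX = target0.x - orig0.x;     // offset_x
        std::int32_t offsetY = target0.y - orig0.y;     // offset_y
        return {orig1.x - offsetX, orig1.y - offsetY};  // (V1x-offset_x, V1y-offset_y)
    }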
Example 16: for step 106, decoding the encoded bitstream according to the original motion information or the target motion information may include: the decoding end may obtain fourth indication information from the encoded bitstream, where the fourth indication information indicates whether the encoded bitstream is to be decoded using the original motion information or using the target motion information. Of course, this manner is merely an example and is not limiting.
For example, if the encoding end encodes the bitstream using the original motion information, the fourth indication information added by the encoding end to the encoded bitstream may be a first identifier, indicating that the bitstream was encoded using the original motion information; if the encoding end encodes the bitstream using the target motion information, the fourth indication information may be a second identifier, indicating that the bitstream was encoded using the target motion information. On this basis, after the decoding end obtains the fourth indication information from the encoded bitstream, if the fourth indication information is the first identifier, the decoding end decodes the encoded bitstream according to the original motion information; if the fourth indication information is the second identifier, it decodes the encoded bitstream according to the target motion information.
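A minimal decoder-side sketch of this selection might look as follows, where the boolean flag standing in for the fourth indication information and the MotionInfo type are assumptions for illustration; the actual bitstream syntax is not specified here.

    // First identifier -> original motion information,
    // second identifier -> target motion information.
    struct MotionInfo { /* motion vectors, reference indices, direction */ };

    const MotionInfo& SelectMotionInfo(bool useTargetFlag,  // fourth indication
                                       const MotionInfo& original,
                                       const MotionInfo& target) {
        return useTargetFlag ? target : original;
    }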
At the encoding end, the bitstream may be pre-coded using the original motion information and pre-coded using the target motion information, so that the coding performance of the original motion information and the coding performance of the target motion information can be compared based on the rate-distortion principle. If the coding performance of the original motion information is better, the bitstream is encoded using the original motion information; if the coding performance of the target motion information is better, the bitstream is encoded using the target motion information. Of course, this manner is merely an example and is not limiting.
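As a sketch, the encoder-side decision could be expressed with a conventional Lagrangian rate-distortion cost; that particular cost form is an assumption here, since the patent only requires that the better-performing candidate be chosen.

    // Classic Lagrangian rate-distortion cost: distortion + lambda * bits.
    double RdCost(double distortion, double lambda, double bits) {
        return distortion + lambda * bits;
    }

    // True when pre-coding with the target motion information is cheaper.
    bool UseTargetMotionInfo(double distOrig, double bitsOrig,
                             double distTarget, double bitsTarget, double lambda) {
        return RdCost(distTarget, lambda, bitsTarget) < RdCost(distOrig, lambda, bitsOrig);
    }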
Example 17: after step 106, final motion information of the current image block may also be determined. For example, if the encoded bitstream is decoded according to the original motion information, the original motion information is stored as the final motion information; if the encoded bitstream is decoded according to the target motion information, the target motion information is stored as the final motion information. The final motion information is used as a decoding reference for subsequent image blocks, that is, it may be referred to during the processing of other image blocks.
Example 18:
Referring to fig. 5, which shows a flowchart of an encoding method applicable to an encoding end, the method may include:
Step 501, a motion model of a current image block is obtained. The motion model may include, but is not limited to: a 2-parameter motion model (e.g., a 2-parameter motion vector), a 4-parameter motion model (e.g., a 4-parameter affine model), a 6-parameter motion model (e.g., an affine model), and an 8-parameter motion model (e.g., a projection model).
Step 502, a motion information list of the current image block is built according to the motion model.
Step 503, selecting alternative motion information from the motion information list.
Step 504, determining the original motion information of the current image block according to the selected alternative motion information and the difference information of the current image block. Specifically, before step 504, the difference information of the current image block may be acquired, and then the original motion information of the current image block may be determined according to the alternative motion information and the difference information.
Step 505, determining target motion information of the current image block according to the original motion information, where the target motion information may be motion information different from the original motion information.
Step 506, the current image block is encoded according to the original motion information or the target motion information.
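An end-to-end outline of steps 501-506 in C++ follows; every type and helper is an illustrative stub so the control flow can be read in one place, with the real mechanisms given by the embodiments referenced below.

    #include <vector>

    struct Block {};
    struct MotionModel {};
    struct MotionInfo { int mvx = 0, mvy = 0; };

    MotionModel GetMotionModel(const Block&) { return {}; }                // step 501
    std::vector<MotionInfo> BuildList(const Block&, const MotionModel&) {  // step 502
        return {MotionInfo{}};
    }
    MotionInfo SelectCandidate(const std::vector<MotionInfo>& list) {      // step 503
        return list.front();
    }
    MotionInfo GetDifferenceInfo(const Block&) { return {}; }
    MotionInfo DeriveOriginal(const MotionInfo& cand, const MotionInfo& d) {  // step 504
        return {cand.mvx + d.mvx, cand.mvy + d.mvy};
    }
    MotionInfo DeriveTarget(const MotionInfo& orig) { return orig; }       // step 505

    void EncodeCurrentBlock(const Block& cur) {
        MotionModel model = GetMotionModel(cur);
        MotionInfo cand   = SelectCandidate(BuildList(cur, model));
        MotionInfo orig   = DeriveOriginal(cand, GetDifferenceInfo(cur));
        MotionInfo target = DeriveTarget(orig);
        // Step 506: pre-code with both candidates and keep the better one (omitted).
        (void)target;
    }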
As can be seen from the above technical solution, in the embodiments of the present application, the target motion information of the current image block may be determined according to the original motion information, and encoding may be performed according to either the original motion information or the target motion information, rather than always encoding directly according to the original motion information. This improves the accuracy of the motion information, alleviates problems such as low prediction quality and prediction errors, improves encoding performance and efficiency, and reduces encoding delay.
Example 19:
For step 501, a motion model of the current image block is obtained; for its implementation, reference may be made to embodiment 2, which is not repeated here. For step 502, a motion information list of the current image block is established according to the motion model; for its implementation, reference may be made to embodiments 3-5, which is not repeated here. For step 503, alternative motion information is selected from the motion information list; for its implementation, reference may be made to embodiment 6, which is not repeated here.
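To make the "4-parameter motion model" of step 501 concrete, one common parameterization (translation plus rotation/zoom, derived from two control-point vectors) is sketched below; the patent does not mandate this exact form, so the formula and names are assumptions for illustration.

    struct MotionVector { double x = 0, y = 0; };

    // Per-position motion vector from a 4-parameter affine model, given the
    // control-point vectors mv0 (top-left corner) and mv1 (top-right corner)
    // of a block of width w.
    MotionVector AffineMv4Param(const MotionVector& mv0, const MotionVector& mv1,
                                double w, double x, double y) {
        double a = (mv1.x - mv0.x) / w;   // horizontal gradient
        double b = (mv1.y - mv0.y) / w;   // vertical gradient
        return {a * x - b * y + mv0.x,
                b * x + a * y + mv0.y};
    }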
Before step 504, the difference information of the current image block may be obtained; for the specific implementation, reference may be made to embodiment 7, which is not repeated here. For step 504, the original motion information of the current image block is determined according to the selected candidate motion information and the difference information; for its implementation, reference may be made to embodiment 8, which is not repeated here.
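For a translational model, step 504 can be pictured as the following one-line derivation, where treating the difference information as an MVD-style vector offset is an assumption for illustration.

    struct MotionVector { int x = 0, y = 0; };

    // The original motion vector is recovered by adding the signalled
    // difference information to the selected candidate.
    MotionVector RecoverOriginalMv(const MotionVector& candidate,
                                   const MotionVector& difference) {
        return {candidate.x + difference.x, candidate.y + difference.y};
    }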
For step 505, the target motion information of the current image block is determined according to the original motion information; for its implementation, reference may be made to embodiments 9-15, which is not repeated here. For step 506, the current image block is encoded according to the original motion information or the target motion information, which can be implemented as described in embodiment 16.
After encoding the current image block according to the original motion information or the target motion information, final motion information of the current image block may also be determined. For example, if the current image block is encoded according to the original motion information, the original motion information is stored as the final motion information; if the current image block is encoded according to the target motion information, the target motion information is stored as the final motion information.
The final motion information is used as an encoding reference for subsequent image blocks, that is, it may be referred to during the processing of other image blocks, which is not limited here.
Example 20:
Referring to fig. 6, a video encoding framework is shown, and the encoding method may be implemented using this framework; the video decoding framework is similar to fig. 6, and the decoding method may be implemented using it, which is not repeated here. Specifically, the video encoding framework/video decoding framework may include modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and entropy coding. At the encoding end, the encoding method can be realized through the cooperation of these modules; at the decoding end, the decoding method can likewise be realized through their cooperation.
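As a deliberately tiny illustration of how these modules cooperate on one inter-coded block, the toy sketch below folds transform and quantization into a single stand-in step; it is not the framework's real module API, and every step is a placeholder.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using Samples = std::vector<std::int16_t>;

    // Motion compensation yields the prediction; the residual passes through
    // a toy stand-in for transform/quantization; the reconstruction would
    // feed the reference picture buffer after in-loop filtering.
    Samples EncodeAndReconstruct(const Samples& current, const Samples& prediction) {
        Samples recon(current.size());
        for (std::size_t i = 0; i < current.size(); ++i) {
            std::int16_t residual = current[i] - prediction[i]; // after motion compensation
            std::int16_t coeff    = residual / 4;               // stand-in for transform + quantization
            std::int16_t dequant  = coeff * 4;                  // inverse quantization + transform
            recon[i] = prediction[i] + dequant;                 // reconstruction
        }
        return recon;  // in-loop filtering would run before buffering as reference
    }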
Example 21:
Based on the same application concept as the above method, the embodiment of the present application further provides a decoding device, applied to a decoding end, as shown in fig. 7, which is a structural diagram of the device, where the device includes:
an obtaining module 71, configured to obtain a motion model of a current image block;
a building module 72, configured to build a motion information list of the current image block according to the motion model;
a selection module 73 for selecting alternative motion information from the motion information list;
a determining module 74, configured to determine original motion information of the current image block according to the selected candidate motion information and difference information of the current image block; and
Determining target motion information of the current image block according to the original motion information;
a decoding module 75 for decoding the coded bit stream based on the original motion information or the target motion information.
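A structural C++ sketch of this device follows, with each module of fig. 7 mapped to one step; all types and bodies are illustrative stubs. The encoding device of fig. 8 (Example 22 below) mirrors this structure, with an encoding module in place of the decoding module.

    #include <vector>

    struct MotionModel {};
    struct MotionInfo {};
    struct Bitstream {};

    class DecodingDevice {
    public:
        void Decode(const Bitstream& bs) {
            MotionModel model = ObtainMotionModel();       // obtaining module 71
            auto list = BuildMotionInfoList(model);        // building module 72
            MotionInfo cand = SelectCandidate(list);       // selection module 73
            MotionInfo orig = DetermineOriginal(cand);     // determining module 74
            MotionInfo target = DetermineTarget(orig);     // determining module 74
            DecodeBitstream(bs, orig, target);             // decoding module 75
        }
    private:
        MotionModel ObtainMotionModel() { return {}; }
        std::vector<MotionInfo> BuildMotionInfoList(const MotionModel&) { return {MotionInfo{}}; }
        MotionInfo SelectCandidate(const std::vector<MotionInfo>& l) { return l.front(); }
        MotionInfo DetermineOriginal(const MotionInfo& m) { return m; }
        MotionInfo DetermineTarget(const MotionInfo& m) { return m; }
        void DecodeBitstream(const Bitstream&, const MotionInfo&, const MotionInfo&) {}
    };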
Example 22:
Based on the same application concept as the above method, the embodiment of the present application further provides an encoding device, applied to an encoding end, as shown in fig. 8, which is a structural diagram of the device, where the device includes:
an obtaining module 81 for obtaining a motion model of the current image block;
a building module 82, configured to build a motion information list of the current image block according to the motion model;
a selection module 83, configured to select alternative motion information from the motion information list;
a determining module 84, configured to determine original motion information of the current image block according to the selected candidate motion information and difference information of the current image block; and
determining target motion information of the current image block according to the original motion information;
an encoding module 85, configured to encode the current image block according to the original motion information or the target motion information.
Example 23:
From the hardware level, a schematic diagram of the hardware architecture of the decoding end device provided by the embodiment of the present application may be shown in fig. 9. The device includes a processor 91 and a machine-readable storage medium 92, where the machine-readable storage medium 92 stores machine-executable instructions executable by the processor 91, and the processor 91 is configured to execute the machine-executable instructions to implement the decoding method disclosed in the above examples of the present application.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium storing a number of computer instructions which, when executed by a processor, implement the decoding method disclosed in the above examples of the present application.
Example 24:
From the hardware level, a schematic diagram of the hardware architecture of the encoding end device provided by the embodiment of the present application may be shown in fig. 10. The device includes a processor 93 and a machine-readable storage medium 94, where the machine-readable storage medium 94 stores machine-executable instructions executable by the processor 93, and the processor 93 is configured to execute the machine-executable instructions to implement the encoding method disclosed in the above examples of the present application.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium storing a number of computer instructions which, when executed by a processor, implement the encoding method disclosed in the above examples of the present application.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions or data. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state disk, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (9)

1. A decoding method, applied to a decoding end, the method comprising:
acquiring a 2-parameter translational motion model of a current image block;
establishing a motion information list of the current image block according to the 2-parameter translational motion model;
selecting alternative motion information from the motion information list;
determining original motion information of the current image block according to the alternative motion information;
determining the original motion information as center motion information, determining edge motion information corresponding to the center motion information, and obtaining a predicted value of the center motion information and a predicted value of the edge motion information; determining target motion information of the current image block from the center motion information and the edge motion information according to the predicted value of the center motion information and the predicted value of the edge motion information; wherein the obtaining the predicted value of the center motion information and the predicted value of the edge motion information includes: acquiring a template corresponding to the current image block, and acquiring the predicted value of the center motion information and the predicted value of the edge motion information according to the template;
and decoding the coded bit stream according to the original motion information or the target motion information.
2. The method of claim 1, wherein:
The motion information list comprises motion information of candidate image blocks corresponding to the current image block, and the motion information of the candidate image blocks comprises motion vectors and motion directions corresponding to fixed reference frames;
wherein the candidate image block includes at least one of: an image block adjacent to the current image block in the current frame where the current image block is located; an image block not adjacent to the current image block in the current frame where the current image block is located; or an image block in an adjacent frame of the current frame where the current image block is located;
wherein the image blocks in the adjacent frames of the current frame where the current image block is located include: a reference image block in the adjacent frame at the same position as the current image block, and adjacent image blocks of the reference image block.
3. The method of claim 2, wherein the list of motion information further comprises default motion information comprising a zero vector and a zero reference frame index.
4. The method of claim 2, wherein:
for the motion information of a plurality of candidate image blocks in the motion information list, an adding order of the motion information of each candidate image block is determined according to the positional relationship between each candidate image block and the current image block; and the motion information of each candidate image block is added to the motion information list in sequence according to the adding order, until the amount of motion information in the motion information list reaches a preset amount;
wherein, when the motion information of each candidate image block is added to the motion information list in sequence according to the adding order, the motion information to be added is compared with the motion information already in the motion information list; if it is the same as motion information in the motion information list, it is not added to the motion information list, and if it is different, it is added to the motion information list.
5. The method of claim 1, wherein the obtaining of the template corresponding to the current image block comprises:
determining a predicted value of the current image block by using the original motion information of the current image block;
and acquiring a template corresponding to the current image block according to the predicted value of the current image block.
6. The method of claim 1, wherein the current image block is a bi-directional inter-prediction block, and the original motion information includes a first reference frame and a second reference frame corresponding to the current image block, a first motion vector corresponding to the first reference frame, and a second motion vector corresponding to the second reference frame; wherein determining the original motion information as center motion information, determining edge motion information corresponding to the center motion information, obtaining a predicted value of the center motion information and a predicted value of the edge motion information, and determining the target motion information of the current image block from the center motion information and the edge motion information according to the predicted value of the center motion information and the predicted value of the edge motion information, comprises:
determining the first motion vector as a first center motion vector and the second motion vector as a second center motion vector; determining a first edge motion vector corresponding to the first center motion vector and determining a second edge motion vector corresponding to the second center motion vector;
determining a first initial reference block corresponding to the current image block from the first reference frame according to the first center motion vector, and determining a first predicted value of the first initial reference block by adopting a bilinear interpolation mode;
determining a second initial reference block corresponding to the current image block from the second reference frame according to the second center motion vector, and determining a second predicted value of the second initial reference block by adopting a bilinear interpolation mode;
determining a third initial reference block corresponding to the current image block from the first reference frame according to the first edge motion vector, and determining a third predicted value of the third initial reference block by adopting a bilinear interpolation mode;
determining a fourth initial reference block corresponding to the current image block from the second reference frame according to the second edge motion vector, and determining a fourth predicted value of the fourth initial reference block by adopting a bilinear interpolation mode;
searching a first target reference block and a second target reference block based on the first predicted value, the second predicted value, the third predicted value and the fourth predicted value, and determining the motion information of the first target reference block and the motion information of the second target reference block as target motion information of the current image block.
7. The method of claim 6, wherein the searching for the first target reference block and the second target reference block based on the first predicted value, the second predicted value, the third predicted value, and the fourth predicted value includes:
downsampling the first predicted value and the second predicted value, and determining a first similarity by using the downsampled first predicted value and the downsampled second predicted value; downsampling the third predicted value and the fourth predicted value, and determining a second similarity by using the downsampled third predicted value and the downsampled fourth predicted value; wherein the first similarity and the second similarity are SAD;
determining a first target reference block and a second target reference block according to the first similarity and the second similarity; if the first similarity is higher than the second similarity, determining the first initial reference block as a first target reference block, and determining the second initial reference block as a second target reference block; and if the second similarity is higher than the first similarity, determining the third initial reference block as a first target reference block and the fourth initial reference block as a second target reference block.
8. The method of claim 1, wherein the original motion information comprises a first original motion vector and a second original motion vector corresponding to the current image block; the target motion information comprises a first target motion vector and a second target motion vector corresponding to the current image block; wherein:
determining a first target motion vector corresponding to the current image block according to the first original motion vector; determining a difference between the first original motion vector and the first target motion vector; and determining a second target motion vector corresponding to the current image block according to the second original motion vector and the difference value.
9. A decoding end apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the method of any of claims 1-8.