
CN115529459B - Center point searching method, center point searching device, computer equipment and storage medium - Google Patents

Center point searching method, center point searching device, computer equipment and storage medium

Info

Publication number
CN115529459B
CN115529459B (application CN202211232932.XA)
Authority
CN
China
Prior art keywords
center point
coding unit
search
search center
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211232932.XA
Other languages
Chinese (zh)
Other versions
CN115529459A (en)
Inventor
朱传传
邵瑾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Granfei Intelligent Technology Co.,Ltd.
Original Assignee
Glenfly Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Glenfly Tech Co Ltd filed Critical Glenfly Tech Co Ltd
Priority to CN202211232932.XA priority Critical patent/CN115529459B/en
Publication of CN115529459A publication Critical patent/CN115529459A/en
Application granted granted Critical
Publication of CN115529459B publication Critical patent/CN115529459B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/107Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application relates to a center point search method and apparatus, a computer device, a storage medium, and a computer program product. The method comprises the following steps: acquiring an original pixel and a reference pixel; searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units; when the current coding unit is not divided, obtaining a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit, and calculating the current coding unit to obtain a second search center point; and performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit. By adopting the method, a second search center point can be additionally acquired, so that the coding quality is improved.

Description

Center point searching method, center point searching device, computer equipment and storage medium
Technical Field
The present invention relates to the field of video encoding technology, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for searching a center point.
Background
Motion estimation is an extremely important step in video coding. It is used to obtain the best matching block for the current block: the higher the matching degree, the better the coding quality, so the quality of the motion estimation largely determines the coding quality.
Because hardware resources are limited, motion estimation cannot search too many points, so the search area is limited to a certain extent. In the conventional technique, therefore, a search center point is determined first, and the search area is then determined by centering on that point. Matching judgment is performed on the candidate blocks corresponding to all or some of the candidate points in the search area, and an optimal matching point is finally obtained. However, the scope of this search is limited, so the probability of finding the global optimum is not high, and the coding quality is therefore not high.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a center point search method, apparatus, computer device, computer readable storage medium, and computer program product that can additionally acquire a second search center point to improve coding quality.
In a first aspect, the present application provides a method for searching a center point. The method comprises the following steps:
Acquiring an original pixel and a reference pixel;
searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
when the current coding unit is not divided, obtaining a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit, and calculating the current coding unit to obtain a second search center point;
and performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
In one embodiment, the calculating the current coding unit to obtain the second search center point includes:
when the size of the current coding unit is smaller than the preset size, calculating the sum of absolute errors outside the coding unit, and obtaining the second search center point according to the sum of absolute errors outside the coding unit;
and when the size of the current coding unit is equal to the preset size, calculating the sum of absolute errors inside the coding unit, and obtaining the second search center point according to the sum of absolute errors inside the coding unit.
In one embodiment, when the size of the current coding unit is smaller than the preset size, calculating an absolute error sum outside the coding unit, and obtaining the second search center point according to the absolute error sum outside the coding unit, includes:
calculating the minimum absolute error sum of each adjacent coding unit adjacent to the current coding unit and the synthesized coding unit;
and comparing the minimum absolute error sum of each adjacent coding unit with the minimum absolute error sum of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as the second search center point.
In one embodiment, when the size of the current coding unit is equal to the preset size, calculating an absolute error sum inside the coding unit, and obtaining the second search center point according to the absolute error sum inside the coding unit, includes:
calculating the minimum absolute error sum and the corresponding motion vector of each partitioned coding unit in the current coding unit;
and taking the median value of the motion vectors corresponding to the minimum absolute error sums of the partitioned coding units as the second search center point.
In one embodiment, performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit includes:
when the first search center point is not equal to the second search center point, taking the first search center point as the target first search center point and taking the second search center point as the target second search center point;
and when the first search center point is equal to the second search center point, taking the first search center point as the target first search center point, and performing quadrant segmentation based on the target first search center point to obtain the target second search center point.
In one embodiment, quadrant segmentation is performed based on the target first search center point to obtain the target second search center point, which includes:
taking the target first search center point as an origin, and performing quadrant segmentation with a preset step length to obtain a plurality of initial second search center points;
and obtaining the target second search center point from a plurality of initial second search center points according to the quadrant of the predicted motion vector of the target first search center point.
In one embodiment, the method further comprises:
when the current coding unit is divided and the number of divisions is the target number, the motion vectors of the prediction units of the current coding unit are respectively used as the target first search center point and the target second search center point.
In a second aspect, the present application provides a motion estimation method based on center point search. The method comprises the following steps:
obtaining a target first search center point and a target second search center point according to the center point searching method in any one of the embodiments;
performing intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
obtaining a first prediction result according to the target first search center point and the target second search center point;
and comparing the first prediction result with the second prediction result to obtain a target prediction result.
In a third aspect, the present application further provides a center point searching apparatus. The device comprises:
the acquisition module is used for acquiring original pixels and reference pixels;
the searching module is used for searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
The computing module is used for obtaining a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit when the current coding unit is not divided, and computing the current coding unit to obtain a second search center point;
and the duplicate checking module is used for performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
In a fourth aspect, the present application further provides a motion estimation apparatus based on center point search. The device comprises:
the center point obtaining module is configured to obtain a target first search center point and a target second search center point according to the center point searching device in any one of the foregoing embodiments;
the intra-frame prediction module is used for carrying out intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
the inter-frame prediction module is used for obtaining a first prediction result according to the target first search center point and the target second search center point;
and the comparison module is used for comparing the first prediction result and the second prediction result to obtain a target prediction result.
In a fifth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the method of any of the embodiments described above when the processor executes the computer program.
In a sixth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the embodiments described above.
In a seventh aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of the embodiments described above.
According to the center point searching method and apparatus, the computer device, the storage medium, and the computer program product, the encoder first searches according to the acquired original pixels and reference pixels to obtain a plurality of CUs and PUs. When the current CU is not divided, the first search center point corresponding to the current CU is obtained based on the PU corresponding to the current CU, and the second search center point is obtained by calculating on the current CU. Finally, duplicate checking is performed on the first search center point and the second search center point to obtain the target first search center point and the target second search center point corresponding to the current coding unit. By obtaining a second target search center point, the encoder appropriately enlarges the search range, which increases the probability of finding the globally optimal matching point and thus improves the coding quality. Moreover, by checking the first search center point against the second search center point before obtaining the target search center points, the encoder guarantees that two distinct search center points are obtained and avoids repeated searching that would waste hardware resources.
Drawings
FIG. 1 is a flow diagram of a method of center point search in one embodiment;
FIG. 2 is an external schematic diagram of an encoding unit in one embodiment;
FIG. 3 is a schematic diagram of the interior of a coding unit in one embodiment;
FIG. 4 is a diagram of a center point search in one embodiment;
FIG. 5 is a flow diagram of a method of motion estimation based on center point search in one embodiment;
FIG. 6 is a schematic diagram of motion estimation in one embodiment;
FIG. 7 is a schematic diagram of a fine search in one embodiment;
FIG. 8 is a schematic diagram of coding unit partitioning in one embodiment;
FIG. 9 is a block diagram of a center point search device in one embodiment;
FIG. 10 is a block diagram of a motion estimation device based on center point search in one embodiment;
fig. 11 is an internal structural diagram of an encoder in one embodiment.
Description of the embodiments
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a center point searching method, including the steps of:
S102, acquiring original pixels and reference pixels.
The original pixel refers to the unprocessed image to be encoded; it may be a single frame to be processed or a video composed of multiple frames. The reference pixel refers to an image that is referenced during encoding. To achieve compression during video coding, parts of previously coded images are buffered, and new images are generated by combining these buffered images with motion vectors; these buffered images are referred to as reference images. The reference pixel may be an image of a previously encoded frame.
Alternatively, the reference pixel may be selected from the original pixels by a preset selection method. The preset selection method is a preset method for selecting a reference pixel from original pixels, and can be specifically set in combination with a specific application scene. For example, the preset selection method may be a pre-trained reference pixel selection model based on deep learning, and the best reference pixel can be obtained through the model, so that the subsequent motion estimation effect can be improved.
Optionally, the original pixels and reference pixels acquired by the encoder are encrypted, so the encoder needs to decrypt them first before subsequent processing. The encryption modes include MD5 and SHA1, which ensures the security of the original pixels and reference pixels.
S104, searching the original pixels based on the reference pixels to obtain a plurality of coding units and prediction units corresponding to the coding units.
The coding unit (CU) is the basic unit of predictive coding, with a size of 8×8, 16×16, 32×32, or 64×64.
The prediction unit (PU) is obtained by dividing the coding unit; one CU may be divided into a plurality of PUs according to the partition type of the prediction mode.
Alternatively, the original pixel may be searched based on the reference pixel by a coarse search, which determines a relatively large search range and then searches several matching points at a certain step size within that range to roughly estimate the CU and PU partitions. The coarse search yields a plurality of CUs corresponding to the original pixel and the reference pixel, together with the PUs corresponding to the CUs; the partition modes and numbers of the CUs and PUs of the original pixel and the reference pixel are consistent.
Illustratively, the coarse search determines a CU partition scheme for a 64×64 CTU, e.g., the CTU is partitioned into n CUs, each CU has a preliminary PU partition scheme, and each PU has a rough motion vector (MV).
S106, when the current coding unit is not divided, obtaining a first search center point corresponding to the current coding unit based on a prediction unit corresponding to the current coding unit, and calculating the current coding unit to obtain a second search center point.
After obtaining a plurality of CUs, the encoder traverses each CU to calculate, and a first search center point and a second search center point are obtained. First, the encoder determines whether the CU is divided, and if the current CU is not divided, i.e., the PU size is equal to the CU size, the encoder uses the MV of the coarse search of PU0 (i.e., the current CU because the PU and the CU are equal in size at this time) as the first search center point of the current CU.
Alternatively, the encoder may select the coarse-search optimal MV of PU0 as the center point of the fine search of the current CU, i.e., the first search center point.
The encoder then proceeds with the computation based on the current CU to obtain a second search center point. Optionally, the encoder calculates the corresponding minimum sum of absolute differences (SAD) on the current CU to determine the second search center point.
S108, performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
The target first search center point and the target second search center point are the search center points that are finally used for motion estimation after the first search center point and the second search center point have been duplicate-checked.
The encoder performs duplicate checking on the first search center point and the second search center point, i.e., judges whether the first search center point is equal to the second search center point. Then, according to the duplicate checking result, i.e., whether the two points are equal, the target first search center point and the target second search center point are obtained on the basis of the first search center point and the second search center point.
For example, when the duplicate checking result is that the two points are not equal, the first search center point may be taken as the target first search center point and the second search center point as the target second search center point. When the result is that they are equal, a new second search center point needs to be determined based on the position of the first search center point in order to avoid repeated searching that would waste hardware resources, thereby obtaining the target first search center point and the target second search center point, as sketched below.
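As a non-authoritative illustration, the following C++ sketch captures the duplicate-checking decision just described. The MV struct, the function names, and the deriveSecondCenter helper (a quadrant-segmentation sketch of it appears later, alongside the embodiment describing quadrant segmentation) are assumptions for exposition only, not identifiers from the patent.

```cpp
#include <utility>

// Illustrative MV type for exposition; field names are assumptions.
struct MV {
    int x, y;
    bool operator==(const MV& o) const { return x == o.x && y == o.y; }
};

// Derives a replacement second center when the two candidates coincide
// (quadrant-segmentation sketch given later in the document).
MV deriveSecondCenter(const MV& c0, const MV& mvp, int step);

// Duplicate check: keep both centers if distinct; otherwise derive a new
// second center from the first center's position and the predicted MV.
std::pair<MV, MV> checkDuplicate(const MV& first, const MV& second,
                                 const MV& mvp, int step) {
    if (!(first == second))
        return {first, second};
    return {first, deriveSecondCenter(first, mvp, step)};
}
```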
In the above center point searching method, the encoder searches according to the acquired original pixels and reference pixels to obtain a plurality of CUs and PUs. When the current CU is not divided, the first search center point corresponding to the current CU is obtained based on the PU corresponding to the current CU, and the second search center point is obtained by calculating on the current CU. Finally, duplicate checking is performed on the first search center point and the second search center point to obtain the target first search center point and the target second search center point corresponding to the current coding unit. By obtaining a second target search center point, the encoder appropriately enlarges the search range, which increases the probability of finding the globally optimal matching point and thus improves the coding quality. Moreover, by checking the first search center point against the second search center point, the encoder guarantees that two distinct search center points are obtained and avoids repeated searching that would waste hardware resources.
In one embodiment, calculating the current coding unit to obtain the second search center point includes: when the size of the current coding unit is smaller than the preset size, calculating the sum of absolute errors outside the coding unit, and obtaining a second search center point according to the sum of absolute errors outside the coding unit; and when the size of the current coding unit is equal to the preset size, calculating the absolute error sum inside the coding unit, and obtaining a second search center point according to the absolute error sum inside the coding unit.
The absolute error sum outside the coding unit refers to the absolute error sums of CUs outside the current CU, and may include the absolute error sums of the CUs adjacent to the current CU and the absolute error sum of the synthesized coding unit formed by the current CU and the adjacent CUs. As shown in fig. 2, fig. 2 is an external schematic diagram of a coding unit in one embodiment: when the current CU is CU0, that is, M×M_0 in the figure, the surrounding CUs are obtained and a synthesized coding unit may be formed; the absolute error sum outside the coding unit then includes the absolute error sums of the neighboring CUs around CU0 and the absolute error sum of the synthesized coding unit.
The absolute error sum inside the coding unit refers to the absolute error sums corresponding to the partitioned coding units after the current CU is divided. Referring to fig. 3, fig. 3 is a schematic diagram of the inside of a coding unit in one embodiment, where the coding unit includes partition units produced by three partition modes, namely (M/2)×(M/2)_0, M×(M/2)_0, and (M/2)×M_0; the absolute error sum inside the coding unit then includes the absolute error sums of (M/2)×(M/2)_0, M×(M/2)_0, and (M/2)×M_0.
When the current CU is smaller than the preset size, there are 3 other CUs of the same size around the current CU, and the current CU and the 3 adjacent CUs can form a synthesized coding unit. In this case, the absolute error sum outside the coding unit is calculated to obtain the second search center point corresponding to the current CU. This works because the motion trend of the current CU is strongly correlated with its surrounding CUs, so the second search center point of the current CU can be obtained by calculating the sum of absolute errors outside the coding unit.
When the current CU is equal to the preset size, i.e., the current CU is as large as the CTU, the surrounding CUs may belong to neighboring CTUs that may not have been encoded yet, so the second search center point is obtained by calculating the absolute error sum inside the coding unit.
In the above embodiment, by judging whether the size of the current CU is equal to the preset size, the encoder decides whether to calculate the absolute error sum outside or inside the coding unit, so as to obtain a more accurate second search center point corresponding to the current CU.
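A minimal sketch of this size-based dispatch, assuming the preset size is the CTU width (64) and reusing the illustrative MV struct from the earlier sketch; both helper names are placeholders for the outside/inside computations detailed in the following embodiments.

```cpp
// Assumed preset size: the CTU width. The helpers are placeholders for the
// outside/inside SAD computations sketched in the embodiments below.
constexpr int kPresetSize = 64;

MV secondCenterFromOutsideSad(int cuSize);  // neighboring CUs + synthesized CU
MV secondCenterFromInsideSad(int cuSize);   // partitions inside the CU

MV computeSecondCenter(int cuSize) {
    return (cuSize < kPresetSize) ? secondCenterFromOutsideSad(cuSize)
                                  : secondCenterFromInsideSad(cuSize);
}
```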
In one embodiment, when the size of the current coding unit is smaller than the preset size, calculating an absolute error sum outside the coding unit, and obtaining the second search center point according to the absolute error sum outside the coding unit, includes: calculating the minimum absolute error sum of each adjacent coding unit adjacent to the current coding unit and of the synthesized coding unit; and comparing the minimum absolute error sum of each adjacent coding unit with the minimum absolute error sum of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as the second search center point.
The encoder calculates the minimum absolute error sum of the current CU, each adjacent CU and the synthesized coding unit respectively, compares the minimum absolute error sum of each adjacent coding unit with the minimum absolute error sum of the synthesized coding unit, can obtain a target coding unit according to the comparison result, and finally takes the MV of the target coding unit as a second search center point.
Optionally, the encoder screens the minimum absolute error sums of the adjacent coding units and of the synthesized coding unit by a preset screening method to obtain the target coding unit. For example, the preset screening method may select the unit with the smallest absolute error value as the target coding unit; that is, the minimum error sums of the adjacent CUs and of the synthesized coding unit are compared, and the unit corresponding to the smallest value is selected as the target coding unit.
Optionally, the encoder may preprocess the synthesized coding unit during the comparison: the size of the synthesized coding unit is 2M×2M while the other neighboring CUs are M×M, so a direct comparison would be unfair to the neighboring CUs, and the synthesized coding unit therefore requires preprocessing. Illustratively, the value of the synthesized coding unit needs to be scaled down by a factor of 4 before being compared with the minimum absolute error sums of the other neighboring coding units.
In other embodiments, the encoder may scale by the area ratio of the synthesized coding unit to the neighboring coding units; for example, if the synthesized coding unit has a size of 4M×4M and the other neighboring coding units are M×M, then the value of the synthesized coding unit needs to be scaled down by a factor of 16.
In the above embodiment, by comparing the minimum absolute error sum of each adjacent coding unit and the minimum absolute error sum of the synthesized coding unit, an accurate second search center point can be obtained in combination with the motion trend of the current CU and the surrounding CUs.
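The normalization-and-compare step can be sketched as follows, assuming unsigned SAD values; the right shift by 2 divides the synthesized 2M×2M unit's SAD by 4 so it is comparable with the M×M neighbors. All variable names are illustrative.

```cpp
#include <cstdint>

// Returns the index of the target coding unit (0..2 = neighboring M×M CUs,
// 3 = synthesized 2M×2M CU) whose coarse-search MV becomes the second
// search center point.
int pickTargetUnit(uint32_t sad0, uint32_t sad1, uint32_t sad2, uint32_t sad3) {
    uint32_t cand[4] = {sad0, sad1, sad2, sad3 >> 2};  // normalize 2M×2M by 4
    int best = 0;
    for (int i = 1; i < 4; ++i)
        if (cand[i] < cand[best]) best = i;
    return best;
}
```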
In one embodiment, when the size of the current coding unit is equal to the preset size, calculating the sum of absolute errors inside the coding unit, and obtaining the second search center point according to the sum of absolute errors inside the coding unit, includes: calculating the minimum absolute error sum and the corresponding motion vector of each partitioned coding unit in the current coding unit; and taking the median value of the motion vectors corresponding to the minimum absolute error sums of the partitioned coding units as the second search center point.
The encoder first calculates the minimum absolute error sum and the corresponding motion vector of each partitioned coding unit in the current coding unit, and takes the median value of the MVs corresponding to those minimum absolute error sums as the second search center point.
Illustratively, continuing with FIG. 3, the encoder obtains the MV corresponding to the minimum SAD of (M/2)×(M/2)_0 within the current M×M CU, denoted MV1; obtains the MV corresponding to the minimum SAD of M×(M/2)_0 within the current M×M CU, denoted MV2; obtains the MV corresponding to the minimum SAD of (M/2)×M_0 within the current M×M CU, denoted MV3; and takes the median of MV1, MV2, and MV3, denoted candMV, i.e., candMV = median(MV1, MV2, MV3), as the second search center point.
In the above embodiment, when the current CU is as large as the CTU, the encoder can obtain an accurate second search center point by exploiting the correlation between the current CU and its internal partitions.
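A sketch of the median step, assuming the median is taken per component (a common convention for median MV computation; the patent text only says "median value"). The MV struct is the illustrative one from the earlier sketch.

```cpp
#include <algorithm>

// Median of three scalars.
int median3(int a, int b, int c) {
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}

// candMV = median(MV1, MV2, MV3), taken component-wise (an assumption).
MV medianMV(const MV& mv1, const MV& mv2, const MV& mv3) {
    return { median3(mv1.x, mv2.x, mv3.x),
             median3(mv1.y, mv2.y, mv3.y) };
}
```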
In one embodiment, performing duplicate checking on the first search center point and the second search center point to obtain the target first search center point and the target second search center point corresponding to the current coding unit includes: when the first search center point is not equal to the second search center point, taking the first search center point as the target first search center point and the second search center point as the target second search center point; and when the first search center point is equal to the second search center point, taking the first search center point as the target first search center point, and performing quadrant segmentation based on the target first search center point to obtain the target second search center point.
After the encoder obtains the first search center point and the second search center point, it judges whether they are equal. If not, the first search center point is used as the target first search center point and the second search center point as the target second search center point. If they are equal, a new second search center point needs to be found as the target second search center point. The encoder then proceeds to the next CU and searches for its corresponding target first and second search center points, until all CUs have been traversed, i.e., corresponding target first and second search center points have been found for all CUs.
When searching for the target second search center point, the encoder performs quadrant segmentation based on the target first search center point. Optionally, the quadrant segmentation may use a preset step size together with the target first search center point. For example, the encoder may construct a coordinate system with the first search center point as the origin, segment the coordinate system into quadrants with the preset step size, and obtain the second search center point from the position of the predicted motion vector of the first search center point.
In the above embodiment, the encoder obtains the target first search center point and the target second search center point after duplicate checking the first search center point and the second search center point, so repeated searching, which would waste hardware resources, can be avoided when the first search center point equals the second search center point.
In one embodiment, performing quadrant segmentation based on the target first search center point to obtain the target second search center point includes: taking the target first search center point as an origin, and performing quadrant segmentation with a preset step length to obtain a plurality of initial second search center points; and obtaining a target second search center point from the plurality of initial second search center points according to the quadrant of the predicted motion vector of the target first search center point.
Specifically, when the encoder performs quadrant segmentation based on the target first search center point to obtain the target second search center point, it first takes the target first search center point as the origin to obtain a corresponding coordinate system, and then segments the quadrants using the search step size of the coarse search as the preset step size, obtaining a plurality of initial second search center points. The target second search center point is then obtained from the initial second search center points according to the quadrant in which the predicted motion vector of the target first search center point lies.
Illustratively, in connection with FIG. 4, a center point search schematic in one embodiment: assume the search step size of the coarse search is n, C0 is the first search center point of the current CU with coordinates (x, y), and MVP is the predicted MV of the current CU. Since the MVP reflects the motion trend of the current CU to some extent, the relationship of the MVP to C0 can be used to determine the second search center point. A coordinate system is drawn with C0 as the center point, and the quadrants are divided with n as the step size, yielding a plurality of initial second search center points, namely C1, C2, C3, and C4. The target second search center point is then obtained from C1, C2, C3, and C4 according to the position of the MVP of the current CU in the coordinate system. For example, if the MVP of the current CU is in the first quadrant of C0, C1 is taken as the second search center point; if in the second quadrant, C2 is taken; if in the third quadrant, C3 is taken; and if in the fourth quadrant, C4 is taken.
In the above embodiment, since the MVP may reflect the motion trend of the current CU, a more accurate target second search center point may be obtained by using the MVP of the current CU.
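The quadrant mapping can be sketched as below, filling in the deriveSecondCenter helper declared in the earlier sketch. The diagonal placement of C1..C4 at offset n from C0 and the handling of an MVP lying exactly on an axis are assumptions; fig. 4 of the patent defines the exact geometry.

```cpp
// c0 is the target first search center point, mvp the predicted MV of the
// current CU, n the coarse-search step. The quadrant of the MVP relative to
// c0 selects one of four diagonal candidates C1..C4 (placement assumed).
MV deriveSecondCenter(const MV& c0, const MV& mvp, int n) {
    const int dx = mvp.x - c0.x;
    const int dy = mvp.y - c0.y;
    if (dx >= 0 && dy >= 0) return {c0.x + n, c0.y + n};  // C1: first quadrant
    if (dx <  0 && dy >= 0) return {c0.x - n, c0.y + n};  // C2: second quadrant
    if (dx <  0 && dy <  0) return {c0.x - n, c0.y - n};  // C3: third quadrant
    return {c0.x + n, c0.y - n};                          // C4: fourth quadrant
}
```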
In one embodiment, the method further comprises: when the current coding unit is divided and the dividing number is the target number, the motion vector of the prediction unit of the current coding unit is respectively used as a target first search center point and a target second search center point.
Illustratively, if in the coarse search phase the current CU is determined to be partitioned into 2 PUs, as shown in fig. 8, a schematic diagram of coding unit partitioning in one embodiment, then the coarse-search optimal MV of PU0 is taken as the first search center point of the fine search of the current CU, and the coarse-search optimal MV of PU1 is taken as the second search center point of the fine search of the current CU.
In one embodiment, as shown in fig. 5, there is provided a motion estimation method based on a center point search, including the steps of:
S502, obtaining a target first search center point and a target second search center point according to the center point searching method in any one of the above embodiments.
The encoder can obtain the target first search center points and the target second search center points corresponding to all CUs through the center point searching method in any one of the embodiments.
S504, carrying out intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result.
Intra prediction refers to predicting pixels of a current block in the current frame using boundary pixels adjacent to the reconstructed block as reference pixels.
Alternatively, a horizontal or vertical mode may be used to perform the intra-frame search to obtain the second prediction result, where the second prediction result is the prediction result obtained by performing intra-frame prediction on the original pixel.
S506, obtaining a first prediction result according to the target first search center point and the target second search center point.
The encoder determines search areas centered on the target first search center point and the target second search center point. Matching judgment is then performed on the candidate blocks corresponding to all or some of the candidate points in the search areas, finally obtaining an optimal matching point (or the matching block corresponding to that point). Here, optimal means that the coding cost is minimal.
S508, comparing the first prediction result with the second prediction result to obtain a target prediction result.
The encoder compares the first prediction result with the second prediction result, and then selects a mode with better effect to perform motion estimation to obtain a final target prediction result.
For example, if the first prediction result obtained by performing motion estimation according to the target first search center point and the target second search center point is better than the second prediction result obtained by performing intra-frame prediction on the original pixel, the encoder may select the target first search center point and the target second search center point to perform motion estimation, and the target prediction result is the first prediction result. Otherwise, the encoder adopts an intra-frame prediction mode to perform motion estimation, and the corresponding target prediction result is a second prediction result.
In one embodiment, 21 video sequences were tested; with the second search start point acquired, the probability of the search area containing the global optimum increases significantly. As a result, the coding efficiency improves by up to 10.21%, with an average improvement of 2.48% across the video sequences.
In the above embodiment, the encoder selects the better mode for motion estimation by comparing the first prediction result with the second prediction result, thereby improving the coding efficiency.
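The final decision reduces to a cost comparison, as the following sketch shows; the Prediction struct and its single cost field are assumptions, whereas a real encoder would compare full rate-distortion costs.

```cpp
#include <cstdint>

// Illustrative prediction record; a real encoder would also carry the
// predicted block, MVs/modes, and residual information.
struct Prediction {
    uint64_t cost;  // coding cost; lower is better
};

// Target result = whichever of the inter result (first prediction result)
// and the intra result (second prediction result) has the lower cost.
const Prediction& chooseTarget(const Prediction& inter, const Prediction& intra) {
    return (inter.cost <= intra.cost) ? inter : intra;
}
```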
In an exemplary embodiment, as shown in fig. 6, fig. 6 is a schematic diagram of motion estimation in one embodiment.
First, the original pixels and reference pixels are read from memory; then, intra-frame prediction and inter-frame prediction are performed on the original pixels, where inter-frame prediction includes a coarse search and a fine search; finally, the results of intra-frame prediction and inter-frame prediction are compared to obtain the final motion estimation result.
First, a coarse search is performed. After the coarse search, a CU partition scheme of the 64×64 CTU (with CU sizes 16×16, 32×32, and 64×64) is determined; e.g., the CTU is divided into n CUs, each CU has a preliminary PU partition scheme, and each PU has a rough MV.
Then, the fine search phase is entered. Referring to fig. 7, fig. 7 is a schematic diagram of a fine search in one embodiment. First, it is determined whether the CU is partitioned. As shown in fig. 8, fig. 8 is a schematic diagram of coding unit partitioning in one embodiment. If the current CU was determined in the coarse search stage to be divided into 2 PUs, as shown in fig. 8, the coarse-search optimal MV of PU0 is taken as the first search center point of the fine search of the current CU, and the coarse-search optimal MV of PU1 is taken as the second search center point of the fine search of the current CU. If the coarse search stage determined that the current CU is not divided, i.e., the PU size is equal to the CU size, then the coarse-search optimal MV of PU0 is taken as the first search center point of the fine search of the current CU, and a second search center point is obtained as follows.
When M<64, i.e., M=8/16/32, there are 3 other CUs of the same size around the current CU, and the current CU and the 3 surrounding CUs may constitute a larger 2M×2M CU, as shown in fig. 2. Since the motion trend of the current CU is strongly correlated with its surrounding CUs, the motion information of the surrounding CUs can be used to obtain the second search center point of the current CU. The specific method is as follows:
1) Obtain the minimum SAD (Sum of Absolute Differences) values of the 3 other M×M CUs surrounding the current M×M CU, denoted SAD0/SAD1/SAD2 respectively.
The calculation formula of SAD is as follows:

$$\mathrm{SAD} = \sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl|CB(i,j)-RB(i,j)\bigr|$$

where M×N is the pixel block size over which the SAD is calculated, CB is the current pixel block to be encoded, CB(i,j) is the pixel value at coordinates (i,j) in the current pixel block, RB is the reference pixel block, and RB(i,j) is the pixel value at coordinates (i,j) in the reference pixel block. (A code sketch of this formula is given after step 3) below.)
2) Obtain the minimum SAD value of the 2M×2M CU where the current M×M CU is located, denoted SAD3.
3) Compare the SADs: since the CU size corresponding to SAD3 is 2M×2M while the CU sizes corresponding to the other SADs are M×M, directly comparing SADs of CUs of different sizes would be unfair, so the SAD3 value needs to be reduced by a factor of 4; i.e., SAD0/SAD1/SAD2/(SAD3>>2) are compared, and the coarse-search optimal MV of the CU with the minimum value is taken, denoted candMV.
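A direct transcription of the SAD formula above, as promised; the 8-bit pixel type and the shared row stride are implementation assumptions.

```cpp
#include <cstdint>
#include <cstdlib>

// SAD over an M×N block. cb/rb point to the top-left pixels of the current
// and reference blocks; stride is the row pitch of both buffers.
uint32_t sad(const uint8_t* cb, const uint8_t* rb, int M, int N, int stride) {
    uint32_t sum = 0;
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j)
            sum += std::abs(int(cb[i * stride + j]) - int(rb[i * stride + j]));
    return sum;
}
```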
When M=64, since the current CU is as large as the CTU, its surrounding CUs may belong to neighboring CTUs, which may not have started encoding yet. Thus, the second search center point is obtained using the correlation of the current CU's motion trend with its internal smaller CUs/PUs.
With continued reference to fig. 3, the specific method for obtaining the second search center point for a 64×64 CU is as follows:
1) Obtain the MV corresponding to the minimum SAD of (M/2)×(M/2)_0 in the current M×M CU, denoted MV1.
2) Obtain the MV corresponding to the minimum SAD of M×(M/2)_0 in the current M×M CU, denoted MV2.
3) Obtain the MV corresponding to the minimum SAD of (M/2)×M_0 in the current M×M CU, denoted MV3.
4) Take the median of MV1, MV2, and MV3, denoted candMV, i.e., candMV = median(MV1, MV2, MV3).
Duplicate checking is then performed on the first search center point and the second search center point to obtain the target first search center point and the target second search center point. If candMV is not equal to the first search center point of the current CU, candMV is taken as the second search center point of the current CU. Otherwise, if the two are equal, a new second search center point needs to be found in order to avoid repeated searching that would waste hardware resources. Assume the search step size of the coarse search is n, C0 is the first search center point of the current CU with coordinate value (x, y), and MVP is the predicted MV of the current CU. Since the MVP reflects the motion trend of the current CU to some extent, the relationship of the MVP to C0 can be used to determine the second search center point. Continuing with fig. 4:
1) If the MVP of the current CU is in the first quadrant of C0, then C1 is taken as the second search center point.
2) If the MVP of the current CU is in the second quadrant of C0, then C2 is taken as the second search center point.
3) If the MVP of the current CU is in the third quadrant of C0, then C3 is taken as the second search center point.
4) If the MVP of the current CU is in the fourth quadrant of C0, then C4 is taken as the second search center point.
In the above embodiment, the second search center point is additionally obtained, so as to appropriately increase the search range, thereby increasing the probability of searching for the globally optimal matching point, and further improving the encoding quality.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiments of the present application also provide a center point search apparatus for implementing the above center point search method, and a motion estimation apparatus based on center point search for implementing the above motion estimation method based on center point search. The implementation of the solution provided by these apparatuses is similar to the implementation described in the method above, so for the specific limitations in the embodiments of the center point search apparatus and the motion estimation apparatus based on center point search provided below, reference may be made to the limitations of the center point search method and the motion estimation method based on center point search described above, which are not repeated here.
In one embodiment, as shown in fig. 9, there is provided a center point search apparatus including: an acquisition module 100, a search module 200, a calculation module 300, and a duplicate checking module 400, wherein:
the acquiring module 100 is configured to acquire an original pixel and a reference pixel.
The searching module 200 is configured to search the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the plurality of coding units.
And the calculating module 300 is configured to obtain a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit when the current coding unit is not divided, and calculate the current coding unit to obtain a second search center point.
And the duplicate checking module 400 is used for performing duplicate checking on the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit.
In one embodiment, the computing module 300 includes:
and the external computing unit is used for computing the sum of absolute errors outside the coding unit when the size of the current coding unit is smaller than the preset size, and obtaining a second search center point according to the sum of absolute errors outside the coding unit.
And the internal calculation unit is used for calculating the absolute error sum inside the coding unit when the size of the current coding unit is equal to the preset size, and obtaining a second search center point according to the absolute error sum inside the coding unit.
In one embodiment, the external computing unit includes:
and the error calculation subunit is used for calculating the minimum absolute error sum of each adjacent coding unit adjacent to the current coding unit and the synthesized coding unit.
And the error comparison unit is used for comparing the minimum absolute error sum of each adjacent coding unit and the minimum absolute error sum of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as a second search center point.
In one embodiment, the internal computing unit includes:
and the third error calculation subunit is used for calculating the minimum absolute error and the corresponding motion vector of each divided coding unit in the current coding unit.
And the error processing subunit is used for taking the median value of the motion vectors corresponding to the minimum absolute error sums of the partitioned coding units as the second search center point.
In one embodiment, the duplicate checking module 400 includes:
and the first searching unit is used for taking the first searching center point as a target first searching center point and taking the second searching center point as a target second searching center point when the first searching center point is not equal to the second searching center point.
And the second searching unit is used for taking the first searching center point as a target first searching center point when the first searching center point is equal to the second searching center point, and performing quadrant segmentation based on the target first searching center point to obtain a target second searching center point.
In one embodiment, the second search unit includes:
the segmentation unit is used for taking the target first search center point as an origin, and performing quadrant segmentation with a preset step length to obtain a plurality of initial second search center points.
And the target acquisition unit is used for acquiring target second search center points from the plurality of initial second search center points according to the quadrant where the predicted motion vector of the target first search center point is located.
In one embodiment, the apparatus further comprises:
and the dividing module is used for taking the motion vector of the prediction unit of the current coding unit as a target first search center point and a target second search center point respectively when the current coding unit is divided and the dividing number is the target number.
In one embodiment, as shown in fig. 10, there is provided a motion estimation apparatus based on center point search, including: a center point acquisition module 500, an intra-frame prediction module 600, an inter-frame prediction module 700, and a comparison module 800.
The center point obtaining module 500 is configured to obtain the target first search center point and the target second search center point according to the center point searching apparatus in any one of the foregoing embodiments.
The intra-frame prediction module 600 is configured to perform intra-frame prediction on the original pixel based on the reference pixel, so as to obtain a second prediction result.
The inter-frame prediction module 700 is configured to obtain a first prediction result according to the target first search center point and the target second search center point.
The comparison module 800 is configured to compare the first prediction result and the second prediction result to obtain a target prediction result.
The modules in the above center point search apparatus and motion estimation apparatus based on center point search may be implemented in whole or in part by software, hardware, or combinations thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be an encoder, the internal structure of which may be as shown in FIG. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used to store raw pixel as well as reference pixel data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of center point search.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of a portion of the structure related to the solution of the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the method in any of the above embodiments when executing the computer program.
In an embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method of any of the embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method of any of the embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program instructing the relevant hardware, the computer program being stored on a non-volatile computer-readable storage medium and, when executed, performing the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM) or external cache memory, among others. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments express only a few implementations of the present application, and while they are described in specific detail, they are not to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A method of center point search, the method comprising:
acquiring an original pixel and a reference pixel;
searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
when the current coding unit is not divided, obtaining a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit, and calculating the current coding unit to obtain a second search center point;
re-searching the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit; and
when the current coding unit is divided and the number of divisions is a target number, taking the motion vector of the prediction unit of the current coding unit as the target first search center point and the target second search center point, respectively.
2. The method of claim 1, wherein calculating the current coding unit to obtain the second search center point comprises:
when the size of the current coding unit is smaller than a preset size, calculating an absolute error sum outside the coding unit, and obtaining the second search center point according to the absolute error sum outside the coding unit; the absolute error sum outside the coding unit is used for representing the absolute error sum of coding units outside the current coding unit;
when the size of the current coding unit is equal to the preset size, calculating an absolute error sum inside the coding unit, and obtaining the second search center point according to the absolute error sum inside the coding unit; the absolute error sum inside the coding unit is used for representing the absolute error sum corresponding to each partitioned coding unit after the current coding unit is partitioned.
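A minimal Python sketch of the size-based dispatch in claim 2, assuming a preset size of 32 and treating the two computation paths as injected callables; the claim itself fixes neither the threshold nor these names:

PRESET_SIZE = 32  # assumed threshold; the claim leaves the value open

def second_center_for(cu_size, outside_center_fn, inside_center_fn):
    """Choose the outside-SAD path below the preset size, the inside-SAD path at it."""
    if cu_size < PRESET_SIZE:
        return outside_center_fn()  # neighbouring units path, cf. claim 3
    return inside_center_fn()       # partitioned sub-units path, cf. claim 4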
3. The method according to claim 2, wherein calculating the absolute error sum outside the coding unit when the size of the current coding unit is smaller than the preset size, and obtaining the second search center point according to the absolute error sum outside the coding unit, comprises:
calculating the minimum absolute error sum of each adjacent coding unit adjacent to the current coding unit and of the synthesized coding unit; the synthesized coding unit consists of the current coding unit and an adjacent coding unit adjacent to the current coding unit;
and comparing the minimum absolute error sum of each adjacent coding unit with the minimum absolute error sum of the synthesized coding unit to obtain a target coding unit, and taking the motion vector of the target coding unit as the second search center point.
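One hedged reading of claim 3 in Python: compare the minimum SAD of every adjacent unit and of the synthesized unit, and take the winner's motion vector. The CodingUnit container and the strict-minimum tie-breaking are assumptions of this sketch.

from dataclasses import dataclass

@dataclass
class CodingUnit:
    min_sad: int  # smallest absolute error sum found for this unit
    mv: tuple     # motion vector achieving that minimum

def outside_second_center(neighbors, synthesized):
    """Return the motion vector of the unit with the smallest minimum SAD."""
    target = min([*neighbors, synthesized], key=lambda cu: cu.min_sad)
    return target.mv

For example, with neighbors [CodingUnit(120, (1, 0)), CodingUnit(95, (2, -1))] and a synthesized CodingUnit(88, (1, -1)), the sketch yields (1, -1).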
4. The method according to claim 2, wherein calculating the absolute error sum inside the coding unit when the size of the current coding unit is equal to the preset size, and obtaining the second search center point according to the absolute error sum inside the coding unit, comprises:
calculating the minimum absolute error sum of each partitioned coding unit in the current coding unit and the corresponding motion vector;
and taking the median value of the motion vectors corresponding to the minimum absolute error sums of the partitioned coding units as the second search center point.
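The median step of claim 4 could look like the sketch below, which takes the component-wise median of the partitions' best motion vectors; reading "median value" component-wise, and returning possibly fractional coordinates, are assumptions of this illustration.

from statistics import median

def inside_second_center(partition_mvs):
    """Component-wise median over the partitioned units' best motion vectors."""
    xs = [mv[0] for mv in partition_mvs]
    ys = [mv[1] for mv in partition_mvs]
    return (median(xs), median(ys))

With four partitions reporting (1, 0), (2, -1), (1, -1) and (3, 0), this returns (1.5, -0.5); a hardware encoder would presumably round to its motion vector precision.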
5. The method of claim 1, wherein re-searching the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit comprises:
when the first search center point is not equal to the second search center point, taking the first search center point as the target first search center point and taking the second search center point as the target second search center point;
when the first search center point is equal to the second search center point, taking the first search center point as the target first search center point, taking the target first search center point as an origin, and performing quadrant segmentation with a preset step length to obtain a plurality of initial second search center points; and obtaining the target second search center point from the plurality of initial second search center points according to the quadrant in which the predicted motion vector of the target first search center point is located.
6. A method of motion estimation based on center point search, the method comprising:
obtaining a target first search center point and a target second search center point according to the center point searching method of any one of claims 1-5;
performing intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
obtaining a first prediction result according to the target first search center point and the target second search center point;
and comparing the first prediction result with the second prediction result to obtain a target prediction result.
7. A center point search apparatus, the apparatus comprising:
the acquisition module is used for acquiring original pixels and reference pixels;
the searching module is used for searching the original pixel based on the reference pixel to obtain a plurality of coding units and prediction units corresponding to the coding units;
the computing module is used for obtaining a first search center point corresponding to the current coding unit based on the prediction unit corresponding to the current coding unit when the current coding unit is not divided, and computing the current coding unit to obtain a second search center point;
the re-search module is used for re-searching the first search center point and the second search center point to obtain a target first search center point and a target second search center point corresponding to the current coding unit;
and the dividing module is used for taking the motion vector of the prediction unit of the current coding unit as the target first search center point and the target second search center point, respectively, when the current coding unit is divided and the number of divisions is the target number.
8. A center point search based motion estimation apparatus, the apparatus comprising:
a center point acquisition module, configured to obtain a target first search center point and a target second search center point according to the center point search apparatus of claim 7;
the intra-frame prediction module is used for carrying out intra-frame prediction on the original pixel based on the reference pixel to obtain a second prediction result;
the inter-frame prediction module is used for obtaining a first prediction result according to the target first search center point and the target second search center point;
and the comparison module is used for comparing the first prediction result and the second prediction result to obtain a target prediction result.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 5 or 6 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5 or 6.
CN202211232932.XA 2022-10-10 2022-10-10 Center point searching method, center point searching device, computer equipment and storage medium Active CN115529459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211232932.XA CN115529459B (en) 2022-10-10 2022-10-10 Center point searching method, center point searching device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115529459A CN115529459A (en) 2022-12-27
CN115529459B (en) 2024-02-02

Family

ID=84702521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211232932.XA Active CN115529459B (en) 2022-10-10 2022-10-10 Center point searching method, center point searching device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115529459B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0734177A2 (en) * 1995-03-20 1996-09-25 Daewoo Electronics Co., Ltd Method and apparatus for encoding/decoding a video signal
WO2000033580A1 (en) * 1998-11-30 2000-06-08 Microsoft Corporation Improved motion estimation and block matching pattern
JP2000236552A (en) * 1999-02-15 2000-08-29 Nec Corp Motion vector detector
CN1662067A (en) * 2004-02-27 2005-08-31 松下电器产业株式会社 Motion detection method and dynamic image coding method
EP1679900A2 (en) * 2005-01-07 2006-07-12 NTT DoCoMo, Inc. Apparatus and method for multiresolution encoding and decoding
CN101022551A (en) * 2007-03-15 2007-08-22 上海交通大学 Motion compensating module pixel prefetching device in AVS video hardware decoder
CN101087413A (en) * 2006-06-07 2007-12-12 中兴通讯股份有限公司 Division method of motive object in video sequence
JP2008301270A (en) * 2007-05-31 2008-12-11 Canon Inc Moving image encoding device and moving image encoding method
CN101600112A (en) * 2009-07-09 2009-12-09 杭州士兰微电子股份有限公司 Sub-pixel motion estimation device and method
WO2010041624A1 (en) * 2008-10-09 2010-04-15 株式会社エヌ・ティ・ティ・ドコモ Moving image encoding device, moving image decoding device, moving image encoding method, moving image decoding method, moving image encoding program, moving image decoding program, moving image processing system and moving image processing method
CN101815218A (en) * 2010-04-02 2010-08-25 北京工业大学 Method for coding quick movement estimation video based on macro block characteristics
WO2016008284A1 (en) * 2014-07-18 2016-01-21 清华大学 Intra-frame pixel prediction method, encoding method and decoding method, and device thereof
CN106331703A (en) * 2015-07-03 2017-01-11 华为技术有限公司 Video encoding and decoding method, video encoding and decoding device
CN107872674A (en) * 2017-11-23 2018-04-03 上海交通大学 A layered motion estimation method and device for ultra-high-definition video applications
GB201810794D0 (en) * 2018-06-29 2018-08-15 Imagination Tech Ltd Guaranteed data compression
CN108495138A (en) * 2018-03-28 2018-09-04 天津大学 A kind of integer pixel motion estimation method based on GPU
CN109660800A (en) * 2017-10-12 2019-04-19 北京金山云网络技术有限公司 Method for estimating, device, electronic equipment and computer readable storage medium
CN110365988A (en) * 2018-04-11 2019-10-22 福州瑞芯微电子股份有限公司 A kind of H.265 coding method and device
CN111479115A (en) * 2020-04-14 2020-07-31 腾讯科技(深圳)有限公司 Video image processing method and device and computer readable storage medium
CN112514392A (en) * 2020-02-18 2021-03-16 深圳市大疆创新科技有限公司 Method and apparatus for video encoding
WO2021056225A1 (en) * 2019-09-24 2021-04-01 Oppo广东移动通信有限公司 Inter-frame prediction method and apparatus, device and storage medium
CN114565501A (en) * 2022-02-21 2022-05-31 格兰菲智能科技有限公司 Data loading method and device for convolution operation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4697275B2 (en) * 2008-07-30 2011-06-08 ソニー株式会社 Motion vector detection apparatus, motion vector detection method, and program
US9106922B2 (en) * 2012-12-19 2015-08-11 Vanguard Software Solutions, Inc. Motion estimation engine for video encoding
WO2017201141A1 (en) * 2016-05-17 2017-11-23 Arris Enterprises Llc Template matching for jvet intra prediction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
AHG4: Agenda and report of the AHG meeting on the 360 Video Verification Tests on 2020-09-04; Mathias Wien et al.; Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 20th Meeting: by teleconference, 7–16 October 2020; Full text *
JVET AHG report: Draft text and test model algorithm description editing (AHG2); Benjamin Bross; Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Gothenburg, SE, 3–12 July 2019; Full text *

Similar Documents

Publication Publication Date Title
TWI750475B (en) Video processing method, device and recording medium based on interleaving prediction
US7580456B2 (en) Prediction-based directional fractional pixel motion estimation for video coding
US20200244986A1 (en) Picture prediction method and related apparatus
US12301798B2 (en) Method, codec device for intra frame and inter frame joint prediction
US9451266B2 (en) Optimal intra prediction in block-based video coding to calculate minimal activity direction based on texture gradient distribution
US9106922B2 (en) Motion estimation engine for video encoding
US8634471B2 (en) Moving image encoding apparatus, control method thereof and computer-readable storage medium
US8724702B1 (en) Methods and systems for motion estimation used in video coding
US20160080769A1 (en) Encoding system using motion estimation and encoding method using motion estimation
KR100994773B1 (en) Method and apparatus for generating motion vector in hierarchical motion estimation
CN115118977B (en) Intra-frame predictive coding method, system and medium for 360-degree video
US20220292723A1 (en) Attribute information prediction method, encoder, decoder and storage medium
US20160165258A1 (en) Light-weight video coding system and decoder for light-weight video coding system
CN116074537A (en) Encoding method, decoding method, electronic device, and computer-readable storage medium
CN114792290A (en) Image/video processing
CN115529459B (en) Center point searching method, center point searching device, computer equipment and storage medium
CN108924551A (en) The prediction technique and relevant device of video image coding pattern
CN113347417A (en) Method, device, equipment and storage medium for improving rate distortion optimization calculation efficiency
CN104159123B (en) HEVC motion estimation method applied to hardware realization
CN113365081B (en) Method and device for optimizing motion estimation in video coding
CN107194961B (en) Method for Determining Multiple Reference Images in Crowd Image Coding
CN113747166B (en) Encoding and decoding method, device and equipment
CN110035285B (en) Depth Prediction Method Based on Motion Vector Sensitivity
CN114040209A (en) Motion estimation method, device, electronic device and storage medium
CN116156174B (en) Data encoding processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 200135, 11th Floor, Building 3, No. 889 Bibo Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: Granfei Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: 200135 Room 201, No. 2557, Jinke Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee before: Gryfield Intelligent Technology Co.,Ltd.

Country or region before: China