
CN102377992A - Method and device for obtaining predicted value of motion vector - Google Patents


Info

Publication number
CN102377992A
CN102377992A CN2010102472757A CN201010247275A
Authority
CN
China
Prior art keywords
motion vector
current block
predicted value
position coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010102472757A
Other languages
Chinese (zh)
Other versions
CN102377992B (en)
Inventor
林四新
元辉
常义林
霍俊彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201010247275.7A priority Critical patent/CN102377992B/en
Publication of CN102377992A publication Critical patent/CN102377992A/en
Application granted granted Critical
Publication of CN102377992B publication Critical patent/CN102377992B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a method and device for obtaining a predicted value of a motion vector. The method provided by the embodiment of the invention comprises the following steps: obtaining the position coordinates and motion vectors of more than one encoded/decoded block neighboring a current block; obtaining relation parameters of the motion vector and the position coordinates according to a preset correspondence between the motion vector and the position coordinates, and the position coordinates and motion vectors of the more than one encoded/decoded block; and obtaining a predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinates, the position coordinates of the current block, and the relation parameters. The embodiments disclosed by the invention can be applied to video image encoding and decoding processes such as H.264/AVC.

Description

Method and device for obtaining predicted value of motion vector
Technical Field
The present invention relates to video image coding technologies, and in particular, to a method and an apparatus for obtaining a prediction value of a motion vector.
Background
At present, video images are mainly encoded by using an inter-frame prediction technique of motion compensation. When the encoding end encodes a current block in a current frame, firstly, a prediction block corresponding to the current block is obtained from an encoded video image frame, then, a residual error value between the prediction block and the current block is calculated, the residual error value is encoded, and the encoded residual error value and a motion vector of the current block are written into a code stream and are transmitted to the decoding end; and the decoding end decodes the residual value and the motion vector from the received code stream, acquires a prediction block corresponding to the current block according to the motion vector, and decodes and reconstructs the current block according to the prediction block and the residual value obtained by decoding.
In order to improve the encoding quality and reduce the bit number occupied by encoding the motion vector, the encoding end can encode the motion vector difference of the current block (the difference between the motion vector of the current block and the predicted value of the motion vector of the current block), the decoding end decodes the motion vector difference from the received code stream, and obtains the motion vector of the current block according to the motion vector difference and the predicted value of the motion vector.
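The difference coding described above can be sketched as follows (a minimal sketch; the function name is illustrative and not from the patent, and motion vectors are assumed to be integer (x, y) pairs):

```python
def motion_vector_difference(mv, pmv):
    # The encoding end writes MVD = MV - PMV into the code stream instead
    # of MV itself; the decoding end recovers MV = PMV + MVD.
    return (mv[0] - pmv[0], mv[1] - pmv[1])
```

For a motion vector (9, 5) and a predictor (8, 7), only the small residual (1, -2) needs to be coded, which is the bit saving this scheme targets.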
In the prior art, a method for the encoding end/decoding end to obtain the prediction value of the motion vector of the current block may include: the encoding end/decoding end obtains the motion vectors of more than one encoded/decoded block adjacent to the current block; and the encoding end/decoding end takes the median of the motion vectors of the more than one encoded/decoded block adjacent to the current block as the predicted value of the motion vector of the current block.
However, if the object in the scene is not rigid (the object has deformation, for example, when the object moves toward or away from the lens), the method for obtaining the motion vector predictor of the current block provided by the prior art cannot accurately obtain the motion vector predictor, thereby causing a problem of low coding efficiency.
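The prior-art median predictor can be sketched as follows (a minimal illustration assuming an odd number of neighbouring motion vectors, e.g. three as in H.264/AVC; the function name is not from the patent):

```python
def median_predictor(neighbor_mvs):
    # Component-wise median of the neighbouring encoded/decoded blocks'
    # motion vectors -- the prior-art predictor whose limitation for
    # non-rigid (e.g. zooming) motion is described above.
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return (xs[mid], ys[mid])
```

When an object moves toward or away from the lens, neighbouring motion vectors diverge and this median no longer tracks the current block's true motion, which motivates the position-dependent model below.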
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for obtaining a prediction value of a motion vector, which can improve coding efficiency for a non-rigid object.
In one aspect, a method for obtaining a prediction value of a motion vector is provided, including: acquiring the position coordinates and motion vectors of more than one encoded/decoded block adjacent to a current block; acquiring relation parameters of the motion vector and the position coordinate according to a preset correspondence between the motion vector and the position coordinate, and the position coordinates and motion vectors of the more than one encoded/decoded block; and acquiring the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate, the position coordinate of the current block, and the relation parameters.
In another aspect, an apparatus for obtaining a prediction value of a motion vector is provided, including:
a first obtaining module for obtaining position coordinates and motion vectors of more than one encoded/decoded block adjacent to a current block;
a second obtaining module, configured to obtain a relationship parameter between a motion vector and a position coordinate according to a preset correspondence between the motion vector and the position coordinate, and the position coordinate and the motion vector of the one or more encoded/decoded blocks obtained by the first obtaining module;
and a third obtaining module, configured to obtain the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate, the position coordinate of the current block, and the relation parameters acquired by the second obtaining module.
The method and the device for obtaining the predicted value of the motion vector provided by the embodiment of the invention obtain the predicted value of the motion vector of the current block according to the preset corresponding relation between the motion vector and the position coordinate, and the predicted value of the motion vector is more accurate, thereby being beneficial to coding a non-rigid object image and improving the coding efficiency and the coding quality.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a method for obtaining a prediction value of a motion vector according to an embodiment of the present invention;
FIG. 2 is a first diagram illustrating a current block and an encoded/decoded block according to an embodiment of the present invention;
FIG. 3 is a second diagram of a current block and an encoded/decoded block according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for obtaining a prediction value of a motion vector according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for obtaining a predicted value of a motion vector according to an embodiment of the present invention;
fig. 6 is a first schematic structural diagram of an apparatus for obtaining a prediction value of a motion vector according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a second apparatus for obtaining a prediction value of a motion vector according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an encoding module in the apparatus for obtaining a prediction value of a motion vector shown in fig. 7 according to the embodiment of the present invention;
fig. 9 is a schematic structural diagram of a decoding module in the apparatus for obtaining a prediction value of a motion vector shown in fig. 7 according to the embodiment of the present invention;
fig. 10 is a first schematic diagram illustrating results of a comparison experiment between a method and an apparatus for obtaining a prediction value of a motion vector according to an embodiment of the present invention and the prior art;
fig. 11 is a second schematic diagram illustrating results of a comparison experiment between a method and an apparatus for obtaining a predicted value of a motion vector according to an embodiment of the present invention and the prior art;
fig. 12 is a third schematic diagram illustrating a result of a comparison experiment between a method and an apparatus for obtaining a prediction value of a motion vector according to an embodiment of the present invention and the prior art.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problem of low coding efficiency of a non-rigid object in the prior art, embodiments of the present invention provide a method and an apparatus for obtaining a predicted value of a motion vector.
As shown in fig. 1, the method for obtaining a prediction value of a motion vector according to an embodiment of the present invention includes:
step 101, obtaining the position coordinates and motion vectors of more than one encoded/decoded block adjacent to the current block.
In this embodiment, step 101 may select at least one encoded/decoded block from the encoded/decoded blocks adjacent to the current block and obtain its position coordinates and motion vector; or it may select one or more encoded/decoded blocks according to a preset encoded/decoded block selection rule and obtain their position coordinates and motion vectors. For example, the selection rule may be preset to select the top-most encoded/decoded block on the left side of the current block and the left-most encoded/decoded block on the top side. As shown in fig. 2, if the current block is E, step 101 may select A and B according to this rule and obtain the position coordinates and motion vectors of A and B. Of course, in actual use, the encoded/decoded blocks may be selected in step 101 in other manners, which are not detailed here.
Further, if in step 101 the encoding end arbitrarily selects more than one encoded block and obtains their position coordinates and motion vectors, the encoding end needs to send the related information of the selected encoded blocks to the decoding end, so that the decoding end can select the corresponding decoded blocks according to this information and obtain their position coordinates and motion vectors. This ensures consistency between the blocks selected by the encoding end and the decoding end, making the error between the results of their subsequent processing smaller and the accuracy higher.
The present embodiment does not limit the sizes of the current block and the encoded/decoded blocks. The size of the encoded/decoded blocks selected in step 101 may be the same as the size of the current block, for example: as shown in fig. 2, the current block E and the adjacent encoded/decoded blocks A, B, C are the same size, all 4 × 4. It may also differ from the size of the current block, for example: as shown in FIG. 3, the current block is E (16 × 16), the encoded/decoded blocks neighboring E are A (8 × 4), B (4 × 8), and C (16 × 8), and A, B, and C all differ from E in size.
In this embodiment, the size of the minimum block in the current frame where the current block is located may be used as the unit length of the position coordinate system, and step 101 obtains the position coordinates of the more than one encoded/decoded block according to this unit length. For example, if the size of the minimum block in the current frame is 4 × 4, then 4 × 4 may be taken as the unit length of the position coordinate system: the position coordinate of the first 4 × 4 block in the coordinate system is (0, 0), the coordinate of the second 4 × 4 block is (1, 0), and so on.
Of course, the above is only an example, and in an actual using process, the unit length of the position coordinate system may also be set in other manners, and details of each case are not described here.
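The coordinate convention above can be sketched as follows, assuming the 4 × 4 minimum block size of the example (the helper name is illustrative, not from the patent):

```python
def block_position(pixel_x, pixel_y, min_block_size=4):
    # Position coordinates use the minimum block size in the current frame
    # as the unit length: the block at pixel (0, 0) maps to (0, 0), the
    # next block in the x direction to (1, 0), and so on.
    return (pixel_x // min_block_size, pixel_y // min_block_size)
```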
Step 102, obtaining the relation parameter between the motion vector and the position coordinate according to the preset corresponding relation between the motion vector and the position coordinate, and the position coordinate and the motion vector of the more than one coded/decoded block obtained in step 101.
In the present embodiment, the correspondence relationship between the motion vector and the position coordinate may be as shown in formula (1):
MV_x = \sum_i a_i x^i + \sum_j b_j y^j + c        (1)

MV_y = \sum_i d_i x^i + \sum_j e_j y^j + f

wherein x and y are the position coordinates; MV_x is the motion vector component in the x coordinate direction and MV_y is the motion vector component in the y coordinate direction; a_i, b_j, c, d_i, e_j and f are the relation parameters of the motion vector and the position coordinates; i and j are any natural numbers.
In order to reduce the amount of calculation and save encoding and decoding resources, the correspondence between the motion vector and the position coordinate may also be as shown in formula (2):
MV_x = \sum_i a_i x^i + c        (2)

MV_y = \sum_j e_j y^j + f

wherein x and y are the position coordinates; MV_x is the motion vector component in the x coordinate direction and MV_y is the motion vector component in the y coordinate direction; a_i, c, e_j and f are the relation parameters of the motion vector and the position coordinates; i and j are any natural numbers.
The present embodiment will be described by taking as an example the case in which the correspondence between the motion vector and the position coordinate is as shown in formula (1), with i = j = 1.
In the present embodiment, if N encoded/decoded blocks adjacent to the current block are selected in step 101, the position coordinates of the N encoded/decoded blocks are denoted (x_n, y_n) and their motion vectors (MV_{x,n}, MV_{y,n}), where 1 ≤ n ≤ N; step 102 may include:
1. the relation parameter of the motion vector component and the position coordinate in the x direction is acquired.
First, the position coordinates x_n and the x-direction motion vector components MV_{x,n} of the N encoded/decoded blocks are substituted into formula (1), yielding the following system of equations (3) for the x direction:
MV_{x,1} = x_1 \cdot a_1 + y_1 \cdot b_1 + c
MV_{x,2} = x_2 \cdot a_1 + y_2 \cdot b_1 + c
\ldots
MV_{x,N-1} = x_{N-1} \cdot a_1 + y_{N-1} \cdot b_1 + c
MV_{x,N} = x_N \cdot a_1 + y_N \cdot b_1 + c        (3)
Then, formula (3) is transformed to obtain formula (4), and the relation parameters between the motion vector component in the x direction and the position coordinates are obtained according to the following formula (4).
\begin{pmatrix} a_1 \\ b_1 \\ c \end{pmatrix} = (A^T A)^{-1} A^T B        (4)

wherein

A = \begin{pmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ \vdots & \vdots & \vdots \\ x_{N-1} & y_{N-1} & 1 \\ x_N & y_N & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} MV_{x,1} \\ MV_{x,2} \\ \vdots \\ MV_{x,N-1} \\ MV_{x,N} \end{pmatrix}.
2. the relation parameters of the motion vector component in the y direction and the position coordinates are acquired.
See the step of obtaining the relation parameter between the motion vector component and the position coordinate in the x direction, and will not be described herein again.
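The least-squares step of formulas (3) and (4) can be sketched as follows (a minimal sketch using NumPy; the function name is illustrative, and the same call fits the y-direction parameters when given the MV_y components):

```python
import numpy as np

def fit_relation_parameters(coords, mv_components):
    # Solve formula (4): (a1, b1, c)^T = (A^T A)^{-1} A^T B, with one row
    # (x_n, y_n, 1) of A per neighbouring encoded/decoded block and the
    # corresponding motion vector components stacked in B.
    coords = np.asarray(coords, dtype=float)        # N x 2 array of (x_n, y_n)
    B = np.asarray(mv_components, dtype=float)      # N motion vector components
    A = np.column_stack([coords, np.ones(len(coords))])
    # lstsq solves the same normal equations more stably than forming
    # (A^T A)^{-1} explicitly
    params, *_ = np.linalg.lstsq(A, B, rcond=None)
    return params                                   # (a1, b1, c)
```

With at least three non-collinear neighbouring blocks the system is determined; with more blocks the fit averages out noise in the neighbouring motion vectors.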
Step 103, acquiring the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate, the position coordinate of the current block, and the relation parameters between the motion vector and the position coordinate acquired in step 102.
In the present embodiment, the prediction value PMV of the motion vector of the current block can be obtained by the following formula (5):
PMV_x = a_1 x' + b_1 y' + c        (5)

PMV_y = d_1 x' + e_1 y' + f

wherein x' and y' are the position coordinates of the current block, and a_1, b_1, c, d_1, e_1 and f are the relation parameters of the motion vector and the position coordinates obtained in step 102.
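Formula (5) then evaluates the fitted model at the current block's own coordinates; a minimal sketch with illustrative names:

```python
def predict_motion_vector(x, y, params_x, params_y):
    # Formula (5): PMV_x = a1*x' + b1*y' + c, PMV_y = d1*x' + e1*y' + f,
    # where (x, y) are the current block's position coordinates and the
    # parameter triples come from the step-102 fit.
    a1, b1, c = params_x
    d1, e1, f = params_y
    return (a1 * x + b1 * y + c, d1 * x + e1 * y + f)
```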
The method for obtaining the predicted value of the motion vector provided by the embodiment of the invention can be applied to an encoding end and a decoding end.
Optionally, when the method for obtaining a predicted value of a motion vector provided in the embodiment of the present invention is applied to an encoding end, as shown in fig. 4, after step 103, the method may further include:
step 104, encoding the motion vector of the current block by using the prediction value of the motion vector obtained in step 103, which may include:
first, the encoding side determines the predicted value of the motion vector acquired in step 103 as the target predicted value of the motion vector according to the rate-distortion criterion.
In the present embodiment, the optimization formula of the rate-distortion criterion is shown in formula (6):
J(PMV_i) = D(PMV_i) + λ·R(PMV_i)        (6)

wherein PMV_i represents a predictor of the motion vector of the current block obtained by the i-th method (the candidate methods include the method of obtaining a predictor of a motion vector provided by the embodiment of the present invention), J(PMV_i) represents the rate-distortion cost, D(PMV_i) represents the distortion, R(PMV_i) represents the number of bits occupied by the coding information (including the coding mode, motion vector, reference frame, difference signal, etc.), and λ represents the Lagrange multiplier.
If J(PMV_i) calculated by formula (6) reaches its minimum for the prediction value of the motion vector of the current block acquired in step 103, that prediction value is determined as the target prediction value of the motion vector; otherwise, the prediction value acquired in step 103 is not the target prediction value of the motion vector.
Then, the encoding end obtains a motion vector difference value of the current block and encodes the motion vector difference value, wherein the motion vector difference value is a difference value between the motion vector of the current block and a target predicted value of the motion vector.
It should be noted that, in some coding cases, the target prediction value of the motion vector of the current block may be used as the motion vector of the current block, and the motion vector difference value is not coded, considering the coding efficiency comprehensively.
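The rate-distortion selection of formula (6) can be sketched as follows (a minimal sketch with illustrative names; the distortion and rate values would come from actual trial encodings):

```python
def rate_distortion_cost(distortion, rate_bits, lam):
    # Formula (6): J(PMV_i) = D(PMV_i) + lambda * R(PMV_i)
    return distortion + lam * rate_bits

def select_target_predictor(candidates, lam):
    # candidates: dict mapping a prediction method name to its
    # (distortion, rate_bits) pair; the method whose predictor yields the
    # minimum rate-distortion cost J is chosen as the target predictor.
    return min(candidates,
               key=lambda name: rate_distortion_cost(*candidates[name], lam))
```

A larger Lagrange multiplier penalizes rate more heavily, so the selected predictor can change with the target bitrate.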
Optionally, in order to enable the decoding end to determine that the prediction value of the motion vector of the current block acquired in step 103 is the target prediction value of the motion vector of the current block, the embodiment may further include a step in which the encoding end sends, to the decoding end, information indicating that the prediction value of the motion vector of the current block is acquired according to a preset correspondence between the motion vector and the position coordinate.
In this embodiment, the information indicating that the prediction value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate may be frame-level identification information, where the frame-level identification information is used to indicate whether the current frame where the current block is located obtains the prediction value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate, as shown in table 1:
TABLE 1
(Table 1, reproduced as an image in the original publication, defines the frame-level identification information UseZoomingPMV.)
When UseZoomingPMV is 1, the current frame where the current block is located obtains the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate; when UseZoomingPMV is 0, the current frame obtains the prediction value of the motion vector of the current block according to other methods (for example, a method of calculating the median of the motion vectors of more than one encoded block).
The information indicating that the prediction value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate may also be block-level identification information, where the block-level identification information is used to indicate whether the current block obtains the prediction value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate, as shown in table 2:
TABLE 2
(Table 2, reproduced as an image in the original publication, defines the block-level identification information Zoom_PMV_flag.)
When Zoom_PMV_flag is 1, the prediction value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate; when Zoom_PMV_flag is 0, the current block obtains the prediction value of its motion vector according to other methods (e.g., a method of finding the median of the motion vectors of more than one encoded block).
Of course, the information indicating that the prediction value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate may also be represented in other manners in the actual using process, and details of each case are not repeated here.
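One possible reading of how the two identification levels combine, assumed from the decoding behaviour described in step 105 (both flags checked for 1; the function name is illustrative):

```python
def uses_zooming_pmv(use_zooming_pmv_flag, zoom_pmv_flag):
    # Assumed combination of the frame-level UseZoomingPMV switch and the
    # block-level Zoom_PMV_flag: both must be 1 for the current block to
    # derive its predictor from the position-coordinate correspondence.
    return use_zooming_pmv_flag == 1 and zoom_pmv_flag == 1
```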
Optionally, when the method for obtaining a predicted value of a motion vector provided in the embodiment of the present invention is applied to a decoding end, as shown in fig. 5, after step 103, the method may further include:
step 105, decoding the prediction value of the motion vector obtained in step 103 to obtain the motion vector of the current block, including:
Firstly, the decoding end obtains, from the received code stream, the information indicating that the prediction value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate, and accordingly determines the predicted value of the motion vector obtained in step 103 as the target predicted value of the motion vector.
Specifically, if the information indicating that the prediction value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate is the frame-level identification information shown in table 1 and/or the block-level identification information shown in table 2, and the frame-level identification information and/or the block-level identification information are both 1, the decoding end determines that the prediction value of the motion vector obtained in step 103 is the target prediction value of the motion vector.
Then, the decoding end decodes to obtain the motion vector of the current block according to the target predicted value of the motion vector obtained in step 103 and the motion vector difference value of the current block included in the received code stream, wherein the motion vector difference value is the difference value between the motion vector of the current block and the target predicted value of the motion vector.
It should be noted that, if the code stream does not include the motion vector difference, the target prediction value of the motion vector of the current block is the motion vector of the current block.
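The decoder-side reconstruction of step 105 can be sketched as follows (illustrative names; mvd=None models a code stream that carries no motion vector difference):

```python
def reconstruct_motion_vector(target_pmv, mvd=None):
    # MV = target PMV + MVD; when the code stream contains no motion
    # vector difference, the target prediction value is the MV itself.
    if mvd is None:
        return target_pmv
    return (target_pmv[0] + mvd[0], target_pmv[1] + mvd[1])
```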
The method for obtaining the predicted value of the motion vector provided by the embodiment of the invention obtains the predicted value of the motion vector of the current block according to the preset corresponding relation between the motion vector and the position coordinate, and the predicted value of the motion vector is more accurate, thereby being beneficial to coding a non-rigid object image and improving the coding efficiency and the coding quality.
As shown in fig. 6, an embodiment of the present invention further provides an apparatus for obtaining a prediction value of a motion vector, including:
a first obtaining module 601, configured to obtain position coordinates and motion vectors of more than one encoded/decoded block adjacent to a current block;
a second obtaining module 602, configured to obtain a relationship parameter between a motion vector and a position coordinate according to a preset correspondence between the motion vector and the position coordinate, and the position coordinate and the motion vector of one or more encoded/decoded blocks obtained by the first obtaining module 601;
a third obtaining module 603, configured to obtain a prediction value of the motion vector of the current block according to the preset corresponding relationship between the motion vector and the position coordinate, and the position coordinate of the current block and the relationship parameter between the motion vector and the position coordinate obtained by the second obtaining module 602.
Further, as shown in fig. 7, the method for obtaining a prediction value of a motion vector according to an embodiment of the present invention may further include:
an encoding module 604, configured to encode the motion vector of the current block by using the prediction value of the motion vector acquired by the third acquiring module 603; or,
a decoding module 605, configured to decode and obtain the motion vector of the current block by using the prediction value of the motion vector obtained by the third obtaining module 603.
Further, as shown in fig. 8, the encoding module 604 may include:
a first determining sub-module 6041, configured to determine, according to a rate-distortion criterion, that the prediction value of the motion vector obtained by the third obtaining module 603 is a target prediction value of the motion vector;
an encoding sub-module 6042, configured to obtain a motion vector difference value of the current block, and encode the motion vector difference value, where the motion vector difference value is a difference value between the motion vector of the current block and the target prediction value of the motion vector determined by the first determining sub-module 6041.
Further, as shown in fig. 9, the decoding module 605 may include:
a second determining sub-module 6051, configured to determine, according to indication information included in the received code stream (which indicates that the prediction value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate), that the prediction value of the motion vector obtained by the third obtaining module 603 is the target prediction value of the motion vector;
a decoding sub-module 6052, configured to decode to obtain the motion vector of the current block according to the target prediction value of the motion vector determined by the second determining sub-module 6051 and the motion vector difference value of the current block included in the received code stream, where the motion vector difference value is a difference value between the motion vector of the current block and the target prediction value of the motion vector determined by the second determining sub-module 6051.
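The decoding side mirrors this: the motion vector of the current block is reconstructed by adding the decoded motion vector difference back to the target predictor. A minimal illustrative sketch (function name assumed):

```python
def decode_mv(target_predictor, mvd):
    """Reconstruct the current block's motion vector from the target
    predictor (selected per the indication parsed from the bitstream)
    and the decoded motion vector difference."""
    return (target_predictor[0] + mvd[0], target_predictor[1] + mvd[1])
```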
Further, as shown in fig. 7, the apparatus for obtaining a prediction value of a motion vector according to an embodiment of the present invention may further include:
a sending module 606, configured to send, to the decoding end, information indicating that the prediction value of the motion vector of the current block is obtained according to a preset correspondence between the motion vector and the position coordinate.
It should be noted that, in actual use, the modules into which the apparatus for obtaining a predicted value of a motion vector shown in fig. 6 to fig. 9 is divided may also be implemented by one or more modules or functional modules having similar functions, which is not described herein again.
For the specific implementation of the apparatus for obtaining a predicted value of a motion vector provided in the embodiment of the present invention, reference may be made to the method for obtaining a predicted value of a motion vector provided in the embodiment of the present invention, which is not described herein again.
The apparatus for obtaining the predicted value of the motion vector provided by the embodiment of the present invention obtains the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate. The resulting predicted value is more accurate, which is favorable for coding images of non-rigid objects and improves coding efficiency and coding quality.
In order to enable those skilled in the art to more clearly understand the beneficial effects brought by the method and the apparatus for obtaining a predicted value of a motion vector according to the embodiments of the present invention, the following description is provided by specific experimental data.
The experiment selected 4 representative sequences for testing, with quantization parameters (QP) of 27, 32, 37, and 42. The test conditions were: an IPPP (one intra frame followed by inter-predicted P frames) coding structure, 1 reference frame, context-based adaptive binary arithmetic coding (CABAC) entropy coding, 1/4-pixel motion estimation, a 32 × 32 motion search range, and a 4 × 4 transform.
The video coding gain is expressed by BD-Bitrate and BD-PSNR. A negative BD-Bitrate value indicates that the coding rate is reduced by adopting the technology of the present invention; a positive BD-PSNR value indicates an increase in PSNR after encoding using the present invention. The results of the experiment are shown in table 3 and fig. 10 to 12.
TABLE 3
(Table 3, listing the per-sequence BD-Bitrate and BD-PSNR results, is rendered as an image in the original publication.)
According to the experimental results, compared with the prior art, for sequences of non-rigid scenes the technical scheme provided by the embodiment of the present invention reduces the code rate by 3.97% on average at the same recovered video quality (PSNR); at the same code rate, it improves the recovered video quality by 0.2005 dB on average.
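For reference, the BD-PSNR figure quoted above is a Bjøntegaard delta: PSNR is fitted as a cubic polynomial in the logarithm of the bit rate for each codec, and the average vertical gap between the two fits over the overlapping rate range is reported. The sketch below is a simplified version of that procedure; the exact fitting and integration details of the standard tool may differ:

```python
import numpy as np

def bd_psnr(rates_a, psnrs_a, rates_b, psnrs_b):
    """Bjontegaard delta PSNR: fit PSNR as a cubic in log10(rate)
    for each codec, then average the vertical gap between the two
    fits over the overlapping rate range (simplified sketch)."""
    la, lb = np.log10(rates_a), np.log10(rates_b)
    pa = np.polyfit(la, psnrs_a, 3)
    pb = np.polyfit(lb, psnrs_b, 3)
    lo = max(la.min(), lb.min())   # overlapping log-rate interval
    hi = min(la.max(), lb.max())
    ia = np.polyval(np.polyint(pa), [lo, hi])
    ib = np.polyval(np.polyint(pb), [lo, hi])
    # Difference of the average PSNR of each fitted curve.
    return ((ib[1] - ib[0]) - (ia[1] - ia[0])) / (hi - lo)
```

BD-Bitrate is computed analogously with the roles of the axes swapped (log-rate fitted as a polynomial in PSNR).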
The method and the apparatus for acquiring the predicted value of the motion vector provided by the embodiments of the present invention can be extended to the encoding and decoding processes of video images such as H.264/AVC.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A method for obtaining a prediction value of a motion vector, comprising:
acquiring position coordinates and motion vectors of more than one encoded/decoded block adjacent to a current block;
obtaining relationship parameters between the motion vector and the position coordinate according to a preset correspondence between the motion vector and the position coordinate, and the position coordinates and motion vectors of the more than one encoded/decoded block; and
obtaining the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate, the position coordinate of the current block, and the relationship parameters between the motion vector and the position coordinate.
2. The method according to claim 1, wherein the preset correspondence between the motion vector and the position coordinate comprises:
$$MV_x = \sum_i a_i x^i + \sum_j b_j y^j + c$$
$$MV_y = \sum_i d_i x^i + \sum_j e_j y^j + f$$
or,
$$MV_x = \sum_i a_i x^i + c$$
$$MV_y = \sum_j e_j y^j + f$$
wherein x and y are position coordinates; $MV_x$ is the motion vector component in the x coordinate direction, and $MV_y$ is the motion vector component in the y coordinate direction; $a_i$, $b_j$, $c$, $d_i$, $e_j$, and $f$ are relationship parameters between the motion vector and the position coordinate; and i, j are any natural numbers.
3. The method of claim 1, further comprising:
encoding the motion vector of the current block by using the predicted value of the motion vector; or,
decoding, by using the predicted value of the motion vector, to obtain the motion vector of the current block.
4. The method of claim 3, wherein said encoding the motion vector of the current block using the prediction value of the motion vector comprises:
determining the predicted value of the motion vector as a target predicted value of the motion vector according to a rate distortion criterion;
and acquiring a motion vector difference value of the current block, and encoding the motion vector difference value, wherein the motion vector difference value is the difference value between the motion vector of the current block and a target predicted value of the motion vector.
5. The method of claim 3, wherein the decoding using the predictor of the motion vector to obtain the motion vector of the current block comprises:
determining the predicted value of the motion vector to be the target predicted value of the motion vector according to indication information contained in the received code stream, the indication information indicating that the predicted value of the motion vector of the current block is obtained according to the correspondence between the motion vector and the position coordinate;
and decoding to obtain the motion vector of the current block according to the target predicted value of the motion vector and the motion vector difference value of the current block contained in the received code stream, wherein the motion vector difference value is the difference value between the motion vector of the current block and the target predicted value of the motion vector.
6. The method of claim 1, further comprising:
and sending indication information to a decoding end, the indication information indicating that the predicted value of the motion vector of the current block is obtained according to the correspondence between the motion vector and the position coordinate.
7. The method according to claim 5 or 6, wherein the indication information indicating that the predicted value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate comprises:
frame-level identification information, used for indicating whether the current frame in which the current block is located obtains the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate; and/or,
and the identification information of the block level is used for indicating whether the current block obtains the predicted value of the motion vector of the current block according to the preset corresponding relation between the motion vector and the position coordinate.
8. An apparatus for obtaining a prediction value of a motion vector, comprising:
a first obtaining module for obtaining position coordinates and motion vectors of more than one encoded/decoded block adjacent to a current block;
a second obtaining module, configured to obtain relationship parameters between the motion vector and the position coordinate according to a preset correspondence between the motion vector and the position coordinate, and the position coordinates and motion vectors of the one or more encoded/decoded blocks obtained by the first obtaining module; and
a third obtaining module, configured to obtain the predicted value of the motion vector of the current block according to the preset correspondence between the motion vector and the position coordinate, the position coordinate of the current block, and the relationship parameters between the motion vector and the position coordinate obtained by the second obtaining module.
9. The apparatus of claim 8, further comprising:
an encoding module, configured to encode the motion vector of the current block by using the predicted value of the motion vector obtained by the third obtaining module; or,
a decoding module, configured to decode, by using the predicted value of the motion vector obtained by the third obtaining module, to obtain the motion vector of the current block.
10. The apparatus of claim 9, wherein the encoding module comprises:
the first determining submodule is used for determining the predicted value of the motion vector acquired by the third acquiring module as a target predicted value of the motion vector according to a rate-distortion criterion;
and the encoding submodule is used for acquiring a motion vector difference value of the current block and encoding the motion vector difference value, wherein the motion vector difference value is the difference value between the motion vector of the current block and the target predicted value of the motion vector determined by the first determining submodule.
11. The apparatus of claim 9, wherein the decoding module comprises:
a second determining submodule, configured to determine, according to indication information contained in the received code stream, which indicates that the predicted value of the motion vector of the current block is obtained according to the preset correspondence between the motion vector and the position coordinate, that the predicted value of the motion vector obtained by the third obtaining module is the target predicted value of the motion vector;
and a decoding submodule, configured to decode to obtain the motion vector of the current block according to the target prediction value of the motion vector determined by the second determining submodule and a motion vector difference value of the current block included in the received code stream, where the motion vector difference value is a difference value between the motion vector of the current block and the target prediction value of the motion vector determined by the second determining submodule.
12. The apparatus of claim 8, further comprising:
and the sending module is used for sending information indicating that the predicted value of the motion vector of the current block is obtained according to the preset corresponding relation between the motion vector and the position coordinate to the decoding end.
CN201010247275.7A 2010-08-06 2010-08-06 Method and device for obtaining predicted value of motion vector Active CN102377992B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010247275.7A CN102377992B (en) 2010-08-06 2010-08-06 Method and device for obtaining predicted value of motion vector


Publications (2)

Publication Number Publication Date
CN102377992A true CN102377992A (en) 2012-03-14
CN102377992B CN102377992B (en) 2014-06-04

Family

ID=45795880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010247275.7A Active CN102377992B (en) 2010-08-06 2010-08-06 Method and device for obtaining predicted value of motion vector

Country Status (1)

Country Link
CN (1) CN102377992B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1608380A (en) * 2001-12-19 2005-04-20 汤姆森许可贸易公司 Method for estimating the dominant motion in a sequence of images
CN1917642A (en) * 2005-08-19 2007-02-21 汤姆森许可贸易公司 Method and apparatus for iteratively calculating a set of global motion parameters
CN101023676A (en) * 2004-07-20 2007-08-22 高通股份有限公司 Method and apparatus for motion vector prediction in temporal video compression
CN101573983A (en) * 2007-01-03 2009-11-04 三星电子株式会社 Method and apparatus for predicting motion vector using global motion vector, encoder, decoder, and decoding method


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883161B (en) * 2012-09-19 2015-09-30 华为技术有限公司 The processing method of Video coding and decoding and device
CN102883161A (en) * 2012-09-19 2013-01-16 华为技术有限公司 Video encoding and decoding processing method and device
CN107770524B (en) * 2013-06-15 2020-11-06 浙江大学 A kind of video coding and decoding processing method and device
CN107770524A (en) * 2013-06-15 2018-03-06 浙江大学 A kind of video coding-decoding processing method and device
WO2017054630A1 (en) * 2015-09-29 2017-04-06 华为技术有限公司 Image prediction method and device
US12058365B2 (en) 2015-09-29 2024-08-06 Huawei Technologies Co., Ltd. Image prediction method and apparatus
US11323736B2 (en) 2015-09-29 2022-05-03 Huawei Technologies Co., Ltd. Image prediction method and apparatus
WO2017148345A1 (en) * 2016-03-01 2017-09-08 Mediatek Inc. Method and apparatus of video coding with affine motion compensation
CN109905714A (en) * 2017-12-08 2019-06-18 华为技术有限公司 Inter-frame prediction method, device and terminal device
US11290724B2 (en) 2017-12-08 2022-03-29 Huawei Technologies Co., Ltd. Inter prediction method and apparatus, and terminal device
WO2019109955A1 (en) * 2017-12-08 2019-06-13 华为技术有限公司 Interframe prediction method and apparatus, and terminal device
US11259042B2 (en) 2018-09-21 2022-02-22 Huawei Technologies Co., Ltd. Motion vector determining method and apparatus
WO2020057664A1 (en) * 2018-09-21 2020-03-26 华为技术有限公司 Method and apparatus for determining motion vector

Also Published As

Publication number Publication date
CN102377992B (en) 2014-06-04

Similar Documents

Publication Publication Date Title
US11838509B2 (en) Video coding method and apparatus
CN102047665B (en) Dynamic image encoding method and dynamic image decoding method
US10097826B2 (en) Method and device for generating a predicted value of an image using interpolation and motion vectors
KR101781874B1 (en) New planar prediction mode
CN107046645B (en) Image coding and decoding method and device
CN110637462B (en) Video coding and decoding method and device
WO2016050051A1 (en) Image prediction method and relevant device
US20140112392A1 (en) Apparatus and method for coding and decoding image
TWI621351B (en) Image prediction decoding device, image prediction decoding method and image prediction decoding program
JP2009111691A (en) Image-encoding device and encoding method, and image-decoding device and decoding method
KR20170026536A (en) Method for encoding a digital image, and associated decoding method, devices and computer programmes
CN112866720B (en) Motion vector prediction method and device and coder-decoder
KR20080064355A (en) Prediction encoding method and apparatus for video, decoding method and apparatus thereof
CN103329528A (en) Video encoding and decoding with improved error resilience
US20110243227A1 (en) Moving picture decoding method and device, and moving picture encoding method and device
CN102377992A (en) Method and device for obtaining predicted value of motion vector
JP2020523818A (en) Method and apparatus for encoding or decoding video data in FRUC mode with reduced memory access
CN102396232A (en) Image-processing device and method
KR101663235B1 (en) Method and apparatus for video coding and decoding using intra prediction
CN115022639A (en) Encoding and decoding method, device and equipment
CN110662033A (en) Decoding and encoding method and device thereof
CN110662074B (en) Motion vector determination method and device
US20230164310A1 (en) Bitstream decoder
JP2009049969A (en) Device and method of coding moving image and device and method of decoding moving image
JP5788952B2 (en) Video decoding method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant