CN1960487B - Method for estimating motion vector based on distance weighted search order - Google Patents
Method for estimating motion vector based on distance weighted search order
- Publication number
- CN1960487B, CN200510115481, CN200510115481A
- Authority
- CN
- China
- Prior art keywords
- distance
- search
- weighted
- motion vector
- estimating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Technical Field
The present invention relates to a method of motion estimation for video coding, and more particularly to a method of performing motion estimation in a distance-weighted order.
Background Art
As shown in Fig. 1, the data stream of an MPEG-coded video is currently structured as one or more sequences, and each sequence contains a number of groups of pictures (GOP). A group of pictures is a group made up of many frames, also called pictures, and according to their properties the frames can be divided into three types: intra-coded frames (I frames), predictive-coded frames (P frames), and bidirectionally coded frames (B frames).
Each of these frame types can be coded. In general an intra-coded (I) frame serves as the entry point for compressing the video; through motion vector estimation, a predictive-coded (P) frame is predicted using an I frame or another P frame as its reference frame, while a bidirectionally coded (B) frame is estimated from the motion vectors produced by using both an I frame and a P frame, or two P frames, as reference frames. When the frames are then played back in sequence, what appears before the user is a moving MPEG video image.
In the MPEG compression standards, besides estimating motion vectors there is also so-called motion compensation. The most direct way to perform motion compensation would be to record the luminance and chrominance of every pixel and track the changes of both with a full search, but doing so consumes a great deal of resources. The current practice is therefore to subdivide each frame into several slices, each slice into several macroblocks (MB), where a macroblock consists of four luminance blocks and several chrominance blocks, and finally to define each block as the smallest coding unit of the MPEG data structure.
By combining motion vector estimation with motion compensation, the current frame can be obtained by taking each of its blocks, finding the best-matching block in a previous frame, and using the computed motion vector and residual data to adjust that block of the previous frame into the current frame: the motion vector moves the block to the proper position, and the residual data supplies the changes in luminance, chrominance, and saturation. Because most of the repeated data need not be recorded, the amount of stored data is reduced and data compression is achieved.
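To make the matching step concrete, a minimal sketch of a block-matching cost follows. The patent text only speaks of finding the "most similar" block, so the sum of absolute differences (SAD) used here, the 16×16 block size, and the frame layout as lists of pixel rows are illustrative assumptions rather than the patent's definitions.

```python
def sad(cur, ref, bx, by, dx, dy, block=16):
    """Sum of absolute differences between the block of the current frame `cur`
    whose top-left corner is (bx, by) and the block of the reference frame `ref`
    displaced by the candidate motion vector (dx, dy).  Frames are lists of rows."""
    total = 0
    for y in range(block):
        for x in range(block):
            total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
    return total
```

A smaller SAD means a more similar block; the search methods discussed below differ only in which candidate offsets (dx, dy) they evaluate and in what order.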
As shown in Fig. 2, taking a current three-step search (TSS) decision method as an example of how the best-matching block is found in the previous frame: first the center of a search area 21 to be searched is defined as the starting point 210, and the eight points around it serve as checkpoints. Assuming the size of search area 21 is 4×4 and the checkpoint 20 at the lower right of search area 21 is found to be the most similar, checkpoint 20 becomes the new center and the search area 22 is shrunk to a size of 2×2; once the most similar checkpoint 220 at the upper right of search area 22 is found, the motion vector value can be determined accordingly.
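A minimal sketch of the three-step pattern just described, assuming the conventional starting step of 4 (halved each round) and a matching-error callback such as the SAD above; these parameters are illustrative and not taken from the figure.

```python
def three_step_search(cost, step=4):
    """cost(dx, dy) returns the matching error of candidate offset (dx, dy).
    Evaluate the centre and its eight neighbours at the current step size, move
    the centre to the best point, halve the step, and repeat until the step is 1."""
    cx, cy = 0, 0
    best = cost(cx, cy)
    while step >= 1:
        nx, ny = cx, cy
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                c = cost(cx + dx, cy + dy)
                if c < best:
                    best, nx, ny = c, cx + dx, cy + dy
        cx, cy = nx, ny
        step //= 2
    return cx, cy  # estimated motion vector
```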
There is an existing diamond-shape search (DSS) decision method that uses a rhombus outline as the search pattern; its steps, sketched in code after the figure walk-throughs below, are as follows:
Step 1: Take a search starting point and the hollow diamond outline centered on it, nine points in all. If the most similar point is at the center of the diamond, go to Step 4; if it lies on the diamond outline, go to Step 2.
Step 2: With the most similar point as the new center point, repeat the search using a diamond outline as the search pattern.
Step 3: If the most similar point is still the center point, go to Step 4; if it lies on the diamond outline, repeat Step 2.
Step 4: Shrink the search pattern to a small solid diamond; once the most similar point is found, the search ends.
As shown in Fig. 3, in Step 1 the most similar point is found at the diamond center 1', so Step 4 is performed: the search pattern is shrunk to a small solid diamond, and the search ends once the most similar point 2' is found.
As shown in Fig. 4, in Step 1 the most similar point 1'' lies on the diamond outline, so Step 2 is performed with point 1'' as the center and a diamond outline as the search pattern. In Step 3, because the most similar point 2'' is again on the diamond outline, Step 2 is repeated with a diamond outline centered on it; this time the most similar point remains at the diamond center 2'', so Step 4 is performed: the search pattern is shrunk to a small solid diamond, and the search ends once the most similar point 4'' is found.
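The four DSS steps above can be sketched as follows. The nine-point large diamond, the five-point small diamond, and the cost callback follow the usual conventions for this search and are assumed here; the check that keeps the pattern inside the search window is omitted for brevity.

```python
LARGE_DIAMOND = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
                 (1, 1), (1, -1), (-1, 1), (-1, -1)]        # nine-point hollow diamond
SMALL_DIAMOND = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # small solid diamond

def diamond_search(cost):
    """cost(dx, dy) returns the matching error of candidate offset (dx, dy)."""
    cx, cy = 0, 0
    while True:
        # Steps 1-3: evaluate the large diamond around the current centre.
        dx, dy = min(LARGE_DIAMOND, key=lambda p: cost(cx + p[0], cy + p[1]))
        if (dx, dy) == (0, 0):        # most similar point is the centre -> Step 4
            break
        cx, cy = cx + dx, cy + dy     # otherwise recentre and repeat (Step 2)
    # Step 4: refine with the small solid diamond and stop.
    dx, dy = min(SMALL_DIAMOND, key=lambda p: cost(cx + p[0], cy + p[1]))
    return cx + dx, cy + dy
```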
As shown in Figs. 5 and 6, whether the three-step search or the diamond search decision method is used, the search order today still mainly follows either a raster-pattern search or a spiral-pattern search. A raster-pattern search scans the defined search range row by row, from left to right and top to bottom, looking for the most similar checkpoint; a spiral-pattern search starts from the center point of the defined search range and searches outward point by point in a spiral order.
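For comparison with the distance-weighted order introduced below, the two prior-art visiting orders can be sketched as follows; the square search window of a given radius and the square-spiral walk are illustrative assumptions.

```python
def raster_order(radius):
    """Raster pattern: scan the window row by row, left to right, top to bottom."""
    return [(x, y) for y in range(-radius, radius + 1)
                   for x in range(-radius, radius + 1)]

def spiral_order(radius):
    """Spiral pattern: start at the window centre and walk outward in a square spiral."""
    x = y = 0
    dx, dy = 0, -1
    order = []
    while len(order) < (2 * radius + 1) ** 2:
        if abs(x) <= radius and abs(y) <= radius:
            order.append((x, y))
        if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
            dx, dy = -dy, dx          # turn at the corners of the spiral
        x, y = x + dx, y + dy
    return order
```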
Summary of the Invention
Therefore, an object of the present invention is to provide a method for estimating a motion vector based on a distance-weighted search order that saves the time spent searching and improves efficiency.
Accordingly, the method of the present invention for estimating a motion vector based on a distance-weighted search order takes a predetermined block of a first frame as the reference and searches an adjacent second frame for a block that matches the predetermined block so as to estimate the predetermined block's motion vector. The method is characterized in that a block of the second frame, selected according to a predetermined principle, serves as the starting point, and the search for the block matching the predetermined block expands gradually outward in a distance-weighted manner to estimate the predetermined block's motion vector, the selected block being the center point of the search area.
Since motion vectors usually appear in a radial distribution, searching in distance-weighted order according to the present invention avoids the search time that the aforementioned raster-pattern or spiral-pattern search spends examining every point one by one.
Brief Description of the Drawings
The present invention is described in detail below with reference to the accompanying drawings and embodiments:
Fig. 1 is a schematic diagram illustrating the data structure of a current MPEG-coded video data stream;
Fig. 2 is a schematic diagram illustrating an existing three-step search decision method;
Fig. 3 is a schematic diagram illustrating an existing diamond search decision method;
Fig. 4 is a schematic diagram illustrating an existing diamond search decision method;
Fig. 5 is a schematic diagram illustrating a current raster-pattern search method;
Fig. 6 is a schematic diagram illustrating a current spiral-pattern search method;
Fig. 7 is a circuit block diagram illustrating that the system using the method of the present invention is a video coding device for performing MPEG video coding functions;
Fig. 8 is a schematic diagram illustrating how the method of the present invention for estimating a motion vector based on a distance-weighted search order estimates the motion vector;
Fig. 9 is a schematic diagram illustrating the search order followed by the first preferred embodiment of the method of the present invention under the distance-weighted order rule;
Fig. 10 is a schematic diagram illustrating the search order followed by the second preferred embodiment of the method of the present invention under the distance-weighted order rule.
Detailed Description of the Preferred Embodiments
The foregoing and other technical contents, features, and effects of the present invention will become clear in the following detailed description of three preferred embodiments with reference to the drawings. Before the present invention is described in detail, it should be noted that in the following description similar elements are denoted by the same reference numerals.
As shown in Fig. 7, the system using the method of the present invention is a video coding device 1 for performing the MPEG video compression function; in other embodiments, however, the method may be applied to any processing system that performs a similar video compression function.
The video coding device 1 has a preprocessor 10, a motion estimation unit 11, a motion compensation unit 12, a motion vector encoding unit 13, a texture encoding module 14, a bit-stream composer 15, and a memory 16.
When an original input image 100 is to be fed into the video coding device 1, the preprocessor 10 of the video coding device 1 first defines every macroblock of a given frame and stores it temporarily in the memory 16. The motion estimation unit 11 then operates on the macroblock data of the frames of the input image 100, for example by computing on the corresponding block data of the previous and current frames, to obtain the motion vector data 102 for every block of the whole frame. These are passed to the motion compensation unit 12, which uses the motion vectors to fetch the image macroblock data of the previous or following frame and obtain reference data 104. Subtracting the reference data 104 obtained by the motion compensation unit 12 from the image macroblock data of the input image 100 obtained by the preprocessor 10 then yields residual data 103, which the texture encoding module 14 processes to obtain the compressed texture and the reconstructed reference data.
A discrete cosine transform unit 141 of the texture encoding module 14 applies the discrete cosine transform (DCT) to the pixel data of each block, a frequency-domain conversion unit 142 converts the pixel data from the time domain to the frequency domain, and a quantization unit 143 then applies a quantization step so that many of the DCT coefficients are quantized to zero and the high-frequency part is removed. The data must also pass through an inverse quantization unit 144 and an inverse discrete cosine transform unit 145 for inverse quantization and inverse DCT before being fed back to the motion estimation unit 11. The motion vector encoding unit 13 encodes each motion vector and outputs it to a variable-length encoder 151 of the bit-stream composer 15.
At the same time, an AC/DC prediction unit 146 removes the redundant information repeated among the blocks of the same frame, and a zig-zag scan unit 147 then performs a zig-zag scan to rearrange the quantized DCT coefficients so that the low-frequency coefficients come first and the high-frequency coefficients last. Run-length encoding (RLE) is applied to the zig-zag-scanned DCT coefficients, another variable-length encoder 152 of the bit-stream composer 15 applies variable-length coding (VLC) to both, and the bit-stream composer 15 combines them to complete the output in the MPEG compression format.
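As a side note on the zig-zag scan mentioned here, the conventional pattern visits the 8×8 coefficient block along its anti-diagonals, alternating direction, so that low-frequency coefficients come out first. A minimal sketch, with (row, column) indexing assumed:

```python
def zigzag_order(n=8):
    """Conventional zig-zag visiting order of an n x n block of DCT coefficients."""
    order = []
    for s in range(2 * n - 1):                        # anti-diagonal index row + col
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        order.extend(diag if s % 2 else diag[::-1])   # alternate the direction
    return order
```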
As shown in Figs. 7 and 8, the three preferred embodiments of the method of the present invention for estimating a motion vector based on a distance-weighted search order are all carried out in the motion estimation unit 11 of the video coding device 1; in other embodiments, however, the concept of the present invention may also be applied to the currently common full search, diamond search, three-step search, four-step search, or any other motion estimation method.
The method takes a predetermined block 51 of a first frame 501 (also called the current frame) and searches an adjacent second frame 502 (also called the reference frame) for the block 52 that matches the predetermined block 51 in order to estimate the motion vector 53 of the predetermined block 51. A block of the second frame 502 selected according to a predetermined principle serves as the starting point, and the search for the matching block 52 expands gradually outward in a distance-weighted manner to estimate the motion vector 53 of the predetermined block 51.
The concept of weighted distance in the method of the present invention for estimating a motion vector based on a distance-weighted search order can be explained by Formula 1:
Search weighted distance = H(u, v)    (Formula 1)
where H is a predetermined distance weight function, u is the U-axis coordinate of the offset from the starting point, v is the V-axis coordinate of the offset from the starting point, and the U and V coordinates are a set of bases spanning the plane. The search order starts from the blocks with the smallest weighted distance.
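A minimal sketch of searching in the order defined by Formula 1: every candidate offset inside the window is sorted by its weighted distance H(u, v) and visited smallest-first. The window radius, the cost callback, and the early-exit threshold `good_enough` are assumptions added for illustration; the patent text does not fix a particular stopping rule.

```python
def distance_weighted_order(radius, H):
    """All offsets (u, v) with |u|, |v| <= radius, smallest weight H(u, v) first."""
    offsets = [(u, v) for u in range(-radius, radius + 1)
                      for v in range(-radius, radius + 1)]
    return sorted(offsets, key=lambda p: H(p[0], p[1]))

def distance_weighted_search(cost, radius, H, good_enough):
    """Visit candidates in distance-weighted order; stop as soon as a candidate's
    matching error drops below `good_enough`, otherwise return the overall best.
    Because matching blocks tend to lie near the starting point, the early exit
    is where the search time is saved."""
    best_cost, best_uv = None, (0, 0)
    for u, v in distance_weighted_order(radius, H):
        c = cost(u, v)
        if best_cost is None or c < best_cost:
            best_cost, best_uv = c, (u, v)
        if c <= good_enough:
            break
    return best_uv
```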
As shown in Fig. 9, which illustrates the first preferred embodiment of the method of the present invention, the weighted distance in this embodiment is calculated according to Formula 2, where x is the X-axis coordinate of the offset from the starting point and y is the Y-axis coordinate of the offset from the starting point, and the search starts from the blocks with the smallest weighted search distance.
Weighted search distance = |x| + |y|    (Formula 2)
The search order takes the center point 301 of a search area 201 as the origin and proceeds according to the computed minimum distance: for example, the 4-point diamond around 301 is searched first; if no match is found there, the search spreads outward in diamond rings into the unsearched area, and so on, until the most similar block is found.
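Plugging Formula 2 into an ordering of this kind reproduces the diamond-ring expansion of Fig. 9. A short self-contained sketch follows; the visiting order inside each ring is not specified by the text, so the stable sort is simply left to decide the ties.

```python
def manhattan_order(radius):
    """Offsets sorted by |x| + |y|: the starting point, then the 4-point diamond
    around it, then successively larger diamond rings."""
    offsets = [(x, y) for x in range(-radius, radius + 1)
                      for y in range(-radius, radius + 1)]
    return sorted(offsets, key=lambda p: abs(p[0]) + abs(p[1]))

print(manhattan_order(2)[:5])   # the centre followed by its 4-point diamond
```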
As shown in Fig. 10, which illustrates the second preferred embodiment of the method of the present invention, the weighted distance in this embodiment is calculated according to Formula 3, where x is the X-axis coordinate of the offset from the starting point and y is the Y-axis coordinate of the offset from the starting point, and the search starts from the block with the smallest weighted search distance.
Weighted search distance = √(x² + y²)    (Formula 3)
The search order of the second preferred embodiment is similar to that of the first: it takes the center point 302 of a search area 202 as the origin and proceeds according to the computed minimum distance. For example, point 302 itself is searched first (1), then the four points closest to 302 (2, 3, 4, 5), then, moving gradually outward, the next four points around 302 (6, 7, 8, 9). If no match is found among these nine points, the search spreads further outward to the points of the unsearched area, still beginning with the points closest to 302, first (10, 11, 12, 13) and then (14, 15, 16, 17, 18, 19, 20, 21), and so on, until the most similar block is found.
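Formula 3 orders the same candidates by their straight-line distance from the starting point, which gives exactly the numbering of Fig. 10: the centre (1), the four axis neighbours (2-5), the four diagonals (6-9), and so on outward. A minimal self-contained sketch:

```python
import math

def euclidean_order(radius):
    """Offsets sorted by sqrt(x^2 + y^2), the straight-line distance from the start."""
    offsets = [(x, y) for x in range(-radius, radius + 1)
                      for y in range(-radius, radius + 1)]
    return sorted(offsets, key=lambda p: math.hypot(p[0], p[1]))

print(euclidean_order(2)[:9])   # centre, four axis neighbours, four diagonals
```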
The third preferred embodiment of the present invention converts the various distance values by table lookup, and the weighted distance is calculated according to Formula 4:
Search weighted distance = F(u) + G(v) = F(|x|) + G(|y|)    (Formula 4)
where F(u) and G(v) are obtained by looking them up in a predetermined distance weight function table, x is the x-axis coordinate of the offset from the starting point, y is the y-axis coordinate of the offset from the starting point, and the search starts from the blocks with the smallest weighted distance. Examples of such distance weight function tables are shown in Tables 1 and 2, with F(·) = G(·) in the formula.
Table 1
Table 2
It should be noted that the u and v values in Tables 1 and 2 can be incremented further, that is, the correspondence between the u, v values and F(u) can be extended beyond the values listed above. In addition, the unit by which the u and v values increase in this embodiment need not be an integer point when substituted into the distance weight function table; a non-integer point such as 1/2, 1/4, or 1/8 may also be substituted into the distance weight function table, and all such cases fall within the applicable scope of the present invention.
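A minimal sketch of the table-driven weighting of Formula 4. Since the contents of Tables 1 and 2 are not reproduced in this text, the F values below are invented placeholders used purely for illustration, with G taken equal to F as the embodiment states; any monotonic table, including one indexed at half- or quarter-pel positions, would fit the same pattern.

```python
F_TABLE = {0: 0, 1: 1, 2: 3, 3: 6, 4: 10}   # hypothetical F(u) values; G(v) = F(v)

def table_weight(x, y, table=F_TABLE):
    """Formula 4: weighted distance F(|x|) + G(|y|), looked up from the table."""
    return table[abs(x)] + table[abs(y)]

def table_order(radius, table=F_TABLE):
    """Candidate offsets sorted by the table-based weighted distance
    (radius must not exceed the largest index in the table, here 4)."""
    offsets = [(x, y) for x in range(-radius, radius + 1)
                      for y in range(-radius, radius + 1)]
    return sorted(offsets, key=lambda p: table_weight(p[0], p[1], table))

print(table_order(2)[:5])   # the starting point first, then its nearest neighbours
```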
To summarize, the principle of the method of the present invention for estimating a motion vector based on a distance-weighted search order is that motion vectors usually appear in a radial, inside-out distribution. Compared with the previously adopted raster-pattern search or spiral-pattern search, searching in distance-weighted order according to the present invention can find similar blocks quickly and saves unnecessary search time, making motion vector estimation more efficient.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200510115481 CN1960487B (en) | 2005-11-04 | 2005-11-04 | Method for estimating motion vector based on distance weighted search order |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200510115481 CN1960487B (en) | 2005-11-04 | 2005-11-04 | Method for estimating motion vector based on distance weighted search order |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1960487A CN1960487A (en) | 2007-05-09 |
CN1960487B true CN1960487B (en) | 2010-05-05 |
Family
ID=38071946
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200510115481 Expired - Fee Related CN1960487B (en) | 2005-11-04 | 2005-11-04 | Method for estimating motion vector based on distance weighted search order |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1960487B (en) |
- 2005-11-04: Application CN 200510115481 filed in China, granted as patent CN1960487B (status: not active, Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN1960487A (en) | 2007-05-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7953153B2 (en) | Motion estimation method utilizing modified rhombus pattern search for a succession of frames in digital coding system | |
US7444026B2 (en) | Image processing apparatus and method of motion vector detection in a moving picture, and recording medium used therewith | |
US7764738B2 (en) | Adaptive motion estimation and mode decision apparatus and method for H.264 video codec | |
US10645410B2 (en) | Video decoding apparatus | |
US7864837B2 (en) | Motion estimation method utilizing a distance-weighted search sequence | |
JP2007159155A (en) | Filtering method for eliminating blocking effect, and equipment therefor | |
JP2930092B2 (en) | Image coding device | |
JP2002077629A (en) | Extent determining method of blocked artifacts in digital image | |
JP2000510311A (en) | Method and apparatus for encoding and decoding digitized images | |
US8111753B2 (en) | Video encoding method and video encoder for improving performance | |
CN1960487B (en) | Method for estimating motion vector based on distance weighted search order | |
US9020289B2 (en) | Image processing apparatus and image processing method for compressing image data by combining spatial frequency conversion, quantization, and entropy coding | |
CN106559675A (en) | Semiconductor device | |
US20030152147A1 (en) | Enhanced aperture problem solving method using displaced center quadtree adaptive partitioning | |
JP2006129248A (en) | Image encoding and decoding method and apparatus thereof | |
CN100556146C (en) | Improved dynamic estimation method for diamond search | |
JP2001078199A (en) | Video signal coder | |
US20230245425A1 (en) | Image processing apparatus, image processing method and storage medium | |
JP3232160B2 (en) | Encoding device and method | |
US7580458B2 (en) | Method and apparatus for video coding | |
JPH02122766A (en) | Device and method for compressing picture data and device and method for expanding compression data | |
JP2008193456A (en) | Motion predicting method and device in moving image encoding | |
JP2003179929A (en) | Image decoding apparatus | |
JP4447903B2 (en) | Signal processing apparatus, signal processing method, recording medium, and program | |
Choi et al. | Adaptive image quantization using total variation classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20100505; Termination date: 20161104 |