CN115100236A - Video processing method and device - Google Patents
- Publication number: CN115100236A (application number CN202210828522.5A)
- Authority: CN (China)
- Prior art keywords: frame, coefficient, motion vector, module, sub
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/207—Analysis of motion for motion estimation over a hierarchy of resolutions
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70—Denoising; Smoothing
- G06T7/11—Region-based segmentation
- G06T2207/10016—Video; Image sequence
- G06T2207/20021—Dividing image into blocks, subimages or windows
Landscapes
- Engineering & Computer Science; Physics & Mathematics; General Physics & Mathematics; Theoretical Computer Science; Computer Vision & Pattern Recognition; Multimedia; Image Analysis; Television Systems; Studio Devices
Abstract
Description
Technical Field
This application belongs to the technical field of video processing, and in particular relates to a video processing method and an apparatus therefor.
Background
Image alignment, also known as frame alignment or image registration, is a technique of warping and rotating one image so that it aligns with another image. Image alignment is a key technique in many video processing scenarios, such as video denoising, motion detection, and portrait segmentation.
The optical flow method is usually used for image alignment. Optical flow infers the motion information of objects by detecting how the intensity of image pixels changes over time. Applying optical flow to image alignment requires two conditions to hold: the brightness of an object must not change as it moves, and the motion must be small. In real scenes both conditions are difficult to satisfy, so image alignment using the optical flow method has low accuracy.
Summary of the Invention
The purpose of the embodiments of this application is to provide a video processing method and an apparatus therefor that can improve the accuracy and robustness of image alignment.
In a first aspect, an embodiment of this application provides a video processing method. The method includes: acquiring a first frame and a second frame of a target video, where the second frame is the previous frame adjacent to the first frame; determining a first motion vector and a weighting ratio of the first frame according to the second frame; adjusting the first motion vector according to the weighting ratio to obtain a second motion vector of the first frame; and aligning the first frame with the second frame based on the second motion vector.
In a second aspect, an embodiment of this application provides a video processing apparatus. The apparatus includes: a first acquiring module, configured to acquire a first frame and a second frame of a target video, where the second frame is the previous frame adjacent to the first frame; a first determining module, configured to determine a first motion vector and a weighting ratio of the first frame according to the second frame; a first weighting module, configured to adjust the first motion vector according to the weighting ratio to obtain a second motion vector of the first frame; and a first alignment module, configured to align the first frame with the second frame based on the second motion vector.
In a third aspect, an embodiment of this application provides an electronic device. The electronic device includes a processor and a memory, the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the video processing method according to the first aspect.
In a fourth aspect, an embodiment of this application provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the video processing method according to the first aspect.
In a fifth aspect, an embodiment of this application provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the video processing method according to the first aspect.
In a sixth aspect, an embodiment of this application provides a computer program product. The program product is stored in a storage medium and is executed by at least one processor to implement the video processing method according to the first aspect.
In the embodiments of this application, the motion vector between two adjacent frames is first determined from the two frames themselves, establishing their relationship in the spatial dimension and ensuring the spatial robustness of frame alignment. The motion vector is then adjusted according to the weighting ratio that the previous frame contributes to the current frame, so that consecutive frames remain smooth in the temporal dimension, improving the temporal robustness of frame alignment. Combining the temporal dimension with the spatial dimension also improves the accuracy of frame alignment. In addition, the technical solution of this embodiment has a simple pipeline and low data throughput, which helps reduce the power consumption of the system and broadens the range of application scenarios.
Brief Description of the Drawings
Fig. 1 is a flowchart of the video processing method provided by an embodiment of this application;
Fig. 2 is a schematic diagram of sub-blocks in the video processing method provided by an embodiment of this application;
Fig. 3 is a schematic structural diagram of the video processing apparatus provided by an embodiment of this application;
Fig. 4 is a first schematic structural diagram of the electronic device provided by an embodiment of this application;
Fig. 5 is a second schematic structural diagram of the electronic device provided by an embodiment of this application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of this application will be described clearly below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art fall within the protection scope of this application.
The terms "first", "second", and the like in the description and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so termed are interchangeable where appropriate, so that the embodiments of this application can be practiced in orders other than those illustrated or described here. The objects distinguished by "first", "second", and so on are usually of one type, and their number is not limited; for example, there may be one first object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The video processing method, video processing apparatus, and electronic device provided by the embodiments of this application are described in detail below with reference to the accompanying drawings, through specific embodiments and their application scenarios.
The embodiments of this application first provide a video processing method. The method can be applied to electronic devices such as mobile phones, tablet computers, notebook computers, wearable electronic devices (such as smart watches), augmented reality (AR)/virtual reality (VR) devices, and in-vehicle devices; the embodiments of this application place no limitation on this.
Fig. 1 shows a flowchart of the video processing method provided by an embodiment of this application. As shown in Fig. 1, the video processing method includes the following steps:
Step 10: Acquire a first frame and a second frame of a target video, where the second frame is the previous frame adjacent to the first frame.
An image sensor can acquire images at a fixed period, for example one frame every 30 milliseconds, and multiple frames form a video. The first frame refers to the currently acquired image, i.e. the image to be processed, and the second frame refers to the image acquired before the first frame. For example, if the image sensor generates one frame every 30 milliseconds and 30 milliseconds is taken as one time step, the first frame may be the image at time step 2 and the second frame the image at time step 1.
Step 20: Determine a first motion vector and a weighting ratio of the first frame according to the second frame.
A motion vector is the trajectory of an image, or of a region in the image, relative to a reference frame, i.e. the direction and magnitude of its spatial movement. In this embodiment, the first motion vector is the motion vector of the first frame relative to the second frame.
Exemplarily, the first motion vector is determined as follows. First, both the first frame and the second frame are divided into multiple sub-blocks, say M sub-blocks, where M is a positive integer. The first frame and the second frame have the same size: for frames of size W×H, where W is the image width and H is the image height, each image can be divided into m×n sub-blocks, with M = m×n, so that each sub-block has size W/m × H/n. For example, if the second frame and the first frame are each divided into 2×2 sub-blocks, as shown in Fig. 2, the second frame 21 is divided into sub-block 1, sub-block 2, sub-block 3, and sub-block 4, and the first frame 22 is likewise divided into four sub-blocks 1 through 4.
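The partitioning step above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name and the NumPy array representation of a frame are assumptions.

```python
import numpy as np

def split_into_subblocks(frame, m, n):
    """Split an H x W grayscale frame into m x n equally sized sub-blocks.

    Assumes H is divisible by n and W is divisible by m, matching the
    W/m x H/n sub-block size described above. Returns the sub-blocks
    in row-major order (M = m * n of them).
    """
    H, W = frame.shape
    bh, bw = H // n, W // m  # sub-block height and width
    return [
        frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
        for r in range(n)
        for c in range(m)
    ]

# A 4x4 frame split into 2x2 sub-blocks yields four 2x2 blocks.
frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
blocks = split_into_subblocks(frame, m=2, n=2)
```

Both frames would be split with the same `m` and `n`, so corresponding sub-blocks cover the same region of the two images.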
Then, motion estimation is performed on each pair of corresponding sub-blocks of the first frame and the second frame to determine a first local vector of each sub-block of the first frame relative to the corresponding sub-block of the second frame. A local vector is an estimate of the motion vector of a sub-block and can include a horizontal estimate and a vertical estimate. Exemplarily, the first frame and the second frame may first be converted to grayscale, turning the value of each pixel into a gray value, which reduces the amount of computation and improves efficiency. For each sub-block of the first frame and the second frame, the pixel (gray) values of every column of the sub-block are accumulated, giving a one-dimensional array formed by the column sums. For example, if each sub-block has 10 columns, the array of column sums has 10 elements. This array can be written as:
Verpre1(i) = pre1(1, i) + pre1(2, i) + … + pre1(blockH, i), i = 1, 2, …, blockW (1)
For ease of distinction, sub-block 1 of the second frame 21 is denoted pre1, and sub-block 1 of the first frame 22 is denoted cur1. In the formula above, Verpre1 is the column array of the first sub-block of the second frame, and Verpre1(i) is its i-th element; blockH is the height of each sub-block, i.e. the total number of rows; blockW is the width of each sub-block, i.e. the total number of columns; pre1(j, i) is the pixel at row j, column i of the first sub-block of the second frame. The array thus has blockW elements in total.
Likewise, accumulating the pixel values of every row of the first sub-block of the second frame gives another one-dimensional array, formed by the row sums. For ease of distinction, the array Ver formed from the column sums above is called the column array, and the array formed from the row sums is called the row array. The row array of the first sub-block of the second frame can be written as:
Horpre1(i) = pre1(i, 1) + pre1(i, 2) + … + pre1(i, blockW), i = 1, 2, …, blockH (2)
where Hor denotes the row array. Formulas (1) and (2) likewise determine the row and column arrays of the other sub-blocks of the second frame, i.e. pre2, pre3, and pre4, and of every sub-block of the first frame. The row and column arrays of each sub-block summarize the horizontal and vertical characteristics of the different regions of the first and second frames. After this processing, m×n×2×2 one-dimensional arrays are obtained; for the 2×2 division above, that is 8 arrays for the first frame and 8 arrays for the second frame.
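The column and row projections of formulas (1) and (2) amount to summing each sub-block along one axis. A minimal sketch (function and variable names are hypothetical):

```python
import numpy as np

def column_array(block):
    """Ver of formula (1): the sum of each column's gray values.
    The result has blockW elements, one per column."""
    return block.sum(axis=0)

def row_array(block):
    """Hor of formula (2): the sum of each row's gray values.
    The result has blockH elements, one per row."""
    return block.sum(axis=1)

# A 2x3 sub-block: blockH = 2 rows, blockW = 3 columns.
pre1 = np.array([[1, 2, 3],
                 [4, 5, 6]])
ver = column_array(pre1)  # 3 column sums
hor = row_array(pre1)     # 2 row sums
```

Collapsing each sub-block to two 1-D projections is what keeps the later offset search cheap: it compares short arrays instead of full 2-D blocks.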
Optionally, the estimated horizontal and vertical motion between the second frame and the first frame is computed by shifted subtraction. Shifting means moving the array of the second frame or of the first frame by a certain offset before subtracting it from the other. The offset range is preset, for example [-5, 5] or [-9, 9]; this embodiment places no particular limitation on it. Exemplarily, different offset ranges may be set for the horizontal and vertical dimensions. Once the range is determined, the offset within it that minimizes the accumulated difference between the second frame and the first frame is searched for, and that offset is taken as the first local vector of the sub-block. For example, at offset 0, for the first sub-block pre1 of the second frame and the first sub-block cur1 of the first frame, the differences between corresponding elements of the two arrays are computed in turn (the first element of pre1 minus the first element of cur1, the second minus the second, the third minus the third, and so on), and the absolute values of these differences are summed, giving the accumulated horizontal or vertical difference between pre1 and cur1 at offset 0.
After the difference for every offset within the range has been computed, the offset with the smallest difference is determined; this value is the first local vector of pre1 and cur1. For example, for the first sub-block of the first frame, the horizontal computation is expressed as follows:
difVerblock1(k) = |Vercur1(1) − Verpre1(1+k)| + |Vercur1(2) − Verpre1(2+k)| + … (3)
where difVerblock1(k) is the difference between the first sub-block block1 of the first frame and the first sub-block of the second frame at offset k, accumulated over the elements for which both arrays are defined; k takes values within the offset range, for example k = -10, -9, …, 0, 1, …, 9, 10. The difference difVer is evaluated for each value of k in the range in turn, and the k that minimizes difVer is taken as the horizontal estimate of the sub-block, denoted Xblock1. In the same way, the vertical difference difHor of the first sub-block block1 is computed, giving the estimate Yblock1 that minimizes difHor. The first local vector of the first sub-block block1 is then (Xblock1, Yblock1).
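The offset search of formula (3) can be sketched in plain Python as below. This is a sketch under assumptions: only overlapping elements are compared, and the accumulated difference is normalized by the overlap count so that offsets with fewer overlapping elements compare fairly; the patent does not specify how the array boundaries are handled.

```python
def best_offset(cur, pre, search_range):
    """Return the offset k in [-search_range, search_range] minimizing
    the accumulated absolute difference between the projection arrays
    cur and pre, in the spirit of formula (3)."""
    n = len(cur)
    best_k, best_diff = 0, float("inf")
    for k in range(-search_range, search_range + 1):
        diff, count = 0.0, 0
        for i in range(n):
            if 0 <= i + k < n:  # compare only where both arrays exist
                diff += abs(cur[i] - pre[i + k])
                count += 1
        diff /= count  # normalize for the shrinking overlap at large |k|
        if diff < best_diff:
            best_k, best_diff = k, diff
    return best_k

# pre is cur shifted right by 2 columns; the search recovers k = 2.
pre = [0, 0, 9, 9, 9, 0, 0, 0]
cur = [9, 9, 9, 0, 0, 0, 0, 0]
k = best_offset(cur, pre, search_range=5)
```

Running the same search on the row arrays with the vertical range gives the other component of the local vector.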
In this embodiment of the application, applying the processing of formula (3) to every sub-block of the first frame yields the first local vector of each sub-block. Spatial filtering is then applied to the local vectors of the first frame to improve robustness and smoothness in the spatial dimension.
For the j-th sub-block of the first frame, the first local vector of the j-th sub-block is mean-filtered together with those of its adjacent sub-blocks to obtain the second local vector of the j-th sub-block, where 0 ≤ j < M when counting from 0 (j may also be counted from 1). If the adjacent sub-blocks of the j-th sub-block are the (j+1)-th and (j-1)-th sub-blocks, the average of the first local vectors of sub-blocks j-1, j, and j+1 is taken as the second local vector of the j-th sub-block. Exemplarily, the 8 sub-blocks surrounding the j-th sub-block are taken as its adjacent sub-blocks, the average of the first local vectors of these 9 sub-blocks (the j-th sub-block included) is computed, and the result is the second local vector of the j-th sub-block. The second local vectors together form the first motion vector of the first frame.
Exemplarily, the first motion vector of the first frame can be a vector map composed of the second local vector of every sub-block; the width and height of the map are the number of sub-blocks in the horizontal and vertical directions, i.e. m and n. Referring to Fig. 2, when the image is divided into 2×2 sub-blocks, the vector map of the first frame can be written as MVmap1(x, y), x = 1, 2; y = 1, 2. Specifically: MVmap1(1,1) = {Xblock1, Yblock1}, MVmap1(1,2) = {Xblock2, Yblock2}, MVmap1(2,1) = {Xblock3, Yblock3}, MVmap1(2,2) = {Xblock4, Yblock4}. Alternatively, the second local vectors of the sub-blocks of the first frame can be arranged as a one-dimensional vector serving as the first motion vector: for example, the first element of the one-dimensional vector MVmap1 is MVmap1(1) = {Xblock1, Yblock1}, the second element is MVmap1(2) = {Xblock2, Yblock2}, and so on.
In this embodiment, the local vectors of the first frame are first estimated, and mean filtering is then applied to them; this enhances the smoothness of the local vectors and ensures their overall consistency.
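The 3×3 mean filtering of the local vectors can be sketched as follows. This is a sketch; the border policy (averaging only the neighbors that exist) is an assumption the patent does not spell out.

```python
def mean_filter_vectors(mv, rows, cols):
    """3x3 mean filter over a row-major grid of (x, y) local vectors.
    At the grid borders, only the neighbors that exist are averaged."""
    out = []
    for r in range(rows):
        for c in range(cols):
            xs, ys = [], []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        x, y = mv[rr * cols + cc]
                        xs.append(x)
                        ys.append(y)
            out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out

# A single outlier (9, 9) in an otherwise-zero 3x3 field is pulled
# back toward its neighbors by the filter.
field = [(0, 0)] * 9
field[4] = (9, 9)
smoothed = mean_filter_vectors(field, rows=3, cols=3)
```

The filter suppresses isolated mismatched sub-blocks, which is exactly the spatial-consistency effect the paragraph above describes.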
Step 30: Adjust the first motion vector according to the weighting ratio to obtain a second motion vector of the first frame.
Understandably, the first motion vector of the first frame is determined from the second frame, and the first motion vector of the second frame can likewise be determined from the frame before the second frame. That is, for every pair of adjacent frames in the video, the first motion vector of the later frame can be determined. Furthermore, the weighting ratio of the first frame can be determined from the first motion vector of the second frame, and the first motion vector of the first frame is then processed according to this weighting ratio. Similarly, the weighting ratio of the second frame can be determined from the frames before it.
Exemplarily, the weighting ratio may include two coefficients, a first coefficient and a second coefficient. Counting the horizontal and vertical dimensions, the weighting ratio includes a first and a second coefficient for the horizontal dimension and a first and a second coefficient for the vertical dimension. The weighting ratio of the first frame is determined from the weighting ratio of the second frame. Blending frames from earlier moments into the motion vector result of the first frame through the weighting ratio improves the temporal robustness of the motion vector estimate of the first frame.
Specifically, a first candidate coefficient of the first frame is first determined from a first target parameter and the first coefficient in the weighting ratio of the second frame. If the second frame is the starting frame of the video (frame 0 when counting from 0), the weighting ratio of frame 0 is 0, i.e. both the first coefficient and the second coefficient are 0. The first target parameter introduces the weight. Specifically, for the horizontal or vertical dimension, the first coefficient of the second frame is first updated based on the first target parameter, as shown in the following formula:
prebx(i) = prebx(i) + paramx1 (4)
where prebx(i) is the first coefficient of the horizontal dimension of the i-th sub-block of the second frame, with i ranging over the sub-blocks (1 ≤ i ≤ m×n), and paramx1 denotes the first target parameter. That is, the updated value is the sum of the first target parameter and the original first coefficient of the second frame. The updated first coefficient prebx is then normalized based on a second target parameter to obtain the first candidate coefficient of the first frame. The normalization can be expressed as:
tmp1(i) = prebx(i) / (prebx(i) + paramx2) (5)
where tmp1(i) is the normalized first candidate coefficient of the horizontal dimension of the i-th sub-block, and paramx2 denotes the second target parameter. This formula gives the first candidate coefficient of the horizontal dimension of the first frame. Likewise, applying formulas (4) and (5) to the vertical dimension gives the first candidate coefficient of the vertical dimension of the first frame. Exemplarily, the first and second target parameters of the vertical dimension may be the same as or different from those of the horizontal dimension; this embodiment places no particular limitation on this.
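Formulas (4) and (5) together behave like a gain that starts at 0 for the first frame and grows toward 1 as frames accumulate, so later frames lean more on history. A numeric sketch (the parameter values are arbitrary illustrations, not values from the patent):

```python
def update_gain(pre_b, paramx1, paramx2):
    """One step of formulas (4)-(5): bump the first coefficient by the
    first target parameter, then normalize it into a gain in [0, 1)."""
    pre_b = pre_b + paramx1           # formula (4)
    tmp1 = pre_b / (pre_b + paramx2)  # formula (5)
    return pre_b, tmp1

# Starting from 0 (the weighting ratio of the video's starting frame),
# the gain grows monotonically across frames.
b = 0.0
gains = []
for _ in range(3):
    b, g = update_gain(b, paramx1=1.0, paramx2=1.0)
    gains.append(g)
```

With both parameters equal to 1, the gain runs 1/2, 2/3, 3/4, …, i.e. the weighting of history increases frame by frame.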
Next, a second candidate coefficient of the first frame can be determined from the second coefficient of the second frame and the first motion vector of the first frame. Exemplarily, the first-order difference between the second coefficient of the second frame and the first motion vector of the first frame is determined, and the second candidate coefficient of the first frame is obtained from it. The formula is:
tmp2(i) = MVmap1(i) − prexa(i) (6)
where tmp2(i) is the second candidate coefficient of the first frame, and prexa(i) is the second coefficient of the horizontal dimension of the second frame. If the second frame is the starting frame of the video, prexa(i) = 0. MVmap1(i) denotes the i-th element of the first motion vector of the first frame, with 1 ≤ i ≤ m×n. The first-order difference between each element of the first motion vector of the first frame and the corresponding second coefficient of the second frame is taken as the second candidate coefficient of the first frame.
The weighting ratio of the first frame is then determined based on its first candidate coefficient and second candidate coefficient. Exemplarily, weighting the first candidate coefficient and the second candidate coefficient of the first frame gives the weighting ratio of the first frame. Specifically, for the horizontal dimension, the first coefficient of the horizontal dimension of the first frame is determined based on the first candidate coefficient of the horizontal dimension of the first frame:
curxb(i) = (1 − tmp1(i)) * prebx(i) (7)
where curxb(i) is the first coefficient of the horizontal dimension of the i-th sub-block of the first frame. The first coefficient of the first frame is determined from the first candidate coefficient of the first frame and the first coefficient of the second frame, i.e. the first coefficient as updated in formula (4). In this way, the first coefficient of the second frame is carried into the first coefficient of the first frame at a certain proportion, strengthening the correlation along the time axis and thus improving the smoothness between the first frame and the previous frame.
The second coefficient of the horizontal dimension of the first frame is calculated according to the following formula:
curxa(i) = prexa(i) + tmp1(i) * tmp2(i) (8)
where curxa(i) is the second coefficient of the horizontal dimension of the first frame. Blending the first motion vector of the first frame into the weighting ratio of the first frame at a certain proportion, and then continuing the weighting for the next frame, preserves the smoothness between the first frame and the next frame, enhancing the temporal robustness of the video. Likewise, for the vertical dimension, the same weighting of the first and second candidate coefficients of the vertical dimension of the first frame gives the coefficients of the vertical dimension of the first frame.
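Formulas (4) through (8) together form a per-sub-block recursive filter: the gain tmp1 grows as frames accumulate, and the second coefficient curxa tracks the incoming motion values. The sketch below reads formula (7) as curb = (1 − tmp1) · prebx, consistent with the surrounding text; the parameter values are illustrative only, not values from the patent.

```python
def temporal_update(state, mv, paramx1, paramx2):
    """One per-sub-block step of formulas (4)-(8) for one dimension.
    state = (b, a): first and second coefficients carried over from the
    previous frame; mv: this frame's first-motion-vector component."""
    pre_b, pre_a = state
    pre_b = pre_b + paramx1           # (4) bump the first coefficient
    tmp1 = pre_b / (pre_b + paramx2)  # (5) normalize into a gain
    tmp2 = mv - pre_a                 # (6) residual vs. smoothed value
    cur_b = (1 - tmp1) * pre_b        # (7) new first coefficient
    cur_a = pre_a + tmp1 * tmp2       # (8) new second coefficient
    return (cur_b, cur_a)

# Feeding a constant motion of 4.0 frame after frame: the smoothed
# second coefficient converges toward that motion.
state = (0.0, 0.0)  # starting frame: both coefficients are 0
for _ in range(20):
    state = temporal_update(state, mv=4.0, paramx1=1.0, paramx2=1.0)
```

The converged second coefficient is what the next step blends back into the motion vector, so sudden per-frame estimation errors are damped rather than passed straight through.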
On the basis of the weighting ratio of the first frame, the previously determined first motion vector is adjusted to obtain the second motion vector of the first frame. Exemplarily, a first component can be determined from the third target parameter and the above weighting ratio of the first frame, a second component can be determined from the fourth target parameter and the first motion vector of the first frame, and the two components can then be combined to obtain the second motion vector. In this embodiment, the third target parameter is weighted by the weighting ratio, and the result is fused with a certain proportion of the first motion vector to obtain the second motion vector. Adjusting the motion vector by a certain proportion of the target parameters assists motion-vector estimation and helps improve the accuracy of motion estimation. For example, the second motion vector of the first frame may be:
MVmap2(i) = pre_xa(i) * paramx3 + MVmap1_x(i) * paramx4    (9)
MVmap1_x(i) denotes the horizontal value of the first motion vector of the first frame; the horizontal value of the second motion vector MVmap2(i) of the first frame can be calculated according to this formula. Here paramx3 denotes the third target parameter of the horizontal dimension, and the result of weighting it by the second coefficient of the first frame is the first component, i.e. pre_xa(i) * paramx3. paramx4 is the fourth target parameter of the horizontal dimension, and the result of using it to weight the horizontal value of the first motion vector is the second component, i.e. MVmap1_x(i) * paramx4. The calculation for the vertical dimension is the same as for the horizontal dimension and is not repeated here.
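Equation (9) is then a per-element fusion of the two components. A minimal sketch, assuming scalar target parameters and flat per-sub-block lists (names follow the notation above):

```python
def second_motion_vector(pre_xa, mvmap1_x, paramx3, paramx4):
    """Equation (9): the first component weights the second coefficient by
    the third target parameter; the second component scales the first
    motion vector by the fourth target parameter; their sum gives the
    horizontal value of the second motion vector."""
    return [pre_xa[i] * paramx3 + mvmap1_x[i] * paramx4
            for i in range(len(mvmap1_x))]
```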
With continued reference to FIG. 1, in step 40, the first frame is aligned with the second frame based on the second motion vector.
Each point in the first frame can be moved according to the second motion vector to obtain an image with the same spatial perspective as the second frame. Every frame of the captured video can be aligned in this way, and noise reduction can then be applied to the aligned video to obtain a video of better image quality.
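The per-point alignment can be sketched as follows. This is a minimal illustration implemented as a nearest-neighbour sampling warp with an out-of-range fallback; the patent does not fix the sampling scheme, so those choices are assumptions:

```python
def align_frame(frame, mv_x, mv_y):
    """Move every point of `frame` according to a per-pixel motion vector.

    frame, mv_x, mv_y are 2-D lists of equal size; each output pixel is
    sampled from the position shifted by the motion vector, falling back
    to the original pixel when the shifted position leaves the frame.
    """
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy = y + int(round(mv_y[y][x]))
            sx = x + int(round(mv_x[y][x]))
            out[y][x] = frame[sy][sx] if 0 <= sy < h and 0 <= sx < w else frame[y][x]
    return out
```

With an all-zero motion vector the warp is the identity, which gives a quick sanity check.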
Exemplarily, after the second motion vector is obtained, it can be further optimized in the spatial domain. Specifically, the average value of the second motion vector of the first frame is determined; the target vector value is then removed from the second motion vector based on this average, yielding the third motion vector. For the horizontal dimension, the average is given by the following formula:

mean_x = (1 / (m*n)) * Σ_i MVmap2_x(i)    (10)

where MVmap2_x(i) is the horizontal value of the i-th element of the second motion vector and m*n is the total number of sub-blocks. By computing, for each value in the second motion vector, its squared deviation from this average (referred to here as the variance), the value with the largest variance, namely the target vector value, can be determined. Again taking the horizontal dimension as an example, the variance between the horizontal average and each horizontal value of the second motion vector is calculated as:
x(i) = (MVmap2_x(i) - mean_x)^2    (11)
x(i) is the variance of the i-th element of the second motion vector. The horizontal value of the element with the largest variance is the target vector value. After the variance of every element is obtained, the target vector value is replaced with the horizontal average mean_x, completing the processing of the horizontal dimension. The vertical dimension is then processed in the same way. Replacing the target vector value with the average removes it and yields a smoother motion vector, namely the third motion vector. The first frame is then aligned with the previous frame according to the third motion vector.
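The spatial refinement above, computing the mean, finding the element with the largest squared deviation per equation (11), and replacing it with the mean, can be sketched for one dimension as follows (a minimal illustration; breaking ties by the first occurrence is an assumption):

```python
def smooth_motion_vector(mv):
    """Replace the motion-vector element farthest from the mean with the
    mean itself, yielding one dimension of the smoother third motion vector."""
    mean = sum(mv) / len(mv)
    deviations = [(v - mean) ** 2 for v in mv]   # equation (11)
    worst = deviations.index(max(deviations))    # index of the target vector value
    out = list(mv)
    out[worst] = mean
    return out
```

Running the same routine on the vertical values completes the refinement for both dimensions.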
The video processing method provided by this embodiment is simple in procedure and can improve the energy-efficiency ratio of software and hardware. Performing motion estimation on video frames in both space and time improves the accuracy of the motion vectors as well as their temporal and spatial smoothness.
Further, the video processing method provided by the embodiments of the present application may be executed by a video processing apparatus. The video processing apparatus provided by the embodiments of the present application is described below, taking as an example the video processing apparatus performing the video processing method.
As shown in FIG. 3, the video processing apparatus 30 provided by the embodiment of the present application may include a first acquisition module 31, a first determination module 32, a first weighting module 33 and a first alignment module 34. Specifically, the first acquisition module 31 is configured to acquire a first frame and a second frame of a target video, the second frame being the previous frame adjacent to the first frame; the first determination module 32 is configured to determine a first motion vector and a weighting ratio of the first frame according to the second frame; the first weighting module 33 is configured to adjust the first motion vector according to the weighting ratio to obtain a second motion vector of the first frame; and the first alignment module 34 is configured to align the first frame with the second frame based on the second motion vector.
The video processing apparatus provided by this embodiment determines the motion vector between two adjacent frames, establishing their relationship in the spatial dimension, and then adjusts that motion vector according to the weighting ratio of the previous frame relative to the next frame, so that adjacent frames remain smooth in the temporal dimension. Combining the temporal dimension with the spatial dimension improves the accuracy and robustness of frame alignment. Moreover, the technical solution of this embodiment has a simple procedure and low data throughput, which helps reduce the power-consumption pressure on the system.
In the embodiment of the present application, the first determination module 32 includes: a first division module, configured to divide each of the first frame and the second frame into M sub-blocks, M being a positive integer; a second determination module, configured to determine a first local vector of each sub-block in the first frame relative to the corresponding sub-block in the second frame; a first filtering module, configured to apply, for the j-th sub-block in the first frame, mean filtering to the first local vector of the j-th sub-block according to its adjacent sub-blocks to obtain a second local vector of the j-th sub-block, 0 ≤ j ≤ M; and a third determination module, configured to determine the first motion vector from the second local vector of each sub-block in the first frame.
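The mean filtering applied by the first filtering module can be sketched as follows. This is a minimal illustration: the 3x3 neighbourhood, the row-major flat layout of the m x n sub-block grid, and averaging only over in-bounds neighbours are all assumptions not fixed by the text:

```python
def mean_filter_local_vectors(local, m, n):
    """Mean-filter one dimension of each sub-block's first local vector
    over its available neighbours in an m x n sub-block grid, producing
    the corresponding second local vectors."""
    out = []
    for r in range(m):
        for c in range(n):
            acc, cnt = 0.0, 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < m and 0 <= cc < n:
                        acc += local[rr * n + cc]
                        cnt += 1
            out.append(acc / cnt)
    return out
```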
Exemplarily, the weighting ratio includes a first coefficient and a second coefficient, and the first weighting module 33 specifically includes: a first coefficient determination module, configured to determine the first candidate coefficient of the first frame based on the first target parameter and the first coefficient of the second frame; a second coefficient determination module, configured to determine the second candidate coefficient of the first frame based on the second coefficient of the second frame and the first motion vector of the first frame; and a second weighting module, configured to determine the weighting ratio of the first frame based on the first candidate coefficient and the second candidate coefficient.
In the embodiment of the present application, the first coefficient determination module includes a first update module, configured to update the first coefficient of the second frame based on the first target parameter, and a first normalization module, configured to normalize the updated first coefficient based on the second target parameter to obtain the first candidate coefficient of the first frame.
In the embodiment of the present application, the second coefficient determination module is specifically configured to determine the first-order variance between the second coefficient of the second frame and the first motion vector of the first frame, obtaining the second candidate coefficient of the first frame.
In the embodiment of the present application, the first weighting module 33 specifically includes: a first component determination module, configured to determine the first component based on the third target parameter and the weighting ratio of the first frame; a second component determination module, configured to determine the second component based on the fourth target parameter and the first motion vector of the first frame; and a first obtaining module, configured to obtain the second motion vector of the first frame based on the first component and the second component.
In the embodiment of the present application, the first alignment module includes: a first averaging module, configured to determine the average value of the second motion vector of the first frame; a first elimination module, configured to eliminate the target vector from the second motion vector of the first frame based on the average value, obtaining a third motion vector of the first frame; and a second alignment module, configured to align the first frame with the second frame based on the third motion vector.
The video processing apparatus in the embodiments of the present application may be an electronic device or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. Exemplarily, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine or the like, which is not specifically limited in the embodiments of the present application.
The video processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video processing apparatus provided by the embodiments of the present application can implement each process implemented by the method embodiment of FIG. 1; to avoid repetition, details are not repeated here.
Optionally, as shown in FIG. 4, an embodiment of the present application further provides an electronic device 400 including a processor 401 and a memory 402. The memory 402 stores a program or instructions executable on the processor 401; when the program or instructions are executed by the processor 401, the steps of the above video processing method embodiment are implemented and the same technical effects are achieved, which are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and non-mobile electronic devices described above.
FIG. 5 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 500 includes, but is not limited to, components such as a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509 and a processor 510.
Those skilled in the art will understand that the electronic device 500 may further include a power supply (such as a battery) supplying power to the components; the power supply may be logically connected to the processor 510 through a power management system, so that functions such as charge management, discharge management and power-consumption management are implemented through the power management system. The structure of the electronic device shown in FIG. 5 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not repeated here.
The processor 510 may be configured to acquire a first frame and a second frame of a target video, the second frame being the previous frame adjacent to the first frame; determine a first motion vector and a weighting ratio of the first frame according to the second frame; adjust the first motion vector according to the weighting ratio to obtain a second motion vector of the first frame; and align the first frame with the second frame based on the second motion vector.
The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processing unit 5041 processes image data of static pictures or videos obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode or the like. The user input unit 507 includes at least one of a touch panel 5071 and other input devices 5072. The touch panel 5071, also called a touch screen, may include two parts: a touch detection apparatus and a touch controller. The other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not repeated here.
The memory 509 may be used to store software programs and various data. The memory 509 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, application programs or instructions required by at least one function (such as a sound playback function and an image playback function), and the like. In addition, the memory 509 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM) or flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM) or direct Rambus RAM (DRRAM). The memory 509 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 510 may include one or more processing units; optionally, the processor 510 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs and the like, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It can be understood that the modem processor may alternatively not be integrated into the processor 510.
Embodiments of the present application further provide a readable storage medium on which a program or instructions are stored; when the program or instructions are executed by a processor, each process of the above video processing method embodiment is implemented and the same technical effects can be achieved, which are not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
An embodiment of the present application further provides a chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement each process of the above video processing method embodiment and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip or the like.
Embodiments of the present application provide a computer program product stored in a storage medium; the program product is executed by at least one processor to implement each process of the above video processing method embodiment and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that, herein, the terms "comprising", "including" or any other variation thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article or apparatus that includes the element. Furthermore, it should be pointed out that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific embodiments described above, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art can devise many other forms without departing from the purpose of the present application and the scope protected by the claims, all of which fall within the protection of the present application.
Claims (10)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202210828522.5A (granted as CN115100236B) | 2022-07-13 | 2022-07-13 | Video processing method and device |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN115100236A | 2022-09-23 |
| CN115100236B | 2025-06-27 |
Family
ID=83296002
Legal Events

| Date | Code | Title |
| --- | --- | --- |
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |