CN106991650A - Method and apparatus for image deblurring - Google Patents
- Publication number
- CN106991650A (application CN201610039224.2A)
- Authority
- CN
- China
- Prior art keywords
- dvs
- deblurring
- picture
- image
- result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/409—Edge or detail enhancement; Noise or error suppression
- H04N1/4092—Edge or detail enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The present application discloses an image deblurring method and device, comprising: acquiring a blurred picture and the set of DVS events within its exposure time; taking the blurred picture as the current blurred picture; and deblurring the blurred picture according to the DVS event set. Applying the present application improves the effect of image deblurring.
Description
Technical Field
The present application relates to image processing technology, and in particular to a method and apparatus for image deblurring.
Background Art
When a camera takes a picture, the image may be blurred for various reasons, such as hand shake. To remove or reduce the blur caused by camera motion, the blurred picture is usually deblurred to obtain a clearer picture.
At present, existing deblurring methods achieve a good result on only a small number of images. The reason is that the prior art relies on deconvolution, and deconvolution-based deblurring is inherently an ill-posed problem: neither the sharp image nor the cause of its blurring is known in advance. Precisely because the cause of the blur is unknown, many methods make assumptions about the sharp image to be solved for; when the actual image does not match these assumptions, the deblurring method fails.
Summary of the Invention
The present application provides a DVS-based image deblurring method and apparatus, which can improve the effect of image deblurring.
To achieve the above object, the present application adopts the following technical solutions:
An image deblurring method, comprising:
acquiring a blurred picture to be processed and the set of DVS events recorded by a DVS sensor within the exposure time of the blurred picture; and
deblurring the blurred picture according to the DVS event set.
Preferably, deblurring the blurred picture according to the DVS event set comprises:
estimating a DVS edge estimation image from the DVS event set, and performing the deblurring according to the DVS edge estimation image.
Preferably, the deblurring according to the DVS edge estimation image is performed in one of the following ways:
aligning the blurred picture with the DVS edge estimation image and deblurring according to the alignment result;
aligning the blurred picture with the DVS edge estimation image and deblurring according to the alignment result; when the deblurring result has reached neither the preset deblurring effect nor the set maximum number of iterations, determining an average edge map from the deblurring result and using it to realign the blurred picture with the DVS edge estimation image, until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached; and outputting that result as the deblurring result;
aligning the blurred picture with the DVS edge estimation image and deblurring according to the alignment result; while the set maximum number of iterations has not yet been reached, determining an average edge map from the deblurring result and realigning the blurred picture with the DVS edge estimation image according to it, until the set maximum number of iterations is reached; and outputting that result as the deblurring result;
aligning the blurred picture with the DVS edge estimation image and deblurring according to the alignment result; when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not yet been reached, determining an average edge map from the deblurring result and realigning the blurred picture with the DVS edge estimation image according to it, until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached; and outputting that result as the deblurring result.
Preferably, deblurring the blurred picture according to the DVS event set comprises:
estimating, from the DVS event set, the camera motion trajectory within the exposure time and a DVS edge estimation image, and performing the deblurring according to the camera motion trajectory and the DVS edge estimation image.
Preferably, the deblurring according to the camera motion trajectory and the DVS edge estimation image is performed in one of the following ways:
aligning the blurred picture with the DVS edge estimation image and deblurring according to the alignment result; when the deblurring result has not reached the preset deblurring effect, determining from the deblurring result an average edge map along the direction of the camera motion trajectory and realigning the blurred picture with the DVS edge estimation image according to it, until the deblurring result reaches the preset deblurring effect; and outputting that result as the deblurring result;
aligning the blurred picture with the DVS edge estimation image and deblurring according to the alignment result; while the set maximum number of iterations has not yet been reached, determining from the deblurring result an average edge map along the direction of the camera motion trajectory and realigning the blurred picture with the DVS edge estimation image according to it, until the set maximum number of iterations is reached; and outputting that result as the deblurring result;
aligning the blurred picture with the DVS edge estimation image and deblurring according to the alignment result; when the deblurring result has not reached the preset deblurring effect and the set maximum number of iterations has not yet been reached, determining from the deblurring result an average edge map along the direction of the camera motion trajectory and realigning the blurred picture with the DVS edge estimation image according to it, until the deblurring result reaches the preset deblurring effect or the set maximum number of iterations is reached; and outputting that result as the deblurring result.
Preferably, the deblurring according to the alignment result comprises: according to the currently aligned DVS edge estimation image, deblurring the currently aligned blurred picture by deconvolution to obtain the sharp picture of the current iteration as the deblurring result;
if the error between the sharp pictures of the current and previous iterations is smaller than a predetermined threshold, determining that the preset deblurring effect has been reached; otherwise, determining that it has not been reached.
Preferably, estimating the camera motion trajectory from the DVS event set comprises: dividing the exposure time into N time segments in chronological order, imaging the DVS events of the set that fall within the same time segment independently as a picture, and determining the camera motion trajectory within the exposure time from the pictures imaged in the N time segments.
Preferably, the camera motion trajectory is determined as follows: for the pictures imaged in every two consecutive time segments, determining the camera displacement between the two segments from the positional relationship of the DVS events, and connecting all such pairwise camera displacements within the exposure time in chronological order to form the camera motion trajectory within the exposure time.
Preferably, estimating the DVS edge estimation image from the DVS event set comprises: dividing the exposure time into N time segments in chronological order, imaging the DVS events of the set that fall within the same time segment together as a picture, spatially aligning and superimposing the pictures imaged in all time segments, and computing the skeleton map of the superimposed picture as the DVS edge estimation image.
Preferably, spatially aligning the pictures of any two time segments comprises: for the pictures A and B of any two time segments, computing

(Δx, Δy) = argmin_(Δx,Δy) Σ_(x,y) [A(x, y) − B(x − Δx, y − Δy)]²

where (x, y) are the two-dimensional coordinates of a pixel, A(x, y) and B(x, y) denote the values of the pixels of pictures A and B respectively, (Δx, Δy) is the two-dimensional displacement needed by picture B to align it to picture A, and argmin(·) denotes the value of the argument at which (·) attains its minimum; and, according to the computed (Δx, Δy), shifting picture B and/or picture A to align picture A and picture B.
Preferably, spatially aligning and superimposing the pictures imaged in all time segments comprises one of the following:
for the pictures of every two consecutive time segments, computing (Δx, Δy), and, in chronological order, successively aligning and superimposing each later picture onto the earlier one;
for the pictures of every two consecutive time segments, computing (Δx, Δy), and, in chronological order, successively aligning and superimposing each earlier picture onto the later one.
Preferably, when the deblurring is performed, the blur kernel is computed as

k = argmin_k ||vec(∇I ⊗ k − ∇C)||₂² + λ₁ ||vec(E ⊗ k − ∇C)||₂² + λ₂ ||vec(k)||₁

where ⊗ denotes convolution, vec(x) denotes writing the matrix x in vector form, ∇ denotes the gradient operation, I is the sharp picture after deblurring, C is the currently aligned blurred picture, E is the currently aligned DVS edge estimation image, ||·||₂ denotes the 2-norm of a vector, ||·||₁ denotes the 1-norm of a vector, λ₁ and λ₂ are two preset weight values, and argmin(·) denotes the value of the argument at which (·) attains its minimum.
Preferably, the average edge map is determined as follows:
for the sharp picture of the current iteration, computing the edge maps along the individual segment directions of the camera motion trajectory, superimposing all the edge maps, and computing the skeleton map of the result as the average edge map.
An image deblurring apparatus, comprising an acquisition module and an image deblurring module, wherein:
the acquisition module is configured to acquire a blurred picture to be processed and the set of DVS events recorded by a DVS sensor within the exposure time of the blurred picture; and
the image deblurring module is configured to deblur the blurred picture according to the DVS event set.
As can be seen from the above technical solutions, the present application introduces the DVS events recorded by a DVS sensor into the deblurring of a blurred picture. Specifically, the camera motion trajectory can be estimated and used for edge estimation of the blurred image; at the same time, the DVS events are used for DVS edge estimation, and the blurred image, or its estimated edge image, is aligned with the DVS edge estimation result before taking part in the deblurring. This processing improves the deblurring effect.
Brief Description of the Drawings
Figure 1 is a schematic flowchart of the image deblurring method of the present application;
Figure 2 illustrates the relationship between the imaging of the DVS event set and the pictures imaged in the individual time segments;
Figure 3 illustrates the camera displacement within two consecutive time segments;
Figure 4 illustrates the DVS edge estimation;
Figure 5 illustrates the RGB-DVS image alignment process;
Figure 6 illustrates the computation of the sharp picture;
Figure 7 illustrates the generation of the average edge map;
Figure 8 is a schematic diagram of the basic structure of the image deblurring apparatus of the present application;
Figure 9 is a schematic structural diagram of the image deblurring module in the image deblurring apparatus of the present application;
Figure 10 compares the effect of the image deblurring method of the present application with the prior art.
Detailed Description
To make the purpose, technical means and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings.
A DVS sensor is an ultra-high-speed camera with microsecond-level time resolution that records and outputs DVS events; a DVS event represents a change occurring in the camera's field of view within a microsecond-scale time unit. Since what changes in the field of view is usually an edge in the image, a DVS sensor fixed to an ordinary camera can obtain relatively sharp edge images even under fast motion of the ordinary camera. The present application uses precisely these relatively sharp edge images obtainable from the DVS sensor in the deblurring of blurred images taken by the ordinary camera, thereby greatly improving the deblurring effect.
The most basic image deblurring method of the present application comprises:
Step 1: acquiring a blurred picture to be processed and the set of DVS events recorded by a DVS sensor within the exposure time of the blurred picture.
Step 2: deblurring the blurred picture according to the DVS event set.
In Step 2, preferably, a DVS edge estimation image can be estimated from the DVS event set and the deblurring then performed according to that image. Alternatively, the camera motion trajectory and the DVS edge estimation image can both be estimated from the DVS event set, and the deblurring then performed according to the camera motion trajectory and the DVS edge estimation image.
Figure 1 is a schematic flowchart of the image deblurring method of the present application, described here using the processing of an RGB image as an example. As shown in Figure 1, the method comprises:
Step 101: acquire the blurred picture to be processed and the set of DVS events recorded by the DVS sensor within the exposure time of that picture.
The RGB blurred picture taken by an ordinary camera is acquired. Because the DVS sensor is fixed to the ordinary camera, it records the scene from the same viewpoint, and the recorded DVS events reflect the motion of the camera taking the picture. The set of DVS events within the exposure time of the blurred picture thus reflects the camera motion during that exposure.
Step 102: from the DVS event set, estimate the camera motion trajectory within the exposure time and the DVS edge estimation image.
The exposure time is divided into N time segments in chronological order, where N is a preset positive integer that can be set as needed; the present application does not limit the basis for this setting. The DVS event set within the exposure time contains many DVS events, each carrying a timestamp. According to the divided time segments and these timestamps, the DVS events within the same time segment are imaged independently; any realizable imaging method may be used, and the present application does not limit it. Figure 2 shows the relationship between the imaging of the DVS event set and the pictures imaged in the individual time segments.
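The binning of timestamped DVS events into N per-segment pictures described above can be sketched as follows. This is a hedged illustration only: the (t, x, y) event format, the binary imaging, and the discarding of event polarity are assumptions of the sketch, not specifics of the present application.

```python
import numpy as np

def bin_events(events, exposure_start, exposure_end, n_segments, height, width):
    """Split the exposure time into n_segments equal slices and image the
    DVS events of each slice independently as a binary picture.

    events: iterable of (t, x, y) tuples -- a simplified event format;
    real DVS output also carries polarity, which is ignored here.
    """
    duration = exposure_end - exposure_start
    images = np.zeros((n_segments, height, width), dtype=np.uint8)
    for t, x, y in events:
        if not (exposure_start <= t < exposure_end):
            continue  # event outside the exposure window
        seg = min(int((t - exposure_start) / duration * n_segments), n_segments - 1)
        images[seg, y, x] = 1  # mark the pixel where the event fired
    return images
```

Each of the N returned pictures then plays the role of one time segment's relatively sharp edge image.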
First, consider how the blurred picture arises and what role the pictures imaged from DVS events play. If the subject is regarded as stationary during the exposure time, then the blur is caused by the motion of the camera itself relative to the stationary subject. On this basis, the picture imaged in each time segment is a relatively sharp edge image under the fast relative camera motion of that time range. Across different time segments the subject is the same, so the contours of the edge images should also be the same; only the position of the edge image against the background shifts, and this shift is exactly the relative camera motion between the time segments. Meanwhile, the pictures of different time segments represent the same edge image at different points in time, but due to noise and other influences the edges in any single picture may not be the most accurate and complete. If, following the camera motion trajectory, all N pictures are brought back to the same point in time and the N pictures are then superimposed, a relatively sharper edge map is obtained.
Simply put, the pictures of two different time segments should contain the same shape, and the change in that shape's position within the frame represents the linear camera motion between the two time periods. Moving one picture backwards along this motion yields a picture at the other picture's point in time, and superimposing the two yields an enhanced edge map.
Based on the above analysis, the present application determines the camera motion trajectory within the exposure time and the DVS edge estimation image from the pictures imaged in the N time segments.
Specifically, the camera motion trajectory within the exposure time can be determined as follows: for the pictures imaged in every two consecutive time segments, determine the camera displacement between the two segments from the positional relationship of the DVS events, and connect all such pairwise displacements within the exposure time in chronological order to form the camera motion trajectory within the exposure time. The camera displacement within two consecutive time segments is shown in Figure 3.
Here, an exemplary method for determining the camera displacement between two consecutive time segments is given: for the pictures A and B of any two consecutive time segments, compute

(Δx, Δy) = argmin_(Δx,Δy) Σ_(x,y) [A(x, y) − B(x − Δx, y − Δy)]²    (1)

where (x, y) are the two-dimensional coordinates of a pixel, A(x, y) and B(x, y) denote the values of the pixels of pictures A and B respectively, and (Δx, Δy) is the two-dimensional displacement of the camera from the point in time represented by picture B to the point in time represented by picture A; from this displacement, the motion vector shown in Figure 3 is obtained. Here it is assumed that the camera moves in a straight line within two consecutive time segments. Connecting the motion vectors of all consecutive pairs of time periods in chronological order yields the camera trajectory within the exposure time.
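Formula (1) can be evaluated by exhaustive search over a small displacement window, and the pairwise motion vectors chained into a trajectory. A minimal sketch, in which the search radius and the wrap-around shift (np.roll at the borders) are simplifying assumptions:

```python
import numpy as np

def estimate_shift(img_a, img_b, radius=5):
    """Brute-force the (dx, dy) that minimises the squared difference
    between img_a and img_b shifted by (dx, dy), in the spirit of formula (1)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img_b, dy, axis=0), dx, axis=1)
            err = np.sum((img_a.astype(float) - shifted) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

def camera_trajectory(segment_images):
    """Chain the pairwise shifts of consecutive segment images into a
    piecewise-linear camera trajectory over the exposure time."""
    traj = [(0, 0)]
    for a, b in zip(segment_images, segment_images[1:]):
        dx, dy = estimate_shift(a, b)
        px, py = traj[-1]
        traj.append((px + dx, py + dy))
    return traj
```

For real event images a coarse-to-fine or FFT-based correlation search would be faster; the exhaustive loop is kept for clarity.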
The DVS edge estimation image can be determined as follows: spatially align and superimpose the pictures imaged in all time segments, and compute the skeleton map of the superimposed picture as the DVS edge estimation image, as shown in Figure 4.
When performing the spatial alignment, the pictures of all time segments may be aligned onto the same time segment and then superimposed together; alternatively, the picture of one time segment may be aligned onto another and the two superimposed, the superimposed result then aligned onto yet another time segment, and so on. Formula (1) above can be used to compute the two-dimensional displacement value (Δx, Δy) between the pictures of any two time segments; this value is the displacement needed to align picture B to picture A. Picture B can be shifted by (Δx, Δy) to align it to picture A; alternatively, picture A can be shifted by (−Δx, −Δy) to align it to picture B; or pictures A and B can both be shifted together to align them onto some other time segment between them.
In addition, formula (1) assumes that the motion between the two time segments is linear; in fact, as the time interval grows, linear motion may deviate more from the actual motion and the estimate becomes less accurate. Therefore, preferably, when aligning and superimposing, (Δx, Δy) is computed for the pictures of every two consecutive time segments, and in chronological order each later (or earlier) picture is successively aligned and superimposed onto the earlier (or later) picture. This processing increases the accuracy of the estimate.
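The sequential pairwise align-and-superimpose procedure, followed by a skeleton-like reduction of the stacked result, might look like the following sketch. The voting threshold used in place of a true skeleton computation is an assumption of the sketch; the application itself does not specify how the skeleton map is computed.

```python
import numpy as np

def stack_aligned(segment_images, shifts):
    """Sequentially align each later segment picture onto the first one by
    accumulating the pairwise shifts, then superimpose them.

    shifts[i] is the (dx, dy) that aligns segment i+1 onto segment i,
    i.e. the per-pair displacement of formula (1)."""
    acc = segment_images[0].astype(float).copy()
    total_dx, total_dy = 0, 0
    for img, (dx, dy) in zip(segment_images[1:], shifts):
        total_dx += dx
        total_dy += dy
        aligned = np.roll(np.roll(img, total_dy, axis=0), total_dx, axis=1)
        acc += aligned
    return acc

def edge_estimate(stacked, n_segments, vote_ratio=0.5):
    """Keep pixels supported by at least vote_ratio of the segments -- a
    thresholding stand-in for the skeleton computation."""
    return (stacked >= vote_ratio * n_segments).astype(np.uint8)
```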
After this step, the iteration that realizes the image deblurring begins.
Step 103: align the acquired blurred picture with the DVS edge estimation image to obtain the currently aligned blurred picture and the currently aligned DVS edge estimation image.
The RGB-DVS image alignment process is shown in Figure 5. First, the RGB image and the DVS edge estimation image are each undistorted; corresponding key points are then found in the two images; finally, the two images are aligned by computing an affine transformation model. The undistortion, key-point correspondence, and model-based alignment steps are all prior art and are not repeated here. Including the undistortion step is the preferred way; for reasons of implementation complexity or other considerations it may be omitted, with the key points determined directly for image alignment. Either the RGB image may be aligned to the DVS edge estimation image or the DVS edge estimation image to the RGB image; preferably the RGB image is aligned to the DVS edge estimation image, which gives a better alignment.
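With key-point correspondences in hand, the affine transformation model of Figure 5 can be fitted by least squares. The sketch below assumes matched key points are already given; detection and matching of the key points are outside its scope.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine model M mapping src key points onto dst,
    so that dst ~= M @ [x, y, 1]."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((len(src), 1))
    A = np.hstack([src, ones])               # N x 3 design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                               # 2 x 3

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an N x 2 array of points."""
    pts = np.asarray(pts, float)
    ones = np.ones((len(pts), 1))
    return (M @ np.hstack([pts, ones]).T).T
```

At least three non-collinear correspondences are needed; more points make the fit robust to small matching errors.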
In addition, in the first iteration the RGB blurred image is aligned directly with the DVS edge estimation image; in subsequent iterations, the RGB blurred image is aligned with the DVS edge estimation image according to the RGB edge estimation image determined in the previous iteration, to improve the RGB-DVS registration accuracy.
步骤104,根据当前对齐后的DVS边缘估计图像,利用反卷积变换对当前对齐后的模糊图片进行去模糊处理,得到本次迭代的清晰图片。Step 104, according to the currently aligned DVS edge estimation image, use deconvolution transformation to perform deblurring processing on the currently aligned blurred picture to obtain a clear picture of this iteration.
The image deblurring module estimates the blur kernel of the image and performs deblurring by deconvolution, as shown in Figure 6. The currently aligned DVS edge-estimation image E is used when computing the blur kernel. Specifically, the deblurring process includes the following:
Compute the blur kernel k such that

k = argmin_k ||vec(k⊗I − C)||₂² + λ₁||vec(k⊗E − ∇C)||₂² + λ₂||vec(k)||₁

where ⊗ denotes two-dimensional convolution, vec(x) denotes rearranging the matrix x into vector form, ∇ denotes the gradient operator, I is the processed clear RGB image, C is the currently aligned blurred picture, E is the currently aligned DVS edge-estimation image, ||·||₂ denotes the vector 2-norm, ||·||₁ denotes the vector 1-norm, λ₁ and λ₂ are two preset weights, and argmin(·) denotes the value of the argument at which the expression attains its minimum;
After the blur kernel k is obtained, the clear RGB image I is obtained as

I = argmin_I ||vec(k⊗I − B')||₂² + λ||vec(∇I)||₁

where vec(x) denotes rearranging the matrix x into vector form, ⊗ denotes two-dimensional convolution, ||·||₂ denotes the vector 2-norm, ||·||₁ denotes the vector 1-norm, λ is a preset weight, and B' is the currently aligned blurred picture.
The blur-kernel estimation and clear-image estimation above alternate until a set number of alternations is reached or the clear picture no longer changes, at which point the computed I is taken as the clear picture obtained by this iteration.
In this step, I is computed in the same way as in the prior art, while the blur kernel k is computed according to the currently aligned DVS edge-estimation image E. Introducing the image E into the computation of k reflects the blurring of the image more completely, so the computed clear picture I is closer to the original scene.
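For the non-blind half of the alternation — recovering I once k is fixed — a minimal frequency-domain sketch can be given. Note this uses a Tikhonov-damped Wiener filter rather than the patent's 1-norm regularizer, so it is an illustration of deconvolution, not the claimed objective:

```python
import numpy as np

def wiener_deconv(blurred, kernel, eps=1e-3):
    """Recover I from B ~ k (*) I by damped inverse filtering in the Fourier domain."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Damped inverse conj(K) / (|K|^2 + eps) avoids dividing by near-zero frequencies.
    I = B * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft2(I))

# Illustrative round trip with a 2-tap horizontal motion-blur kernel
rng = np.random.default_rng(0)
img = rng.random((8, 8))
k = np.array([[0.7, 0.3]])
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
restored = wiener_deconv(blurred, k)
```

Because this 2-tap kernel has no spectral zeros, the round trip recovers the image up to the small bias introduced by the damping term eps.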
At this point the flow may end: the image has been deblurred according to the DVS edge-estimation image. If processing capacity allows, the following steps may preferably be continued to refine the deblurring result through an iterative process.
Step 105: judge whether the clear picture of this iteration is the same as that of the previous iteration; if so, take the clear picture as the deblurring result and end the flow; otherwise, execute step 106.
When the clear pictures of two successive iterations remain unchanged, the best deblurring effect is considered to have been reached; the clear picture is output and deblurring ends. Otherwise, the RGB edge map is computed for the next iteration. Whether the clear pictures of two successive iterations are the same may be judged as follows: if the error between the clear pictures of this iteration and the previous one is below a predetermined threshold, the two are considered the same; otherwise they are considered different. Of course, other judgment methods may also be used, and the present application does not limit this.
Step 106: determine the average edge map according to the clear picture of this iteration, return to step 103, and use the average edge map as the basis for aligning the RGB blurred picture with the DVS edge-estimation image in the next iteration.
This step generates a directed edge map of the RGB image, referred to in the present application as the average edge map. Specifically, based on the clear picture estimated in this iteration, the edge maps of that picture along set directions are computed, and the multiple edge maps are averaged to obtain the average edge map. The set directions may be several randomly designated directions or, preferably, the directions of the camera motion trajectory.
The average edge map then serves as the basis for aligning the RGB image with the DVS edge-estimation image in the next iteration. Specifically, the average edge map may be compared with the DVS edge-estimation image in the next iteration to determine the alignment displacement, after which the RGB blurred image and/or the DVS edge-estimation image is shifted into alignment, improving the RGB-DVS registration accuracy. The alignment of the RGB image with the DVS edge-estimation image according to the average edge map may be implemented in an existing manner, which the present application does not limit.
When the set directions are the directions of the camera motion trajectory, the average edge map may be generated as follows: segment the camera motion trajectory within the exposure time, compute the edge map of this iteration's clear picture along each segment direction, superimpose all the resulting edge maps, and compute the skeleton map as the average edge map.
Most simply, the trajectory may be segmented directly according to the segments used when constructing the motion trajectory in step 102. The generation of the average edge map may be as shown in Figure 7.
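A sketch of the directional averaging described above, assuming the segment directions are supplied as angles (the final skeletonization step is omitted here):

```python
import numpy as np

def directional_edge_map(img, theta):
    """Magnitude of the directional derivative of img along angle theta (radians)."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))  # row (y) and column (x) gradients
    return np.abs(np.cos(theta) * gx + np.sin(theta) * gy)

def average_edge_map(img, thetas):
    """Superimpose the per-segment directional edge maps and average them."""
    return np.mean([directional_edge_map(img, t) for t in thetas], axis=0)

# Illustrative: a horizontal ramp has unit edges along x and none along y
ramp = np.tile(np.arange(8.0), (8, 1))
em = average_edge_map(ramp, [0.0, np.pi / 2])
```

Averaging over the two orthogonal directions halves the ramp's unit x-gradient, so `em` is a constant 0.5 map — edges aligned with a segment direction contribute fully, edges perpendicular to it contribute nothing.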
The above is one specific implementation of the image deblurring method of the present application. In the iterative flow above, when the clear pictures of two successive iterations are the same, the iteration ends and the final deblurring result is obtained. Alternatively, and more simply, a maximum number of iterations may be set: iteration repeats until the maximum is reached, after which the iteration ends and the final deblurring result is obtained. The comparison of successive clear pictures may also be combined with a maximum iteration count, ending the iteration when either condition is met and iterating otherwise. The maximum number of iterations may be set according to actual needs and device processing capability, which the present application does not limit.
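The either-condition stopping rule described above can be sketched generically; the `step` callable here is a stand-in for one alignment-plus-deblurring pass, and the scalar fixed-point example is purely illustrative:

```python
def iterate_until_stable(step, x0, max_iters=30, tol=1e-6):
    """Apply step() repeatedly; stop early once successive iterates differ by < tol."""
    x = x0
    for _ in range(max_iters):
        x_next = step(x)
        if abs(x_next - x) < tol:  # successive results "the same" within threshold
            return x_next
        x = x_next
    return x  # fell through: maximum iteration count reached

# Illustrative stand-in: a contraction with fixed point 2.0
result = iterate_until_stable(lambda x: 0.5 * x + 1.0, 0.0)
```

For image iterates the scalar difference would be replaced by an error measure between pictures, e.g. a mean absolute difference, matching the threshold test of step 105.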
In the deblurring process above, the DVS events recorded by the DVS sensor may be introduced to estimate the camera motion trajectory, which is then used for edge estimation of the blurred image; meanwhile, the DVS events are used for DVS edge estimation, and the blurred image, or the edge-estimation image derived from it, is aligned with the DVS edge-estimation result before participating in deblurring. Through this processing, the camera motion trajectory and the DVS edge-estimation image provide a motion hypothesis closer to reality, improving the deblurring effect.
The present application also provides an image deblurring apparatus that may be used to implement the deblurring method shown in Figure 1. Figure 8 is a schematic diagram of the basic structure of the apparatus; as shown, it includes an acquisition module and an image deblurring module.
The acquisition module obtains the blurred picture to be processed and the set of DVS events recorded by the DVS sensor during the blurred picture's exposure time. The image deblurring module deblurs the blurred picture according to the DVS event set.
More preferably, the image deblurring module may further include a trajectory-estimation submodule, an image-registration submodule, an image-deblurring submodule, and an average-edge-map generation submodule, as shown in Figure 9.
The trajectory-estimation submodule estimates the camera motion trajectory within the exposure time and the DVS edge-estimation image according to the DVS event set; it implements the processing of steps 101-102.
The image-registration submodule aligns the blurred picture with the DVS edge-estimation image to obtain the currently aligned blurred picture and the currently aligned DVS edge-estimation image; it implements the processing of step 103.
The image-deblurring submodule deblurs the currently aligned blurred picture by deconvolution according to the currently aligned DVS edge-estimation image to obtain the current clear picture. If the error between the clear pictures of this iteration and the previous one is below a predetermined threshold, the clear picture is output as the deblurring result; otherwise, the current clear picture is output to the average-edge-map generation submodule. It implements the processing of steps 104-105.
The average-edge-map generation submodule determines the average edge map according to the clear picture input by the image-deblurring submodule and provides it to the image-registration submodule as the basis for the next alignment of the blurred picture with the DVS edge-estimation image. Preferably, the average edge map is determined along the directions of the camera motion trajectory. It implements the processing of step 106.
Deblurring experiments were carried out with the method of the present application and with an existing deblurring method, and the results were compared; Figure 10 is a schematic comparison of the two. As can be seen from Figure 10, the deblurring effect obtained by the present application is better.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (14)