
CN106934806A - A no-reference out-of-focus blurred region segmentation method based on structural sharpness - Google Patents

A no-reference out-of-focus blurred region segmentation method based on structural sharpness Download PDF

Info

Publication number
CN106934806A
CN106934806A
Authority
CN
China
Prior art keywords
image
image block
blocks
block
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710135456.2A
Other languages
Chinese (zh)
Other versions
CN106934806B (en)
Inventor
沈傲东
王坤
孔佑勇
胡轶宁
伍家松
舒华忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201710135456.2A priority Critical patent/CN106934806B/en
Publication of CN106934806A publication Critical patent/CN106934806A/en
Application granted granted Critical
Publication of CN106934806B publication Critical patent/CN106934806B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a no-reference out-of-focus blurred region segmentation method based on structural sharpness, comprising the following steps: (1) scale the image to about 1/4 of the original image area; (2) compute the sharpness difference: calculate the structural sharpness of corresponding image blocks in the original and the scaled image, and take the difference between the two; (3) extract the blurred region: filter noise from the difference image, segment the blurred region with an image segmentation algorithm, and upsample the segmented result. For no-reference segmentation of out-of-focus blurred regions, the invention constructs a scaled image from the original image, computes the sharpness of both the scaled and the original image, obtains a blur-distribution image, and finally segments the out-of-focus blurred region of the image quickly and effectively.

Description

A No-Reference Out-of-Focus Blurred Region Segmentation Method Based on Structural Sharpness

Technical Field

The invention relates to digital image technology, and in particular to a no-reference out-of-focus blurred region segmentation method based on structural sharpness.

Background

Image blur is a common form of image degradation. It usually arises from camera motion, object motion, or inaccurate focus during exposure, and is sometimes introduced deliberately for artistic effect, either in photography or in post-processing. Out-of-focus blur is a common type of blur caused mainly by inaccurate focusing. Blur destroys information in the image and complicates further processing, so detecting blurred pixels accurately and efficiently has important practical applications in image segmentation, object detection, scene classification, image editing, and other fields. In no-reference blurred-region segmentation only a single input image is available; existing no-reference methods for out-of-focus blurred-region segmentation mainly comprise frequency-domain methods, spatial-domain methods, and methods that incorporate machine-learning algorithms. These algorithms suffer from two main problems: first, they are slow and therefore of limited practical use; second, their segmentation quality is poor.

An ideal image quality metric can effectively distinguish blurred from sharp images and can therefore be used for blurred-region segmentation. The no-reference structural sharpness metric (NRSS) is one such effective metric; it builds no-reference quality assessment on top of the structural similarity (SSIM) index. However, few existing methods use NRSS for blurred-region segmentation, and those that do segment poorly.

Summary of the Invention

Object of the invention: the object of the present invention is to overcome the deficiencies of the prior art and to provide a no-reference out-of-focus blurred region segmentation method based on structural sharpness.

Technical solution: the no-reference out-of-focus blurred region segmentation method based on structural sharpness according to the present invention comprises the following steps:

(1) Scale the original image: scale the original image proportionally to about 0.25 times its original area; that is, if the original image is of size M×N, the scaled image is of size (M/2)×(N/2). Bilinear interpolation is used to compute the scaled image;
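The half-size bilinear scaling of step (1) can be sketched in NumPy as follows. This is an illustrative implementation only (in practice a library resize would normally be used); the function name and sampling convention are assumptions, not from the patent.

```python
import numpy as np

def downscale_half(img):
    """Bilinearly downscale a 2-D grayscale image to half size in each
    dimension, i.e. about 1/4 of the original area, as in step (1)."""
    h, w = img.shape
    nh, nw = h // 2, w // 2
    # Pixel-center sample coordinates in the source image.
    ys = (np.arange(nh) + 0.5) * (h / nh) - 0.5
    xs = (np.arange(nw) + 0.5) * (w / nw) - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]   # fractional weights along Y
    wx = (xs - x0)[None, :]   # fractional weights along X
    a = img[np.ix_(y0, x0)]
    b = img[np.ix_(y0, x0 + 1)]
    c = img[np.ix_(y0 + 1, x0)]
    d = img[np.ix_(y0 + 1, x0 + 1)]
    # Blend the four neighbours bilinearly.
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)
```

For a 640×621 input this produces the 320×310 image used in the embodiment below.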

(2) Compute the image sharpness difference:

(2.1) Partition the original image and the scaled image of step (1) into blocks, obtaining the original block set R and the scaled block set S;

(2.2) Compute the no-reference structural sharpness of each image block to measure how sharp it is;

(2.3) Compute the sharpness difference of each pair of corresponding blocks, obtaining a difference matrix;

(3) Segment the blurred region:

(3.1) Filter the difference matrix obtained in step (2) with a guided filter to remove noise;

(3.2) Segment the denoised image with the Otsu threshold method: find the gray value that maximizes the between-class variance and binarize at that value;

(3.3) Upsample the segmented image back to the original image scale.
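The Otsu thresholding of step (3.2) can be sketched in NumPy as follows; the 256-bin histogram and the function name are assumptions, not fixed by the patent.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the threshold that maximizes the between-class variance
    of the image histogram, as in step (3.2)."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # class-0 probability mass per split
    m0 = np.cumsum(p * centers)    # unnormalized class-0 mean per split
    mT = m0[-1]                    # global mean
    # Between-class variance w0*w1*(mu0-mu1)^2 = (mT*w0 - m0)^2 / (w0*w1).
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mT * w0 - m0) ** 2 / (w0 * (1 - w0))
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]
```

Binarization is then simply `img > otsu_threshold(img)`.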

Further, the specific process of step (2.1) is as follows:

First, image blocks are selected from the original image with a step of (2,2): starting from the upper-left corner of the image, the window moves 2 pixels along the X axis at a time and then 2 pixels along the Y axis, and this is repeated until all blocks have been obtained. The selected block set is R, and each block is denoted R_{i,j};

where i is the row index and j the column index; assuming the block size is 2m×2n, then 1 ≤ i ≤ (M−2m)/2+1 and 1 ≤ j ≤ (N−2n)/2+1.

Next, image blocks are selected from the scaled image with a step of (1,1): starting from the upper-left corner of the image, the window moves 1 pixel along the X axis at a time and then 1 pixel along the Y axis, and this is repeated until all blocks have been obtained. The selected block set is denoted S, and each block is denoted S_{i,j};

where i is the row index and j the column index; assuming the block size is m×n, then 1 ≤ i ≤ M/2−m+1 and 1 ≤ j ≤ N/2−n+1.

The sets R and S contain the same number of image blocks.
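The block-extraction rule above can be sketched with a straightforward (unoptimized) sliding window; the function name is illustrative. With block size 2m×2n and step 2 on the original, and block size m×n and step 1 on the half-size image, both traversals yield the same grid of blocks.

```python
import numpy as np

def extract_blocks(img, block_h, block_w, step):
    """Slide a block_h x block_w window over img with the given step and
    return the blocks as a 4-D array indexed by (row, col) grid position."""
    H, W = img.shape
    rows = (H - block_h) // step + 1
    cols = (W - block_w) // step + 1
    out = np.empty((rows, cols, block_h, block_w), dtype=img.dtype)
    for i in range(rows):
        for j in range(cols):
            y, x = i * step, j * step
            out[i, j] = img[y:y + block_h, x:x + block_w]
    return out

# A 64x64 original with 32x32 blocks at step 2, and its 32x32 half-size
# image with 16x16 blocks at step 1, give identical 17x17 block grids.
R = extract_blocks(np.zeros((64, 64)), 32, 32, 2)
S = extract_blocks(np.zeros((32, 32)), 16, 16, 1)
```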

Further, the specific process of step (2.2) is as follows:

Given an image block P, its no-reference structural sharpness is computed as follows: blur P with a Gaussian filter to obtain P_b; extract the horizontal and vertical gradients with the Sobel operator, denoting the gradient images of P and P_b by G and G_b respectively; find the N sub-blocks of the gradient image that carry the richest information, where richness is measured by variance, i.e. select the N sub-blocks with the largest variance; finally compute the no-reference structural sharpness NRSS of the block P as:

NRSS(P) = (1/N) Σ_{i=1}^{N} SSIM(G_i, G_b^i);

where the selected sub-blocks of G are denoted G_i and the corresponding sub-blocks of G_b are denoted G_b^i. The SSIM function measures the structural similarity of two image blocks, jointly accounting for their luminance, contrast, and structural correlation. Given two image blocks a and b, it can be expressed as:

SSIM(a,b) = [l(a,b)]^α [c(a,b)]^β [s(a,b)]^γ

where

l(a,b) = (2 u_a u_b + C_1) / (u_a² + u_b² + C_1),

c(a,b) = (2 σ_a σ_b + C_2) / (σ_a² + σ_b² + C_2),

s(a,b) = (σ_ab + C_3) / (σ_a σ_b + C_3),

u_a and u_b denote the gray-level means of blocks a and b, σ_a and σ_b their gray-level standard deviations, and σ_ab their gray-level covariance; α, β, γ are parameters, and C_1, C_2, C_3 are constants that prevent numerical instability when a denominator approaches zero.

This yields NRSS(S_{i,j}) and NRSS(R_{i,j}).
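A minimal sketch of step (2.2) using NumPy and SciPy. The Gaussian σ, the sub-block size, the number N of sub-blocks, the constants C1–C3, and α = β = γ = 1 are assumed values that the patent does not fix here. Note that with the formula above a blurred block scores higher, since its gradients change less under re-blurring.

```python
import numpy as np
from scipy import ndimage

def ssim(a, b, C1=1e-4, C2=9e-4, C3=4.5e-4):
    """Structural similarity of two equally sized blocks, with the
    common simplification alpha = beta = gamma = 1."""
    ua, ub = a.mean(), b.mean()
    sa, sb = a.std(), b.std()
    sab = ((a - ua) * (b - ub)).mean()
    l = (2 * ua * ub + C1) / (ua**2 + ub**2 + C1)
    c = (2 * sa * sb + C2) / (sa**2 + sb**2 + C2)
    s = (sab + C3) / (sa * sb + C3)
    return l * c * s

def nrss(patch, n_blocks=8, block=8):
    """No-reference structural sharpness of a patch: mean SSIM between
    the gradient of the patch and the gradient of its Gaussian-blurred
    copy over the n_blocks highest-variance sub-blocks."""
    blurred = ndimage.gaussian_filter(patch, sigma=1.0)
    def grad(im):
        gx = ndimage.sobel(im, axis=1)  # horizontal gradient
        gy = ndimage.sobel(im, axis=0)  # vertical gradient
        return np.hypot(gx, gy)
    G, Gb = grad(patch), grad(blurred)
    # Rank non-overlapping sub-blocks by the variance of G.
    subs = []
    for y in range(0, G.shape[0] - block + 1, block):
        for x in range(0, G.shape[1] - block + 1, block):
            subs.append((G[y:y + block, x:x + block].var(), y, x))
    subs.sort(reverse=True)
    top = subs[:n_blocks]
    return np.mean([ssim(G[y:y + block, x:x + block],
                         Gb[y:y + block, x:x + block]) for _, y, x in top])
```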

Further, the method of step (2.3) is as follows:

Compute the sharpness difference M′_{i,j} of corresponding blocks in the sets R and S, obtaining the difference matrix M′ = {M′_{i,j}}:

M′_{i,j} = NRSS(S_{i,j}) − NRSS(R_{i,j});

Normalize M′ with min-max normalization to obtain the matrix M = {M_{i,j}}:

M_{i,j} = (M′_{i,j} − min(M′)) / (max(M′) − min(M′)), where max(M′) denotes the maximum element of the matrix and min(M′) the minimum element.
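Step (2.3) amounts to an elementwise difference followed by min-max normalization; a NumPy sketch (function name illustrative):

```python
import numpy as np

def blur_map(nrss_scaled, nrss_orig):
    """Per-block sharpness difference M' = NRSS(S) - NRSS(R), followed
    by min-max normalization of M' to [0, 1] as in step (2.3)."""
    d = nrss_scaled - nrss_orig
    return (d - d.min()) / (d.max() - d.min())
```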

Beneficial effects: without loss of generality, when an image is shrunk, regions that were blurred appear sharper, so the sharpness difference between the shrunk image and the original can serve as a new discriminant. From it a blur-distribution map of the image is obtained which, combined with an image segmentation algorithm, allows the blurred region to be segmented effectively. Because the scale change greatly reduces the amount of data to be processed, the algorithm is fast, and its segmentation quality exceeds that of existing algorithms.

In summary, the present invention offers fast segmentation, good segmentation quality, and applicability to images without a reference.

Brief Description of the Drawings

Fig. 1 is the original grayscale image in the embodiment;

Fig. 2 is the image manually segmented by another method in the embodiment;

Fig. 3 is the normalized difference matrix obtained with the present invention in the embodiment;

Fig. 4 is the segmentation result obtained with the present invention in the embodiment;

Fig. 5 is the effect comparison in the embodiment.

Detailed Description

The technical solution of the present invention is described in detail below, but the protection scope of the present invention is not limited to the embodiments.

Embodiment:

Step 1: read the original color image to obtain the color image matrix;

Step 2: convert the color image matrix to a grayscale image matrix, obtaining the grayscale image shown in Fig. 1; its size is 640×621;

Step 3: scale the image: the scaled size is 320×310, and the image of Fig. 1 is scaled to this size using linear interpolation;

Step 4: partition the images into blocks and compute the sharpness of each block. The original image is traversed with a step of (2,2) and the scaled image with a step of (1,1); the block size is 32×32 for the original image and 16×16 for the scaled image. This yields the original block set R and the scaled block set S. The no-reference structural sharpness (NRSS) of every block in each set is then computed to measure its sharpness, producing the original sharpness matrix R_c and the scaled sharpness matrix S_c;

Step 5: compute the sharpness difference: from the sharpness matrices R_c and S_c of step 4, the difference matrix M′ = S_c − R_c is obtained;

Step 6: normalize the difference matrix M′ to obtain M, using min-max normalization M_{i,j} = (M′_{i,j} − min(M′)) / (max(M′) − min(M′)); the resulting normalized image is shown in Fig. 3.

Step 7: apply guided filtering to the matrix M, using M itself as the guidance image;
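The self-guided filtering of step 7 can be sketched with box means computed from integral images. The window radius r and regularization eps are assumed values not given in the patent; when the guidance image equals the input, the guided filter reduces to the form below.

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via an integral image."""
    n = 2 * r + 1
    pad = np.pad(a, r, mode="edge")
    C = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    C[1:, 1:] = pad.cumsum(0).cumsum(1)
    s = C[n:, n:] - C[:-n, n:] - C[n:, :-n] + C[:-n, :-n]
    return s / (n * n)

def guided_filter(p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p guided by itself, as in step 7."""
    mean_p = box_mean(p, r)
    corr_p = box_mean(p * p, r)
    var_p = corr_p - mean_p ** 2
    # Local linear model q = a*p + b within each window.
    a = var_p / (var_p + eps)
    b = (1 - a) * mean_p
    return box_mean(a, r) * p + box_mean(b, r)
```

On flat regions the filter smooths strongly (a ≈ 0), while near strong edges it preserves them (a ≈ 1), which is why it suits denoising the blur-distribution map.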

Step 8: segment the difference matrix M with the Otsu method;

Step 9: enlarge the image: upsample the segmented result and interpolate with a linear interpolation algorithm to obtain the final segmentation result, shown in Fig. 4.

In addition, the original grayscale image of Fig. 1 is also segmented manually, yielding the image shown in Fig. 2; the results of the two segmentation methods are then compared, as shown in Fig. 5, where white marks regions both methods classify as sharp, black marks regions both classify as blurred, and gray marks misclassified regions.

This shows that the segmentation of the present invention is accurate and effective.

Claims (4)

1. A no-reference out-of-focus blurred region segmentation method based on structural sharpness, comprising the following steps:
(1) scaling the original image: scaling the original image proportionally to about 0.25 times its area, i.e. if the original image is of size M×N the scaled image is of size (M/2)×(N/2), and interpolating with bilinear interpolation to obtain the scaled image;
(2) calculating the image sharpness difference:
(2.1) partitioning the original image and the scaled image of step (1) to obtain an original image block set R and a scaled image block set S;
(2.2) calculating the no-reference structural sharpness of each image block to measure its sharpness;
(2.3) calculating the sharpness difference of each pair of corresponding image blocks to obtain a difference matrix;
(3) segmenting the blurred region:
(3.1) filtering the difference matrix obtained in step (2) with a guided filter to remove noise;
(3.2) segmenting the denoised image with the Otsu threshold method, i.e. finding the gray value that maximizes the between-class variance and binarizing at that gray value;
(3.3) upsampling the segmented image to restore it to the original image scale.
2. The no-reference out-of-focus blurred region segmentation method based on structural sharpness according to claim 1, wherein the specific process of step (2.1) is:
first, image blocks are selected from the original image with a step of (2,2): starting from the upper-left corner of the image, the window moves 2 pixels along the X axis at a time and then 2 pixels along the Y axis, repeating until all blocks have been obtained; the selected block set is R, each block being denoted R_{i,j};
wherein i denotes the row index and j the column index; assuming the block size is 2m×2n, then 1 ≤ i ≤ (M−2m)/2+1 and 1 ≤ j ≤ (N−2n)/2+1;
next, image blocks are selected from the scaled image with a step of (1,1): starting from the upper-left corner of the image, the window moves 1 pixel along the X axis at a time and then 1 pixel along the Y axis, repeating until all blocks have been obtained; the selected block set is denoted S, each block being denoted S_{i,j};
wherein i denotes the row index and j the column index; assuming the block size is m×n, then 1 ≤ i ≤ M/2−m+1 and 1 ≤ j ≤ N/2−n+1;
the number of image blocks in the sets R and S is kept the same.
3. The no-reference out-of-focus blurred region segmentation method based on structural sharpness according to claim 1, wherein the specific process of step (2.2) is:
given an image block P, the no-reference structural sharpness is computed as follows: P is blurred with a Gaussian filter to obtain P_b; gradients in the horizontal and vertical directions are extracted with the Sobel operator, the gradient images of P and P_b being denoted G and G_b respectively; the N sub-blocks of the gradient image carrying the richest information are found, richness being measured by variance, i.e. the N sub-blocks with the largest variance are selected; and the no-reference structural sharpness NRSS of the block P is calculated by:
NRSS(P) = (1/N) Σ_{i=1}^{N} SSIM(G_i, G_b^i);
wherein the selected sub-blocks of G are denoted G_i and the corresponding sub-blocks of G_b are denoted G_b^i; the SSIM function computes the structural similarity of two image blocks, jointly accounting for their luminance, contrast, and structural correlation; given two image blocks a and b, the SSIM function can be expressed as:
SSIM(a,b) = [l(a,b)]^α [c(a,b)]^β [s(a,b)]^γ
wherein
l(a,b) = (2 u_a u_b + C_1) / (u_a² + u_b² + C_1),
c(a,b) = (2 σ_a σ_b + C_2) / (σ_a² + σ_b² + C_2),
s(a,b) = (σ_ab + C_3) / (σ_a σ_b + C_3),
u_a and u_b denote the gray-level means of blocks a and b, σ_a and σ_b their gray-level standard deviations, and σ_ab their gray-level covariance; α, β, γ are parameters, and C_1, C_2, C_3 are constants used to prevent numerical instability when a denominator approaches zero;
NRSS(S_{i,j}) and NRSS(R_{i,j}) are thus obtained.
4. The no-reference out-of-focus blurred region segmentation method based on structural sharpness according to claim 1, wherein the method of step (2.3) is:
calculating the sharpness difference M′_{i,j} of corresponding image blocks in the sets R and S to obtain the difference matrix M′ = {M′_{i,j}}:
M′_{i,j} = NRSS(S_{i,j}) − NRSS(R_{i,j});
normalizing the matrix M′ with min-max normalization to obtain the matrix M = {M_{i,j}}:
M_{i,j} = (M′_{i,j} − min(M′)) / (max(M′) − min(M′)), where max(M′) denotes the maximum element of the matrix and min(M′) the minimum element.
CN201710135456.2A 2017-03-09 2017-03-09 A no-reference out-of-focus blurred region segmentation method based on structural sharpness Active CN106934806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710135456.2A CN106934806B (en) 2017-03-09 2017-03-09 A no-reference out-of-focus blurred region segmentation method based on structural sharpness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710135456.2A CN106934806B (en) 2017-03-09 2017-03-09 A no-reference out-of-focus blurred region segmentation method based on structural sharpness

Publications (2)

Publication Number Publication Date
CN106934806A true CN106934806A (en) 2017-07-07
CN106934806B CN106934806B (en) 2019-09-10

Family

ID=59432070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710135456.2A Active CN106934806B (en) A no-reference out-of-focus blurred region segmentation method based on structural sharpness

Country Status (1)

Country Link
CN (1) CN106934806B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292879A (en) * 2017-07-17 2017-10-24 电子科技大学 A kind of sheet metal surface method for detecting abnormality based on graphical analysis
CN107492078A (en) * 2017-08-14 2017-12-19 厦门美图之家科技有限公司 The black method made an uproar and computing device in a kind of removal image
WO2019173954A1 (en) * 2018-03-12 2019-09-19 华为技术有限公司 Method and apparatus for detecting resolution of image
CN111010556A (en) * 2019-12-27 2020-04-14 成都极米科技股份有限公司 Method, device and readable storage medium for projection bidirectional thermal defocus compensation
CN111179259A (en) * 2019-12-31 2020-05-19 北京灵犀微光科技有限公司 Optical clarity test method and device
WO2020172999A1 (en) * 2019-02-28 2020-09-03 苏州润迈德医疗科技有限公司 Quality evaluation method and apparatus for sequence of coronary angiogram images
CN112017163A (en) * 2020-08-17 2020-12-01 中移(杭州)信息技术有限公司 Image blur degree detection method and device, electronic equipment and storage medium
CN112714246A (en) * 2019-10-25 2021-04-27 Tcl集团股份有限公司 Continuous shooting photo obtaining method, intelligent terminal and storage medium
CN113962942A (en) * 2021-09-30 2022-01-21 嘉善三思光电技术有限公司 A kind of LED chip welding quality detection method for LED display screen
CN116977351A (en) * 2023-07-26 2023-10-31 北京航空航天大学 An interactive hematoma segmentation and analysis method and system based on brain CT images

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030194119A1 (en) * 2002-04-15 2003-10-16 General Electric Company Semi-automatic segmentation algorithm for pet oncology images
CN101996406A (en) * 2010-11-03 2011-03-30 中国科学院光电技术研究所 No-reference structure sharpness image quality assessment method
CN103955934A (en) * 2014-05-06 2014-07-30 北京大学 Image blurring detecting algorithm combined with image obviousness region segmentation
CN104200475A (en) * 2014-09-05 2014-12-10 中国传媒大学 Novel no-reference image blur degree estimation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030194119A1 (en) * 2002-04-15 2003-10-16 General Electric Company Semi-automatic segmentation algorithm for pet oncology images
CN101996406A (en) * 2010-11-03 2011-03-30 中国科学院光电技术研究所 No-reference structure sharpness image quality assessment method
CN103955934A (en) * 2014-05-06 2014-07-30 北京大学 Image blurring detecting algorithm combined with image obviousness region segmentation
CN104200475A (en) * 2014-09-05 2014-12-10 中国传媒大学 Novel no-reference image blur degree estimation method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292879B (en) * 2017-07-17 2019-08-20 电子科技大学 A Method for Surface Abnormality Detection of Sheet Metal Based on Image Analysis
CN107292879A (en) * 2017-07-17 2017-10-24 电子科技大学 A kind of sheet metal surface method for detecting abnormality based on graphical analysis
CN107492078A (en) * 2017-08-14 2017-12-19 厦门美图之家科技有限公司 The black method made an uproar and computing device in a kind of removal image
CN107492078B (en) * 2017-08-14 2020-04-07 厦门美图之家科技有限公司 Method for removing black noise in image and computing equipment
WO2019173954A1 (en) * 2018-03-12 2019-09-19 华为技术有限公司 Method and apparatus for detecting resolution of image
CN111417981A (en) * 2018-03-12 2020-07-14 华为技术有限公司 Image definition detection method and device
WO2020172999A1 (en) * 2019-02-28 2020-09-03 苏州润迈德医疗科技有限公司 Quality evaluation method and apparatus for sequence of coronary angiogram images
CN112714246A (en) * 2019-10-25 2021-04-27 Tcl集团股份有限公司 Continuous shooting photo obtaining method, intelligent terminal and storage medium
CN111010556A (en) * 2019-12-27 2020-04-14 成都极米科技股份有限公司 Method, device and readable storage medium for projection bidirectional thermal defocus compensation
US11934089B2 (en) 2019-12-27 2024-03-19 Chengdu Xgimi Technology Co., Ltd. Bidirectional compensation method and apparatus for projection thermal defocusing, and readable storage medium
CN111179259A (en) * 2019-12-31 2020-05-19 北京灵犀微光科技有限公司 Optical clarity test method and device
CN111179259B (en) * 2019-12-31 2023-09-26 北京灵犀微光科技有限公司 Optical definition testing method and device
CN112017163A (en) * 2020-08-17 2020-12-01 中移(杭州)信息技术有限公司 Image blur degree detection method and device, electronic equipment and storage medium
CN113962942A (en) * 2021-09-30 2022-01-21 嘉善三思光电技术有限公司 A kind of LED chip welding quality detection method for LED display screen
CN116977351A (en) * 2023-07-26 2023-10-31 北京航空航天大学 An interactive hematoma segmentation and analysis method and system based on brain CT images

Also Published As

Publication number Publication date
CN106934806B (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN106934806B (en) A no-reference out-of-focus blurred region segmentation method based on structural sharpness
CN114529459B (en) Method, system and medium for enhancing image edge
US8917948B2 (en) High-quality denoising of an image sequence
US8422783B2 (en) Methods and systems for region-based up-scaling
US10628924B2 (en) Method and device for deblurring out-of-focus blurred images
CN104867111B (en) A kind of blind deblurring method of non-homogeneous video based on piecemeal fuzzy core collection
CN104408707A (en) Rapid digital imaging fuzzy identification and restored image quality assessment method
CN104182983B (en) Highway monitoring video definition detection method based on corner features
CN105574823B (en) A kind of deblurring method and device of blurred picture out of focus
WO2017135120A1 (en) Computationally efficient frame rate conversion system
KR101028628B1 (en) An image texture filtering method, a recording medium recording a program for performing the same, and an apparatus for performing the same
Kim et al. Sredgenet: Edge enhanced single image super resolution using dense edge detection network and feature merge network
CN106529549B (en) Vision significance detection method based on self-adaptive features and discrete cosine transform
Banerjee et al. Super-resolution of text images using edge-directed tangent field
Nay Single image super resolution using ESPCN–with SSIM loss
CN114419006B (en) A method and system for removing text watermarks from grayscale video that changes with background
CN114549703B (en) Quick-action image generation method, system, device and storage medium
CN108647605B (en) Human eye gaze point extraction method combining global color and local structural features
CN108154488B (en) An Image Motion Blur Removal Method Based on Salient Image Block Analysis
Wu et al. High-resolution images based on directional fusion of gradient
CN106056575B (en) An Image Matching Method Based on Similarity Recommendation Algorithm
US9225876B2 (en) Method and apparatus for using an enlargement operation to reduce visually detected defects in an image
Peng et al. Research on qr 2-d code graphics correction algorithms based on morphological expansion closure and edge detection
Wei A survey of low-light image enhancement
Abebe et al. Application of radial basis function interpolation for content aware image retargeting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant