
CN110648341B - Target boundary detection method based on scale space and subgraph - Google Patents


Info

Publication number
CN110648341B
Authority
CN
China
Prior art keywords
area
scale space
image
subgraph
equal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910885873.8A
Other languages
Chinese (zh)
Other versions
CN110648341A (en)
Inventor
施文灶
詹振林
刘芫汐
林耀辉
乔星星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Normal University
Original Assignee
Fujian Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Normal University filed Critical Fujian Normal University
Priority to CN201910885873.8A priority Critical patent/CN110648341B/en
Publication of CN110648341A publication Critical patent/CN110648341A/en
Application granted granted Critical
Publication of CN110648341B publication Critical patent/CN110648341B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a target boundary detection method based on scale space and subgraphs. The method comprises the following steps: step 1, inputting a digital image; step 2, setting a mask window set; step 3, constructing a scale space; step 4, decomposing the scale space; step 5, extracting point feature vectors; step 6, computing region estimates; step 7, extracting a candidate target region set; step 8, segmenting the candidate target region set; step 9, computing the credibility of the segmented region set; step 10, superpixel segmentation; step 11, computing the average credibility of the superpixel block set; step 12, nonlinear transformation; step 13, binarization and boundary extraction; step 14, outputting the target boundary. The method solves the problem of low target boundary detection accuracy in digital images and is fully automatic. It can be used in fields such as ground-feature extraction from remote sensing images and target recognition in robot vision.

Description

A target boundary detection method based on scale space and subgraphs

Technical field

The invention relates to the field of digital image processing, and in particular to a target boundary detection method based on scale space and subgraphs.

Background art

Target boundary detection is a fundamental problem in image processing and computer vision; its main purpose is to identify locations in a digital image where the brightness changes sharply. Significant changes in image properties usually reflect important events: discontinuities in depth, discontinuities in surface orientation, changes in material properties, and changes in scene illumination. Target boundary detection is a research area within image processing and computer vision, especially feature extraction. With the continuous development of science and technology, new techniques have emerged in this field in recent years, such as wavelet multi-scale methods, fractal theory, mathematical morphology, artificial intelligence, and genetic algorithms; these methods are robust to noise and detect target boundaries more reliably. However, current target boundary detection methods achieve good results only under restricted conditions, and each has shortcomings, in particular the trade-off between detection accuracy and noise robustness. Some methods detect boundaries accurately but handle noise poorly; others are robust to noise but insufficiently accurate; still others balance the two reasonably well but are complex and slow. No single boundary detection method suits all requirements. Finding a simple target boundary detection algorithm that balances detection accuracy against noise robustness has therefore long been an important direction in image processing and analysis research.

Summary of the invention

The present invention provides a target boundary detection method based on scale space and subgraphs, which overcomes the difficulty of detecting target boundaries in digital images. Using the scale space and a subgraph set as masks, it can detect various kinds of targets in digital images without manual intervention, with a high degree of automation.

The technical scheme adopted to achieve the object of the present invention is:

Step 1: input a digital image I;

Step 2: set a mask window set SW in the digital image I, where each window contained in SW has size L1 × L2;

Step 3: construct a scale space SI based on the digital image I;

Step 4: decompose all images contained in the scale space SI with the mask window set SW to obtain a subgraph set SR;

Step 5: extract the point feature gf_xy of pixel (x, y) in subgraph SR_a of the subgraph set SR, and form the point feature vector GF_a of SR_a from the point features of all pixels in SR_a, where 1 ≤ a ≤ A, A is the number of subgraphs in SR, GF_a = (gf_1, gf_2, gf_3, …, gf_{(y-1)×L1+x}, …, gf_{L1×L2}), with 1 ≤ x ≤ L1 and 1 ≤ y ≤ L2; that is, GF_a is an (L1 × L2)-dimensional row vector;

Step 6: compute the region estimate V_SRa of subgraph SR_a by:

V_SRa = GF_a · WT

where · denotes the inner product operation and WT is the template weight vector of the subgraph, whose dimension matches that of GF_a;

Step 7: set a threshold V_T; when V_SRa is greater than V_T, keep subgraph SR_a as a candidate target region; otherwise, delete SR_a;

Step 8: iterate steps 5-7 until all subgraphs SR_a contained in the subgraph set SR have been traversed, obtaining the candidate target region set SC;

Step 9: segment region SC_b contained in the candidate target region set SC to obtain the segmented region set SA contained in SC_b, where 1 ≤ b ≤ B and B is the number of regions in SC;

Step 10: compute the credibility CV_c of segmented region SA_c contained in the set SA by the formula below, and assign CV_c to every pixel contained in SA_c, where 1 ≤ c ≤ C and C is the number of elements in SA:

[equation image: formula for CV_c in terms of SM(SA_c, SA_q), D_s(SA_c, SA_q), α and w_c]

where SM(SA_c, SA_q) is the grayscale similarity between segmented regions SA_c and SA_q; D_s(SA_c, SA_q) is the proximity distance between them, defined as the Euclidean distance between the centroids of SA_c and SA_q; α is the contribution control coefficient of the proximity distance D_s(SA_c, SA_q), a smaller value giving D_s(SA_c, SA_q) a larger contribution to the credibility CV_c; and w_c is the ratio of the pixel count of SA_c to the pixel count of region SC_b, which measures the size of SA_c;

Step 11: iterate steps 9-10 until all segmented regions SA_c in SA and all regions SC_b in SC have been traversed;

Step 12: perform superpixel segmentation on the input digital image I to obtain the superpixel block set SSB;

Step 13: iteratively compute the average credibility ACV_d of superpixel block SSB_d contained in SSB, where 1 ≤ d ≤ D and D is the number of superpixel blocks;

Step 14: apply a nonlinear transformation to ACV_d by the formula below, obtaining the transformed average credibility EACV_d of SSB_d:

[equation image: nonlinear transformation from ACV_d to EACV_d]

and assign EACV_d to every pixel contained in SSB_d;

Step 15: iterate steps 13-14 until all superpixel blocks SSB_d in SSB have been traversed, obtaining a transformed image Ic whose pixel values are the transformed average credibilities;

Step 16: binarize the transformed image Ic and extract boundaries from the binarization result, obtaining the boundary set S_edge;

Step 17: superimpose the boundary set S_edge on the original image as the output result. The enhancement method described in step 2 adopts the histogram equalization method.

The positions of the windows in the mask window set SW described in step 2 are random; the windows may overlap, and SW completely covers the digital image I.

The scale space SI described in step 3 comprises images I, I_1, I_2, …, I_n, where n is the number of layers of the scale space SI,

[equation images: construction of the layers I_1, I_2, …, I_n by convolution with g(x, y, t)]

where ⊗ denotes the convolution operator, g(x, y, t) is the two-dimensional convolution kernel [equation image: definition of g(x, y, t)], (x, y) are the pixel coordinates in the image, and t is the scale space factor.

The method for decomposing the scale space SI described in step 4 is: perform an AND operation between each mask window SW_h contained in the mask window set SW and every image contained in SI, and crop the image region at the position of SW_h as a subgraph, where 1 ≤ h ≤ H and H is the number of subgraphs contained in the subgraph set SR; iterate until all mask windows SW_h contained in SW have been traversed.

The point feature gf_xy of pixel (x, y) described in step 5 is extracted as:

gf_xy = min(|G_H(x, y)| + |G_V(x, y)|, 255)

where G_H(x, y) = I(x+1, y) - I(x-1, y) and G_V(x, y) = I(x, y+1) - I(x, y-1); |G_H(x, y)| is the gradient magnitude of pixel (x, y) in the horizontal direction, |G_V(x, y)| is the gradient magnitude of pixel (x, y) in the vertical direction, and min() is the minimum function.

The grayscale similarity SM(SA_c, SA_q) between segmented regions SA_c and SA_q described in step 10 is calculated by the following formula:

[equation image: formula for SM(SA_c, SA_q)]

where n1 and n2 are the gray values of segmented regions SA_c and SA_q respectively; N1 and N2 are the numbers of gray levels of SA_c and SA_q respectively; [symbol image] is the probability that gray value n1 appears in SA_c; [symbol image] is the probability that gray value n2 appears in SA_q; and O(n1, n2) is the Euclidean distance between gray values n1 and n2.

The beneficial effects of the invention are: the problem of low target boundary detection accuracy in digital images is solved, and full automation is achieved. The method can be used in fields such as ground-feature extraction from remote sensing images and target recognition in robot vision.

Description of the drawings

Figure 1 is the overall processing flowchart of the present invention;

Figure 2 is a schematic diagram of the mask window arrangement of the present invention.

Detailed description of the embodiments

The specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.

Figure 1 is the overall processing flowchart of the present invention. As shown in Figure 1, 101 is the digital image input step, 102 sets the mask window set, 103 constructs the scale space, 104 decomposes the scale space, 105 extracts point feature vectors, 106 computes region estimates, 107 extracts the candidate target region set, 108 segments the candidate target region set, 109 computes the credibility of the segmented region set, 110 performs superpixel segmentation, 111 computes the average credibility of the superpixel block set, 112 applies the nonlinear transformation, 113 performs binarization and boundary extraction, and 114 outputs the target boundary.

Step 101: input a grayscale digital image I with 256 gray levels;

Step 102: set a mask window set SW in the digital image I, where each window contained in SW has size L1 × L2, with L1 = L2 = 40. The positions of the windows are random; to improve target precision, the windows may overlap, and SW completely covers the digital image I;
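The window layout of step 102 can be sketched as follows. The patent only requires random positions, possible overlap, and full coverage of I, so the grid-plus-extra-random-windows strategy below (and the names `make_mask_windows` and `extra`) are assumptions made for illustration:

```python
import numpy as np

def make_mask_windows(h, w, win=40, extra=20, seed=0):
    """Generate top-left corners of win x win mask windows for an h x w
    image: a tiling grid guarantees full coverage, and `extra` random
    windows (possibly overlapping) are added on top.  The coverage
    strategy is an assumption; the patent only requires random,
    possibly overlapping windows that fully cover the image."""
    rng = np.random.default_rng(seed)
    ys = list(range(0, h - win + 1, win))
    xs = list(range(0, w - win + 1, win))
    if ys[-1] != h - win:          # make sure the last row/column of
        ys.append(h - win)         # windows reaches the image border
    if xs[-1] != w - win:
        xs.append(w - win)
    windows = [(y, x) for y in ys for x in xs]
    for _ in range(extra):
        windows.append((int(rng.integers(0, h - win + 1)),
                        int(rng.integers(0, w - win + 1))))
    return windows
```

Each returned pair `(y, x)` is the top-left corner of one mask window SW_h; cropping `img[y:y+win, x:x+win]` for every window yields the subgraph set of step 104.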

Step 103: construct the scale space SI based on the digital image I. SI comprises images I, I_1, I_2, …, I_n, where n is the number of layers of the scale space SI; balancing accuracy against running time, n is set to 4,

[equation images: construction of the layers I_1, I_2, I_3, I_4 by convolution with g(x, y, t)]

where ⊗ denotes the convolution operator, g(x, y, t) is the two-dimensional convolution kernel [equation image: definition of g(x, y, t)], (x, y) are the pixel coordinates in the image, and t is the scale space factor;
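A minimal sketch of the scale-space construction of step 103. The layer formulas and the kernel g(x, y, t) appear in the patent only as images, so two assumptions are made here: g is a Gaussian with variance t, and each layer smooths the previous one:

```python
import numpy as np

def gaussian_kernel(t, radius=None):
    """1-D Gaussian kernel for scale factor t (sigma^2 = t is an
    assumption about how t maps to the filter width)."""
    if radius is None:
        radius = int(3 * np.sqrt(t)) + 1
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x * x / (2.0 * t))
    return k / k.sum()

def build_scale_space(img, n=4, t=2.0):
    """Build SI = [I, I_1, ..., I_n], each layer being the previous one
    convolved with the (separable) 2-D Gaussian g(x, y, t)."""
    k = gaussian_kernel(t)
    layers = [np.asarray(img, dtype=np.float64)]
    for _ in range(n):
        prev = layers[-1]
        # separable 2-D convolution: filter rows, then columns
        rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, prev)
        both = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
        layers.append(both)
    return layers
```

With n = 4 as in the embodiment, the list holds the original image plus four progressively smoothed layers.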

Step 104: decompose all images contained in the scale space SI with the mask window set SW to obtain the subgraph set SR. The specific method is: perform an AND operation between each mask window SW_h contained in SW and every image contained in SI, and crop the image region at the position of SW_h as a subgraph, where 1 ≤ h ≤ H and H is the number of subgraphs contained in SR; iterate until all mask windows SW_h contained in SW have been traversed;

Step 105: extract the point feature gf_xy of pixel (x, y) in subgraph SR_a of the subgraph set SR, and form the point feature vector GF_a of SR_a from the point features of all pixels in SR_a, where 1 ≤ a ≤ A and A is the number of subgraphs in SR, GF_a = (gf_1, gf_2, gf_3, …, gf_{(y-1)×L1+x}, …, gf_{L1×L2}), with 1 ≤ x ≤ L1 and 1 ≤ y ≤ L2. Since L1 = L2 = 40, GF_a is a 40 × 40 = 1600-dimensional row vector. The point feature gf_xy of pixel (x, y) is extracted as:

gf_xy = min(|G_H(x, y)| + |G_V(x, y)|, 255)

where G_H(x, y) = I(x+1, y) - I(x-1, y) and G_V(x, y) = I(x, y+1) - I(x, y-1); |G_H(x, y)| is the gradient magnitude of pixel (x, y) in the horizontal direction, |G_V(x, y)| is the gradient magnitude in the vertical direction, and min() is the minimum function;
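The point-feature extraction of step 105 maps directly onto array operations; a sketch follows, in which border pixels are handled by edge replication (an assumption, since the patent does not specify border treatment):

```python
import numpy as np

def point_features(sub):
    """Per-pixel point feature gf_xy = min(|G_H| + |G_V|, 255) with
    central differences G_H(x,y) = I(x+1,y) - I(x-1,y) and
    G_V(x,y) = I(x,y+1) - I(x,y-1).  Border pixels use edge
    replication (an assumption)."""
    f = np.asarray(sub, dtype=np.float64)
    p = np.pad(f, 1, mode="edge")
    gh = np.abs(p[1:-1, 2:] - p[1:-1, :-2])   # |G_H|: difference along x
    gv = np.abs(p[2:, 1:-1] - p[:-2, 1:-1])   # |G_V|: difference along y
    return np.minimum(gh + gv, 255.0)

def feature_vector(sub):
    """Flatten the L1 x L2 point-feature map into the row vector GF_a."""
    return point_features(sub).ravel()
```

For a 40 × 40 subgraph, `feature_vector` returns the 1600-dimensional GF_a described above.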

Step 106: compute the region estimate V_SRa of subgraph SR_a by:

V_SRa = GF_a · WT

The region estimate V_SRa evaluates the likelihood that subgraph SR_a contains a target: the larger the value, the more likely the subgraph contains a target. Here · denotes the inner product operation, and WT is the template weight vector of the subgraph, whose dimension matches that of GF_a (also 1600);

Step 107: set a threshold V_T; when V_SRa is greater than V_T, keep subgraph SR_a as a candidate target region; otherwise, delete SR_a.

Iterate steps 105-107 until all subgraphs SR_a contained in the subgraph set SR have been traversed, obtaining the candidate target region set SC;
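Steps 106-107 reduce to an inner product and a threshold test. A sketch, in which the template weight vector WT and the threshold V_T are taken as given inputs (the patent does not describe how WT is obtained):

```python
import numpy as np

def region_estimate(gf, wt):
    """V_SRa = GF_a . WT (inner product); larger values mean the
    subgraph is more likely to contain a target."""
    return float(np.dot(gf, wt))

def candidate_regions(feature_vectors, wt, v_t):
    """Keep the indices of subgraphs whose region estimate exceeds the
    threshold V_T; the rest are discarded (steps 106-107)."""
    return [a for a, gf in enumerate(feature_vectors)
            if region_estimate(gf, wt) > v_t]
```

Running `candidate_regions` over the feature vectors of all subgraphs yields the index set of the candidate target regions SC.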

Step 108: segment region SC_b contained in the candidate target region set SC with a region-growing segmentation algorithm to obtain the segmented region set SA contained in SC_b, where 1 ≤ b ≤ B and B is the number of regions in SC;
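Step 108 names region growing but fixes no parameters; a minimal sketch assuming 4-connectivity and a fixed gray-value tolerance relative to each region's seed:

```python
import numpy as np
from collections import deque

def region_grow(img, tol=10.0):
    """Region-growing segmentation of one candidate region SC_b.
    4-connectivity and a fixed tolerance around the seed gray value
    are assumptions; the patent names the algorithm but not its
    settings.  Returns an integer label map of segmented regions SA_c."""
    h, w = img.shape
    labels = np.full((h, w), -1, dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            seed = float(img[sy, sx])       # gray value of the new seed
            labels[sy, sx] = current
            queue = deque([(sy, sx)])
            while queue:                     # breadth-first growth
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and abs(float(img[ny, nx]) - seed) <= tol):
                        labels[ny, nx] = current
                        queue.append((ny, nx))
            current += 1
    return labels
```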

Step 109: compute the credibility CV_c of segmented region SA_c contained in the set SA by the formula below, and assign CV_c to every pixel contained in SA_c, where 1 ≤ c ≤ C and C is the number of elements in SA:

[equation image: formula for CV_c in terms of SM(SA_c, SA_q), D_s(SA_c, SA_q), α and w_c]

where SM(SA_c, SA_q) is the grayscale similarity between segmented regions SA_c and SA_q; D_s(SA_c, SA_q) is the proximity distance between them, defined as the Euclidean distance between the centroids of SA_c and SA_q; α is the contribution control coefficient of the proximity distance D_s(SA_c, SA_q), a smaller value giving D_s(SA_c, SA_q) a larger contribution to the credibility CV_c; and w_c is the ratio of the pixel count of SA_c to the pixel count of region SC_b, which measures the size of SA_c;

The grayscale similarity SM(SA_c, SA_q) between segmented regions SA_c and SA_q is calculated by the following formula:

[equation image: formula for SM(SA_c, SA_q)]

where n1 and n2 are the gray values of segmented regions SA_c and SA_q respectively; N1 and N2 are the numbers of gray levels of SA_c and SA_q respectively, and from step 101, N1 = N2 = 256; [symbol image] is the probability that gray value n1 appears in SA_c; [symbol image] is the probability that gray value n2 appears in SA_q; and O(n1, n2) is the Euclidean distance between gray values n1 and n2;

Iterate steps 108-109 until all segmented regions SA_c in SA and all regions SC_b in SC have been traversed;

Step 110: perform superpixel segmentation on the input digital image I to obtain the superpixel block set SSB;

Step 111: iteratively compute the average credibility ACV_d of superpixel block SSB_d contained in SSB, where 1 ≤ d ≤ D and D is the number of superpixel blocks;

Step 112: apply a nonlinear transformation to the average credibility ACV_d of superpixel block SSB_d by the formula below, obtaining the transformed average credibility EACV_d of SSB_d:

[equation image: nonlinear transformation from ACV_d to EACV_d]

and assign EACV_d to every pixel contained in SSB_d.

Iterate steps 111-112 until all superpixel blocks SSB_d in SSB have been traversed, obtaining the transformed image Ic whose pixel values are the transformed average credibilities;
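Steps 111-112 average the per-pixel credibility over each superpixel block and write the mean back. A sketch, in which the nonlinear transformation (shown in the patent only as an equation image) is left as a pluggable function with the identity as a placeholder:

```python
import numpy as np

def superpixel_average(cv_map, labels):
    """For every superpixel block SSB_d in the label map, compute the
    mean credibility ACV_d over its pixels and assign that mean back
    to each pixel of the block (step 111 plus the assignment in
    step 112)."""
    out = np.empty(cv_map.shape, dtype=np.float64)
    for d in np.unique(labels):
        mask = labels == d
        out[mask] = cv_map[mask].mean()
    return out

def transform(acv_map, f=lambda v: v):
    """Apply the nonlinear transformation elementwise to obtain the
    transformed image Ic.  The patent gives the transform only as an
    equation image, so the identity here is a placeholder."""
    return f(acv_map)
```

`labels` would come from any superpixel algorithm (e.g. SLIC); the patent does not name a specific one.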

Step 113: binarize the transformed image Ic and extract boundaries from the binarization result, obtaining the boundary set S_edge;

Step 114: superimpose the boundary set S_edge on the original image as the output result.
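Steps 113-114 can be sketched as a fixed-threshold binarization followed by inner-boundary extraction; the threshold choice and the boundary operator are assumptions, as the patent names neither:

```python
import numpy as np

def extract_boundary(binary):
    """Boundary set S_edge taken as the foreground pixels that have at
    least one background 4-neighbour.  This morphological
    inner-boundary definition is an assumption; the patent does not
    name a specific boundary operator."""
    b = np.asarray(binary, dtype=bool)
    p = np.pad(b, 1, mode="constant", constant_values=False)
    # a pixel is interior when all four 4-neighbours are foreground
    interior = (p[1:-1, 2:] & p[1:-1, :-2] & p[2:, 1:-1] & p[:-2, 1:-1])
    return b & ~interior

def target_boundary(ic, thresh):
    """Binarize the transformed image Ic with a fixed threshold and
    extract the boundary set (steps 113-114)."""
    return extract_boundary(ic > thresh)
```

The resulting boolean mask can be overlaid on the original image to produce the final output.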

Figure 2 is a schematic diagram of the mask window arrangement of the present invention. As shown in Figure 2, 201 is the input image I, 202 is one mask window of the mask window set SW, and 203 is the overlap between two mask windows; the region of the input image I (201) cropped by the mask window (202) is one subgraph of the subgraph set SR.

Claims (6)

1. A target boundary detection method based on scale space and subgraph is characterized by comprising the following steps:
step 1: inputting a digital image I;
step 2: setting a mask window set SW in the digital image I, wherein the size of each window contained in the mask window set SW is L 1 ×L 2
And 3, step 3: constructing a scale space SI based on the digital image I;
and 4, step 4: decomposing all images contained in the scale space SI by using a mask window set SW to obtain a sub-image set SR;
and 5: extracting subgraph SR contained in subgraph set SR a Dot characteristic gf of middle pixel (x, y) xy From the sub-diagram SR a The point characteristics of all the pixels in the sub-graph SR a Point feature vector GF of a Wherein a is more than or equal to 1 and less than or equal to A, A is the number of subgraphs in the subgraph set SR, GF a =(gf 1 ,gf 2 ,gf 3 ,…,gf (y-1)×L+x ,…,gf L1×L2 ) Wherein x is more than or equal to 1 and less than or equal to L 1 ,1≤y≤L 2 I.e. GF a Is L 1 ×L 2 A row vector of dimensions;
step 6: calculation of the subgraph SR by a Is estimated by the region of SRa
V SRa =GF a ·WT
Where the symbol represents an inner product operation, WT is the template weight vector of the subgraph, its vector dimension and GF a The consistency is achieved;
and 7: setting a threshold value V T When V is SRa Greater than V T When, sub-graph SR a Reserved as a candidate target region, otherwise, the sub-graph SR is deleted a
And 8: iteratively executing steps 5-7 until all sub-graphs SR included in the sub-graph set SR are traversed a Obtaining a candidate target area set SC;
and step 9: for the region SC contained in the candidate target region set SC b Dividing to obtain regions SC b B is more than or equal to 1 and less than or equal to B, wherein B is the number of the areas in the candidate target area set SC;
step 10: the divided regions SA included in the divided region set SA are calculated by the following formula c Reliability CV of (a) c And dividing the area SA c Reliability CV of c Assign to a partitioned area SA c Each pixel is contained, wherein C is more than or equal to 1 and less than or equal to C, C is the number of elements in the set SA of the divided areas,
Figure FDA0003787663340000011
in the formula, SM (SA) c ,SA q ) As a divided area SA c And a divided area SA q Gray scale similarity therebetween, D s (SA c ,SA q ) As a divided area SA c And a divided area SA q The adjacent distance between them is defined as the dividing area SA c And a divided area SA q The Euclidean distance between two centroids, alpha being the proximity distance D s (SA c ,SA q ) The smaller the value of the contribution degree control coefficient is, the closer the distance D is s (SA c ,SA q ) For reliability CV c The greater the contribution of (a), w c As a divided area SA c Occupied area SC of the number of pixels b Is used for measuring the division area SA c The size of (d);
step 11: the steps 9-10 are iteratively executed until all the segmented areas SA comprised by the segmented area set SA are traversed c And traversing all the regions SC contained in the candidate target region set SC b
Step 12: performing superpixel segmentation on an input digital image I to obtain a superpixel block set SSB;
step 13: iteratively calculating the super-pixel block SSB included in the super-pixel block set SSB d Average reliability of ACV d D is more than or equal to 1 and less than or equal to D, and D is the number of the super pixel blocks;
step 14: for super pixel block SSB by d Average reliability of ACV d Performing nonlinear transformation to obtain super pixel block SSB d Mean likelihood of transformation of EACV d
Figure FDA0003787663340000021
And the super pixel block SSB d Mean likelihood of transformation of EACV d Assign to super pixel block SSB d Each pixel contained;
step 15: steps 13-14 are iteratively performed until all super pixel blocks SSB included in the super pixel block set SSB are traversed d Obtaining a transformation image Ic consisting of pixels with the values of transformation average reliability;
step 16: binarize the transformed image Ic and perform boundary extraction on the binarization result to obtain a boundary set S_edge;
step 17: superimpose the boundary set S_edge on the original image as the output result.
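Steps 16-17 admit a compact sketch. The threshold value, the 4-neighbour boundary definition, and the overlay intensity below are illustrative assumptions, not choices fixed by the claims.

```python
import numpy as np

def extract_boundary(ic, thresh=0.5):
    """Step 16: binarize Ic, then mark as boundary every foreground pixel
    that has at least one 4-neighbour outside the foreground."""
    fg = ic > thresh
    pad = np.pad(fg, 1, mode="constant", constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return fg & ~interior               # boundary set S_edge as a mask

def overlay(image, boundary, value=255):
    """Step 17: superimpose the boundary set on the original image."""
    out = image.copy()
    out[boundary] = value
    return out

ic = np.zeros((6, 6))
ic[1:5, 1:5] = 1.0                      # a 4x4 foreground square
edge = extract_boundary(ic)
out = overlay((ic * 100).astype(int), edge)
```

For the 4x4 square, the extracted boundary is the 12-pixel ring around the 2x2 interior.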
2. The method of claim 1, wherein the mask windows in the mask window set SW of step 2 are randomly positioned, the windows may overlap one another, and the mask window set SW completely covers the digital image I.
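Claim 2 only requires that the windows be randomly placed, possibly overlapping, and jointly cover I. One hedged way to guarantee coverage is to start from a regular tiling and add random windows on top; the window size, count, and function names below are assumptions for illustration.

```python
import random

def make_mask_windows(width, height, win=32, extra=10, seed=0):
    """Return a list of (x, y, w, h) windows: a regular tiling that
    guarantees full coverage of I, plus `extra` randomly positioned
    (and hence possibly overlapping) windows."""
    rng = random.Random(seed)
    sw = [(x, y, win, win)
          for y in range(0, height, win)
          for x in range(0, width, win)]
    for _ in range(extra):
        sw.append((rng.randrange(max(1, width - win + 1)),
                   rng.randrange(max(1, height - win + 1)), win, win))
    return sw

def covers(sw, width, height):
    """Check that the window set leaves no pixel of the image uncovered."""
    hit = [[False] * width for _ in range(height)]
    for x, y, w, h in sw:
        for yy in range(y, min(y + h, height)):
            for xx in range(x, min(x + w, width)):
                hit[yy][xx] = True
    return all(all(row) for row in hit)

windows = make_mask_windows(64, 48, win=32, extra=5)
```

For a 64x48 image with 32-pixel windows, the tiling contributes four windows and the five random ones overlap them.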
3. The method of claim 1, wherein the scale space SI in step 3 comprises the images I, I_1, I_2, …, I_n, where n is the number of layers of the scale space SI,
[formula images FDA0003787663340000022, FDA0003787663340000023, …, FDA0003787663340000024: expressions for I_1, I_2, …, I_n]
where [symbol image FDA0003787663340000025] is the convolution operator, g(x, y, t) is the two-dimensional convolution kernel given by [formula image FDA0003787663340000026], (x, y) are the pixel coordinates in the image, and t is the scale space factor.
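The kernel g(x, y, t) is given only as a formula image; in conventional linear scale-space theory it is the two-dimensional Gaussian g(x, y, t) = exp(-(x² + y²)/(2t)) / (2πt), which is assumed in the sketch below. Each layer is computed from the original image I, as the claim states.

```python
import numpy as np

def gaussian_kernel(t, radius=None):
    """2-D Gaussian kernel g(x, y, t); t is the scale-space factor (variance).
    Assumed form: exp(-(x^2 + y^2) / (2t)) / (2*pi*t), then normalised."""
    if radius is None:
        radius = max(1, int(3 * np.sqrt(t)))
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * t)) / (2 * np.pi * t)
    return g / g.sum()                  # normalise so brightness is preserved

def build_scale_space(image, ts):
    """I_k = I * g(x, y, t_k): one smoothed layer per scale factor in ts,
    each convolved from the original image I (not cascaded)."""
    si = [image]
    for t in ts:
        k = gaussian_kernel(t)
        r = k.shape[0] // 2
        padded = np.pad(image, r, mode="edge")
        out = np.zeros_like(image, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] = (padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * k).sum()
        si.append(out)
    return si

img = np.zeros((9, 9)); img[4, 4] = 1.0     # a unit impulse
si = build_scale_space(img, ts=[1.0, 4.0])
```

Blurring the impulse spreads its mass: the peak shrinks as t grows, while the total brightness is (approximately) preserved.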
4. The method of claim 1, wherein the method of decomposing the scale space SI in step 4 comprises: AND each mask window SW_h contained in the mask window set SW with every image contained in the scale space SI, and crop the image region at the position corresponding to the mask window SW_h as a subgraph, where 1 ≤ h ≤ H and H is the number of subgraphs contained in the subgraph set SR; this step is executed iteratively until all mask windows SW_h contained in the mask window set SW have been traversed.
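The decomposition of claim 4 amounts to cropping each mask window out of every layer of SI, so the subgraph set SR gets one entry per (window, layer) pair. A minimal sketch, with hypothetical names:

```python
import numpy as np

def decompose(scale_space, windows):
    """Claim 4: crop the region of each mask window SW_h out of every image
    contained in the scale space SI; each crop is one subgraph of the set SR."""
    sr = []
    for (x, y, w, h) in windows:        # each mask window SW_h
        for layer in scale_space:       # every image contained in SI
            sr.append(layer[y:y + h, x:x + w])
    return sr

si = [np.arange(36).reshape(6, 6), np.ones((6, 6))]   # two scale-space layers
sw = [(0, 0, 3, 3), (2, 2, 4, 4)]                      # two mask windows
sr = decompose(si, sw)
```

With two windows and two layers, SR holds H = 4 subgraphs.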
5. The method of claim 1, wherein the point feature gf_xy of pixel (x, y) in step 5 is extracted by:
gf_xy = min(|G_H(x, y)| + |G_V(x, y)|, 255)
where G_H(x, y) = I(x+1, y) - I(x-1, y) and G_V(x, y) = I(x, y+1) - I(x, y-1); |G_H(x, y)| is the gradient modulus of pixel (x, y) in the horizontal direction, |G_V(x, y)| is the gradient modulus of pixel (x, y) in the vertical direction, and min() is the minimum function.
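The point feature of claim 5 is fully specified, so it can be implemented directly; the sketch below computes it for interior pixels only, since the claim does not say how border pixels are handled.

```python
import numpy as np

def point_features(img):
    """gf_xy = min(|G_H| + |G_V|, 255) with central differences:
    G_H(x, y) = I(x+1, y) - I(x-1, y), G_V(x, y) = I(x, y+1) - I(x, y-1)."""
    gf = np.zeros_like(img, dtype=float)
    gh = img[1:-1, 2:] - img[1:-1, :-2]   # horizontal central difference
    gv = img[2:, 1:-1] - img[:-2, 1:-1]   # vertical central difference
    gf[1:-1, 1:-1] = np.minimum(np.abs(gh) + np.abs(gv), 255)
    return gf

img = np.zeros((5, 5)); img[:, 3:] = 300.0   # a strong vertical step edge
gf = point_features(img)
```

The step of height 300 saturates the feature at 255 on the edge, while flat regions yield 0.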
6. The method of claim 1, wherein the gray-scale similarity SM(SA_c, SA_q) between the segmented region SA_c and the segmented region SA_q in step 10 is calculated by the following formula:
[formula image FDA0003787663340000031: expression for the gray-scale similarity SM(SA_c, SA_q)]
where n1 and n2 denote gray values of the segmented regions SA_c and SA_q, N1 and N2 are respectively the numbers of gray levels of the segmented regions SA_c and SA_q, p_{SA_c}(n1) is the probability that gray value n1 occurs in the segmented region SA_c, p_{SA_q}(n2) is the probability that gray value n2 occurs in the segmented region SA_q, and o(n1, n2) is the Euclidean distance between gray value n1 and gray value n2.
CN201910885873.8A 2019-09-19 2019-09-19 Target boundary detection method based on scale space and subgraph Expired - Fee Related CN110648341B (en)

Publications (2)

Publication Number Publication Date
CN110648341A 2020-01-03
CN110648341B 2022-09-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20220930