CN103413332B - Image segmentation method based on a two-channel texture segmentation active contour model - Google Patents
Image segmentation method based on a two-channel texture segmentation active contour model
- Publication number
- CN103413332B CN103413332B CN201310371336.4A CN201310371336A CN103413332B CN 103413332 B CN103413332 B CN 103413332B CN 201310371336 A CN201310371336 A CN 201310371336A CN 103413332 B CN103413332 B CN 103413332B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses an image segmentation method based on a two-channel texture segmentation active contour model, in the technical field of digital image processing. The method comprises: extracting the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image; computing the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel; obtaining a gray feature channel and an edge feature channel from the texture features; establishing a two-channel texture segmentation active contour model; and completing the image segmentation by minimizing the texture segmentation model through the evolution of a level set function. The invention improves the efficiency of the algorithm, avoids mis-segmentation caused by gray-level information, and improves the accuracy of the algorithm.
Description
Technical Field
The invention belongs to the technical field of digital image processing, and in particular relates to an image segmentation method based on a two-channel texture segmentation active contour model.
Background Art
Image segmentation, and texture image segmentation in particular, has long been an important and difficult problem in computer vision and digital image processing. Texture segmentation divides the target image into several non-overlapping regions according to the consistency of texture features within each region. The commonly used approach is to first extract feature information from the image and then segment the image in the feature space according to a chosen model. Among such methods, the active contour method based on level set theory has attracted the attention of researchers because it can automatically handle the splitting and merging of evolving curves, and it is widely used in texture segmentation.
For texture feature extraction, Gabor filtering and the structure tensor method are the most representative. The structure tensor method usually extracts texture features by iteratively solving a nonlinear diffusion equation with the Additive Operator Splitting (AOS) scheme; the solution process of the AOS algorithm is described in detail (pages 12-15) in the thesis "AOS algorithm for image processing based on the ROF model and the C-V model" (Huang Chengqi, master's thesis, Jilin University, 2008). Gabor filtering uses filter banks over different orientations and frequency bands to obtain texture descriptions that fully characterize the features. Typically, a Gabor filter bank is first used to extract a group of multidimensional feature vectors from the texture image, and an active contour model, such as the multi-channel C-V (Chan-Vese) model, then segments the image according to the mean of each feature image inside and outside the segmentation curve. In addition, a texture edge detection operator based on the Beltrami framework has been incorporated into such models, which improves the segmentation accuracy for texture images to some extent.
However, Gabor filtering is computationally cumbersome and produces a large amount of redundant information, making the algorithm overly complex, while the C-V model cannot handle texture images with pronounced structure well. The structure tensor method based on anisotropic diffusion decomposes an image into a gray channel and gradient channels in three directions (horizontal, vertical, and 45°); applying nonlinear diffusion to each channel effectively smooths texture detail and extracts gray and gradient features. Commonly, Gaussian fitting, the Wasserstein distance metric, local scale measures, and similar techniques are combined with the structure tensor, achieving good results in the segmentation of natural texture images; however, the structure tensor faces the same problem as Gabor filtering: processing high-dimensional features makes image segmentation slow. In addition, histogram features and various kinds of local information have also been used for image segmentation. Because natural texture images are complex and diverse, every algorithmic model applies only to particular types of texture images, and improving both the computational efficiency and the segmentation performance of these algorithms remains a long-standing challenge.
Summary of the Invention
The object of the present invention is to propose an image segmentation method based on a two-channel texture segmentation active contour model, so as to overcome the shortcomings of the currently used texture image segmentation methods.
To achieve the above object, the technical solution proposed by the present invention is an image segmentation method based on a two-channel texture segmentation active contour model, characterized in that the method comprises:
Step 1: extract the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image;
Step 2: compute the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image;
Step 3: obtain a gray feature channel and an edge feature channel from the texture features;
Step 4: establish the two-channel texture segmentation active contour model;
Step 5: complete the image segmentation by minimizing the texture segmentation model through the evolution of a level set function;
Said step 2 comprises: from the gray value I(x,y), the horizontal gradient field Ix(x,y), and the vertical gradient field Iy(x,y) of the image, computing the corresponding image features by evolving the nonlinear diffusion equation ∂ui/∂t = div(g(|∇ui|)∇ui), where div is the divergence operator and ∇ is the gradient operator; initially u1, u2, u3 represent the images I, Ix, Iy respectively, and when the iteration terminates the gray feature u1(x,y), the horizontal texture feature u2(x,y), and the vertical texture feature u3(x,y) of the image are obtained;
The specific process of said step 3 is:
Step 301: from the texture features u2(x,y) and u3(x,y) corresponding to the horizontal and vertical gradient fields of each pixel in the image, extract the edge feature uedge(x,y) = sqrt(u2(x,y) + u3(x,y)), i.e. the magnitude |∇I| of the smoothed gradient of the gray value I of the pixel;
Step 302: compute the gray feature channel u1'(x,y) and the edge feature channel u2'(x,y) by the normalization ui'(x,y) = (Li(x,y) − min Li)/(max Li − min Li), where i = 1, 2, L1 = u1(x,y), and L2 = uedge(x,y);
The two-channel texture segmentation active contour model is established as
F = μ∫Ω δ(φ(x,y))|∇φ(x,y)|dxdy + α·ω(|c1+ − c1−|)·[λ1+∫Ω|u1'(x,y) − c1+|²H(φ(x,y))dxdy + λ1−∫Ω|u1'(x,y) − c1−|²(1 − H(φ(x,y)))dxdy] + β·[λ2+∫Ω|u2'(x,y) − c2+|²H(φ(x,y))dxdy + λ2−∫Ω|u2'(x,y) − c2−|²(1 − H(φ(x,y)))dxdy],
with the sigmoid adjustment coefficient ω(d) = 1/(1 + e^(−a·d));
where c1± and c2± are the means inside and outside the curve C in the gray feature channel and the edge feature channel, respectively;
the curve C satisfies C = {(x,y): φ(x,y) = 0}, where φ(x,y) is the level set function;
μ, α, and β are the parameters of the length term, the gray term, and the edge term, respectively;
Ω is the integration domain, i.e. the image domain;
δ(·) is the Dirac function;
∇φ(x,y) is the gradient of the level set function φ(x,y);
c1+ and c1− are the gray means of the regions inside and outside the curve C in the gray feature channel;
λ1+ and λ1− are the parameters of the gray feature channel, with λ1+ > 0 and λ1− > 0;
u1'(x,y) is the value of the gray feature channel at pixel (x,y);
H(·) is the Heaviside function;
a is a constant used to adjust the shape of the sigmoid function, with a > 0;
c2+ and c2− are the gray means of the regions inside and outside the curve C in the edge feature channel;
λ2+ and λ2− are the parameters of the edge feature channel, with λ2+ > 0 and λ2− > 0;
u2'(x,y) is the value of the edge feature channel at pixel (x,y).
Said extraction of the horizontal gradient field of each pixel in the image specifically uses the formula Ix(i,j) = I(i+1,j) − I(i,j) to compute the horizontal gradient field of the pixel in row i, column j, where I(i,j) is the gray value of the pixel in row i, column j, and I(i+1,j) is the gray value of the pixel in row i+1, column j.
Said extraction of the vertical gradient field of each pixel in the image specifically uses the formula Iy(i,j) = I(i,j+1) − I(i,j) to compute the vertical gradient field of the pixel in row i, column j, where I(i,j) is the gray value of the pixel in row i, column j, and I(i,j+1) is the gray value of the pixel in row i, column j+1.
Said step 5 comprises:
Step 501: randomly specify an initial closed segmentation curve C0 and compute the initial level set function φ0(x,y) corresponding to C0;
Step 502: set the model parameters μ, α, β and the channel parameters λ1±, λ2±;
Step 503: let k = 0 and compute the means inside and outside the initial closed segmentation curve C0;
The mean inside the initial closed segmentation curve C0 is computed as
ci+ = ∫Ω ui'(x,y)H(φ0(x,y))dxdy / ∫Ω H(φ0(x,y))dxdy;
the mean outside the initial closed segmentation curve C0 is computed as
ci− = ∫Ω ui'(x,y)(1 − H(φ0(x,y)))dxdy / ∫Ω (1 − H(φ0(x,y)))dxdy;
In the above two formulas, i = 1, 2;
Ω is the integration domain, i.e. the image domain;
u1'(x,y) is the value of the gray feature channel at pixel (x,y);
u2'(x,y) is the value of the edge feature channel at pixel (x,y);
H(·) is the Heaviside function;
Step 504: iteratively compute φk+1(x,y) according to the formula φk+1(x,y) − φk(x,y) = Δt × L(φk(x,y)), where L(φk) is the numerical approximation of the right-hand side of the curve evolution equation; Δt is the set time step, δε(·) is the regularized Dirac function, and φk(x,y) is the level set function obtained after the k-th iteration;
Step 505: extract the zero level set from the level set function φk+1(x,y); this zero level set is the evolution curve;
Step 506: judge whether the level set function φk+1(x,y) is stable: when the difference between the lengths of the evolution curves obtained in two successive iterations is smaller than a set threshold, φk+1(x,y) is stable and step 507 is executed; otherwise, set k = k + 1 and jump to step 504;
Step 507: take the evolution curve extracted from the level set function φk+1(x,y) as the segmentation curve, and segment the image with this segmentation curve to complete the image segmentation process.
By extracting the edge and gray features of the image as the segmentation feature set, the present invention avoids the cumbersome computation of high-dimensional feature groups and improves the efficiency of the algorithm; through the established two-channel texture segmentation C-V model, the curve evolution is driven mainly by the edge feature in regions of flat gray variation, which avoids mis-segmentation caused by gray-level information and improves the accuracy of the algorithm.
Description of the Drawings
Fig. 1 is a flowchart of the image segmentation method based on the two-channel texture segmentation active contour model;
Fig. 2 is the example texture image used in the simulations of the present invention;
Fig. 3a is the edge feature map extracted from Fig. 2;
Fig. 3b is the gray feature map extracted from Fig. 2;
Fig. 4a is the initial segmentation curve for Fig. 2;
Fig. 4b shows the segmentation process and result of the present invention for Fig. 2;
Fig. 4c shows the segmentation process and result of the basic C-V model for Fig. 2;
Fig. 5a shows the edge features extracted from three other texture images;
Fig. 5b shows the gray features extracted from three other texture images;
Fig. 5c shows the final segmentation results for the three other texture images.
Detailed Description
The preferred embodiments are described in detail below with reference to the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its applications.
Fig. 1 is a flowchart of the image segmentation method based on the two-channel texture segmentation active contour model. As shown in Fig. 1, the image segmentation method provided by the present invention comprises:
Step 1: extract the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image.
The prior art provides a variety of methods for extracting the gray values of the pixels of an image, any one of which may be selected. For example, for a color image whose pixels have the three RGB channels, a grayscale image is obtained whenever R = G = B: R = G = B = 255 is white, R = G = B = 0 is black, and R = G = B equal to some integer less than 255 gives the corresponding gray level.
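The patent leaves the choice of grayscale extraction open; as one common choice (not prescribed by the source), the ITU-R BT.601 luma weights can be used:

```python
import numpy as np

def to_gray(rgb):
    """Convert an HxWx3 RGB image to a grayscale image.

    The patent only requires *some* standard grayscale extraction; the
    ITU-R BT.601 luma weights used here are one common choice, not a
    value prescribed by the source.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```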
The horizontal gradient field of each pixel in the image is extracted with the formula
Ix(i,j) = I(i+1,j) − I(i,j)   (1)
In formula (1), Ix(i,j) is the horizontal gradient field of the pixel in row i, column j, I(i,j) is the gray value of the pixel in row i, column j, and I(i+1,j) is the gray value of the pixel in row i+1, column j.
The vertical gradient field of each pixel in the image is extracted with the formula
Iy(i,j) = I(i,j+1) − I(i,j)   (2)
In formula (2), Iy(i,j) is the vertical gradient field of the pixel in row i, column j, I(i,j) is the gray value of the pixel in row i, column j, and I(i,j+1) is the gray value of the pixel in row i, column j+1.
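Formulas (1) and (2) are plain forward differences; a minimal numpy sketch (border handling is not specified in the source, so replicating a zero gradient at the last row/column is an assumption made here):

```python
import numpy as np

def gradient_fields(I):
    """Forward-difference gradient fields per formulas (1) and (2).

    Ix(i, j) = I(i+1, j) - I(i, j);  Iy(i, j) = I(i, j+1) - I(i, j).
    The border rows/columns, unspecified in the source, are assigned a
    zero gradient here as an assumption.
    """
    I = np.asarray(I, dtype=np.float64)
    Ix = np.zeros_like(I)
    Iy = np.zeros_like(I)
    Ix[:-1, :] = I[1:, :] - I[:-1, :]   # difference along rows (index i)
    Iy[:, :-1] = I[:, 1:] - I[:, :-1]   # difference along columns (index j)
    return Ix, Iy
```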
Step 2: compute the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel in the image.
The present invention establishes a nonlinear diffusion equation, applies nonlinear diffusion filtering to the gray value, horizontal gradient field, and vertical gradient field of each pixel, and then extracts the smoothed texture features u1, u2, and u3 corresponding to the three quantities:
u = (u1, u2, u3) = TV(I, Ix², Iy²)   (3)
In formula (3), TV denotes the nonlinear diffusion equation shown in formula (4):
∂ui/∂t = div(g(|∇ui|)∇ui)   (4)
In formula (4), i = 1, 2, 3, so that u1, u2, and u3 are the texture features corresponding to the gray value, horizontal gradient field, and vertical gradient field of each pixel; div(·) is the divergence operation; g(·) is a monotonically decreasing function, here g(|∇ui|) = 1/sqrt(uix² + uiy² + ξ), where ξ is a set value; in the present invention ξ = e^(−10), uix is the horizontal gradient field of the texture feature ui, and uiy is its vertical gradient field. Formula (4) is solved iteratively with the Additive Operator Splitting (AOS) scheme to obtain the features u1 to u3; the specific steps are:
Sub-step 101: initialize the values of ui: let the initial values u1⁰, u2⁰, u3⁰ of the texture features be the gray value, the horizontal gradient field, and the vertical gradient field of each pixel, respectively, and let k = 0.
Sub-step 102: let ui = ui^k.
Sub-step 103: perform Gaussian smoothing with the formula vi = Kσ * ui, where vi is the image obtained by Gaussian smoothing of the feature ui and Kσ is the Gaussian kernel with standard deviation σ.
Sub-step 104: compute the diffusion coefficient g(|∇vi|) from the smoothed image vi.
Sub-step 105: use the Thomas algorithm to solve the equation vix = (2I − 4τAx)⁻¹ui in the x direction and the equation viy = (2I − 4τAy)⁻¹ui in the y direction, where I is the identity matrix whose order equals the number of image pixels, τ is the time step after discretizing the time domain, and Ax and Ay are the one-dimensional operators discretizing the diffusion along the x and y directions, respectively.
Sub-step 106: update ui according to the formula ui = vix + viy.
Sub-step 107: let ui^(k+1) = ui and judge whether k ≤ K holds; if k ≤ K, set k = k + 1 and return to sub-step 102; otherwise execute sub-step 108. K is a set value; in the present invention K = 30.
Sub-step 108: take the current ui^(k+1) as the texture features.
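The diffusion stage of formula (4) can be sketched compactly. Note the assumptions: this sketch uses an explicit time stepping instead of the semi-implicit AOS scheme of sub-steps 101-108 (the two behave similarly for small steps), the form g(s) = 1/sqrt(s² + ξ) is the reconstruction assumed above, and the defaults tau and xi are chosen for explicit-scheme stability rather than taken from the patent (which uses ξ = e^(−10)):

```python
import numpy as np

def nonlinear_diffusion(u0, steps=30, tau=0.02, xi=1e-2):
    """Explicit-scheme sketch of du/dt = div(g(|grad u|) grad u).

    Simplified illustration only: the patent solves this equation with
    a semi-implicit AOS scheme; tau and xi here are stability-driven
    assumptions, not the patent's values.
    """
    u = np.asarray(u0, dtype=np.float64).copy()
    for _ in range(steps):
        # forward differences (zero flux assumed past the border)
        ux = np.zeros_like(u)
        uy = np.zeros_like(u)
        ux[:-1, :] = u[1:, :] - u[:-1, :]
        uy[:, :-1] = u[:, 1:] - u[:, :-1]
        g = 1.0 / np.sqrt(ux ** 2 + uy ** 2 + xi)   # decreasing diffusivity
        fx = g * ux
        fy = g * uy
        # conservative divergence: div[i] = fx[i] - fx[i-1] (same along y)
        div = fx.copy()
        div[1:, :] -= fx[:-1, :]
        div += fy
        div[:, 1:] -= fy[:, :-1]
        u += tau * div
    return u
```

Because the flux form is conservative, the mean gray level is preserved while texture detail is smoothed away.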
Step 3: obtain the gray feature channel and the edge feature channel from the texture features.
By the definition of the gray and gradient features of a texture image, u1 still contains obvious texture structure and cannot be used alone for segmentation, while u2 and u3 each contain only part of the gradient information of the image: the horizontal (vertical) gradient field takes large values at the vertical (horizontal) edges of the image and small values along the horizontal (vertical) edge direction. The edge feature uedge is therefore defined as
uedge(x,y) = sqrt(u2(x,y) + u3(x,y))   (5)
In formula (5), uedge approximates the magnitude |∇I| of the smoothed gradient of the gray value I of the pixel. To avoid the influence of differing dynamic ranges, the value ranges of the two features are unified by formula (6), giving the gray channel u1'(x,y) and the edge channel u2'(x,y):
ui'(x,y) = (Li(x,y) − min Li)/(max Li − min Li)   (6)
In formula (6), i = 1, 2, L1 = u1(x,y), L2 = uedge(x,y), and x and y are the horizontal and vertical coordinates of the pixel in the image.
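Formulas (5) and (6) can be sketched as follows. The square-root combination and the min-max normalization to [0, 1] are the reconstructions assumed above for the formulas lost from the source:

```python
import numpy as np

def feature_channels(u1, u2, u3):
    """Build the two segmentation channels, formulas (5) and (6).

    u2 and u3 are taken to be the smoothed squared gradient fields
    (per formula (3)), so u_edge = sqrt(u2 + u3) approximates |grad I|;
    the min-max normalization to [0, 1] is an assumed reconstruction
    of formula (6).
    """
    u_edge = np.sqrt(np.maximum(u2, 0.0) + np.maximum(u3, 0.0))

    def normalize(L):
        lo, hi = L.min(), L.max()
        return (L - lo) / (hi - lo) if hi > lo else np.zeros_like(L)

    return normalize(np.asarray(u1, dtype=np.float64)), normalize(u_edge)
```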
Step 4: establish the two-channel texture segmentation active contour model.
The basic multi-channel C-V model is a region-based active contour model that can separate object and background even without distinct boundaries. Let the N feature channels of the original image be ui (i = 1, 2, ..., N), let C be the segmentation curve, and let ci+ and ci− be the means inside and outside the curve C in the i-th channel; then the multi-channel C-V energy model can be described as
F(c+, c−, C) = μ·Length(C) + (1/N)·Σ(i=1..N)[λi+∫inside(C)|ui(x,y) − ci+|²dxdy + λi−∫outside(C)|ui(x,y) − ci−|²dxdy]   (7)
In formula (7), μ is the length parameter with μ ≥ 0, λi+ and λi− are the parameters of the i-th feature channel, and the first term is the length of the curve C, which guarantees the smoothness of the evolving curve.
The C-V model drives the evolution of the curve C according to the average of the energies of all channels. In practice, however, not all features help to find the ideal segmentation curve; in particular, when different texture regions have similar gray values, or when the gray values within a single texture region differ strongly, the gray feature channel can lead to erroneous segmentation.
Suppose the means of the gray feature channel inside and outside the curve C are c1+ and c1−, respectively. When the gray difference |c1+ − c1−| is small, the proportion of the gray energy should be as small as possible, to reduce mis-segmentation between texture regions of similar gray level; as |c1+ − c1−| increases, the gray energy should gradually increase to drive the curve C toward the object boundary. Following an analysis of common membership functions, a sigmoid function is used as the adjustment coefficient of the gray energy term. The gray energy F1 is therefore built on the basis of the C-V model and expressed with the level set method as
F1 = ω(|c1+ − c1−|)·[λ1+∫Ω|u1'(x,y) − c1+|²H(φ(x,y))dxdy + λ1−∫Ω|u1'(x,y) − c1−|²(1 − H(φ(x,y)))dxdy]   (8)
In formula (8), c1+ and c1− are the gray means of the regions inside and outside the curve C in the gray feature channel, λ1+ and λ1− are the parameters of the gray feature channel, u1'(x,y) is the value of the gray feature channel at pixel (x,y), H(·) is the Heaviside function, and a is a constant used to adjust the shape of the function, with a > 0; a = 3 may be taken. The coefficient ω(d) = 1/(1 + e^(−a·d)) is a sigmoid function that varies with |c1+ − c1−|.
Let the means inside and outside C in the edge channel be c2+ and c2−, respectively; the edge energy F2 is established as
F2 = λ2+∫Ω|u2'(x,y) − c2+|²H(φ(x,y))dxdy + λ2−∫Ω|u2'(x,y) − c2−|²(1 − H(φ(x,y)))dxdy   (9)
In formula (9), c2+ and c2− are the gray means of the regions inside and outside the curve C in the edge feature channel, λ2+ and λ2− are the parameters of the edge feature channel, u2'(x,y) is the value of the edge feature channel at pixel (x,y), and H(·) is the Heaviside function.
Adding the curve length adjustment term, the new texture segmentation active contour model based on the edge and gray feature channels is
F = μ∫Ω δ(φ(x,y))|∇φ(x,y)|dxdy + αF1 + βF2   (10)
In formula (10), c1± and c2± are the means inside and outside the curve C in the gray feature channel and the edge feature channel, respectively; the curve C is C = {(x,y): φ(x,y) = 0}, where φ(x,y) is the level set function; α and β are the parameters of the gray term and the edge term, with α > 0 and β > 0; Ω is the integration domain, i.e. the image domain; δ(·) is the Dirac function; and ∇φ(x,y) is the gradient of the level set function φ(x,y).
In addition, in formulas (8) and (9), H(φ(x,y)) represents the integration region inside(C) of formula (7), and 1 − H(φ(x,y)) represents the integration region outside(C). The regularized forms shown in formulas (11) and (12) are adopted:
Hε(z) = (1/2)(1 + (2/π)arctan(z/ε))   (11)
δε(z) = (1/π)·ε/(ε² + z²)   (12)
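The regularized Heaviside and Dirac functions are the standard Chan-Vese forms, which are the reconstructions assumed here for formulas (11) and (12); note that δε is the exact derivative of Hε:

```python
import numpy as np

def heaviside_eps(z, eps=1.0):
    """Regularized Heaviside, the standard C-V form assumed for (11)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def dirac_eps(z, eps=1.0):
    """Regularized Dirac delta, derivative of heaviside_eps (formula (12))."""
    return (eps / np.pi) / (eps ** 2 + z ** 2)
```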
Step 5: complete the image segmentation by minimizing the texture segmentation model through the evolution of the level set function.
First, according to the model, i.e. formula (10), the level set function φ is fixed and the energy is differentiated with respect to the feature channel means ci+ and ci−; the calculus of variations gives the means of the two feature channels inside and outside the curve C as
ci+ = ∫Ω ui'(x,y)H(φ(x,y))dxdy / ∫Ω H(φ(x,y))dxdy,
ci− = ∫Ω ui'(x,y)(1 − H(φ(x,y)))dxdy / ∫Ω (1 − H(φ(x,y)))dxdy,  i = 1, 2   (13)(14)
Then ci+ and ci− are fixed and the energy is minimized with respect to φ; deriving the Euler-Lagrange equation of φ gives the curve evolution equation of the model as
∂φ/∂t = δε(φ)[μ div(∇φ/|∇φ|) − αω(|c1+ − c1−|)(λ1+(u1' − c1+)² − λ1−(u1' − c1−)²) − β(λ2+(u2' − c2+)² − λ2−(u2' − c2−)²)]   (15)
The curve evolution equation is discretized with the finite difference method; using forward differences, the discretized form of formula (15) is
φk+1(x,y) = φk(x,y) + Δt·L(φk(x,y))   (16)
where Δt is the time step and L(φk) is the numerical approximation of the right-hand side of formula (15). The curvature div(∇φ/|∇φ|) appearing in L(φk) is expressed as
κ = (φxx·φy² − 2φx·φy·φxy + φyy·φx²) / (φx² + φy²)^(3/2)
where the partial derivatives are approximated by differences on the grid, for example φx = (φ(i+1,j) − φ(i−1,j))/(2h) and φy = (φ(i,j+1) − φ(i,j−1))/(2h), h being the discrete grid spacing, commonly h = 1. With the difference representation of the level set function, formula (16) gives the discretized form of the curve evolution equation (15).
Summarizing the model-solving equations above, the specific steps of the level set evolution for the texture image model are as follows:
Step 501: randomly specify an initial closed segmentation curve C0 and compute the initial level set function φ0(x,y) corresponding to C0.
Step 502: set the model parameters μ, α, β and the channel parameters λ1±, λ2±; in general μ = 0.2 and the other parameters are taken as 1; the parameter values can be adjusted for different texture images.
Step 503: let k = 0 and compute the means inside and outside the initial closed segmentation curve C0.
The mean inside the initial closed segmentation curve C0 is computed as
ci+ = ∫Ω ui'(x,y)H(φ0(x,y))dxdy / ∫Ω H(φ0(x,y))dxdy;
the mean outside the initial closed segmentation curve C0 is computed as
ci− = ∫Ω ui'(x,y)(1 − H(φ0(x,y)))dxdy / ∫Ω (1 − H(φ0(x,y)))dxdy.
In the above two formulas, i = 1, 2; Ω is the integration domain, i.e. the image domain; u1'(x,y) is the value of the gray feature channel at pixel (x,y); u2'(x,y) is the value of the edge feature channel at pixel (x,y); and H(·) is the Heaviside function.
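The region means of step 503 can be sketched directly from these formulas, with the regularized Heaviside assumed for formula (11) replacing the ideal one:

```python
import numpy as np

def region_means(u, phi, eps=1.0):
    """Means of channel u inside/outside the zero level set (step 503).

    `u` is one feature channel and `phi` the current level set function;
    the regularized Heaviside used here is the standard C-V form assumed
    for formula (11).
    """
    h = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    c_in = (u * h).sum() / h.sum()            # mean inside the curve
    c_out = (u * (1 - h)).sum() / (1 - h).sum()  # mean outside the curve
    return c_in, c_out
```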
Step 504: iteratively compute φk+1(x,y) according to the formula φk+1(x,y) − φk(x,y) = Δt × L(φk(x,y)), i.e. one forward-difference step of the curve evolution equation (15), with L(φk) its discretized right-hand side.
Step 505: extract the zero level set from the level set function φk+1(x,y); this zero level set is the evolution curve.
Step 506: Determine whether the level set function φ^(k+1)(x, y) is stable: when the difference between the lengths of the evolution curves obtained in two consecutive iterations is smaller than a set threshold, φ^(k+1)(x, y) is stable, and step 507 is executed; otherwise, let k = k + 1 and return to step 504.
Step 507: Extract the evolution curve from the level set function φ^(k+1)(x, y) as the segmentation curve, and segment the image with this curve to complete the image segmentation process.
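Steps 503 through 507 can be sketched as one evolution loop. The exact speed function L(φ) of the patent is not reproduced here; this sketch assumes a standard two-channel Chan-Vese style update with a curvature (length) term weighted by μ and two data terms weighted by α and β, and uses the curve-length difference of step 506 as the stopping test:

```python
import numpy as np

def evolve(u1, u2, phi, mu=0.2, alpha=1.0, beta=1.0,
           dt=0.1, tol=0.5, max_iter=500):
    """Assumed two-channel Chan-Vese style evolution (steps 503-507):
    L(phi) = delta(phi) * (mu*kappa - alpha*data(u1) - beta*data(u2))."""
    def H(p):       # smoothed Heaviside
        return 0.5 * (1 + (2 / np.pi) * np.arctan(p))
    def delta(p):   # its derivative, a smoothed Dirac delta
        return 1.0 / (np.pi * (1 + p**2))
    def curvature(p):
        py, px = np.gradient(p)
        n = np.sqrt(px**2 + py**2) + 1e-8
        return np.gradient(px / n, axis=1) + np.gradient(py / n, axis=0)
    def length(p):  # contour length ~ integral of delta(phi)|grad(phi)|
        py, px = np.gradient(p)
        return (delta(p) * np.sqrt(px**2 + py**2)).sum()

    prev_len = length(phi)
    for _ in range(max_iter):
        h = H(phi)
        means = []
        for u in (u1, u2):          # step 503: per-channel region means
            means.append(((u * h).sum() / (h.sum() + 1e-12),
                          (u * (1 - h)).sum() / ((1 - h).sum() + 1e-12)))
        (c1i, c1o), (c2i, c2o) = means
        speed = (mu * curvature(phi)
                 - alpha * ((u1 - c1i)**2 - (u1 - c1o)**2)
                 - beta * ((u2 - c2i)**2 - (u2 - c2o)**2))
        phi = phi + dt * delta(phi) * speed    # step 504: explicit update
        cur_len = length(phi)                  # steps 505-506: stability test
        if abs(cur_len - prev_len) < tol:
            break
        prev_len = cur_len
    return phi  # step 507: the zero level set of phi is the segmentation curve
```

With constant feature channels the data terms vanish and the loop reduces to curvature flow, under which a circular initial curve shrinks, which is a convenient sanity check.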
The effect of the present invention can be further illustrated by the following simulation:
The method of the present invention is used to segment the zebra texture image shown in Figure 2; the extracted image edge features and grayscale features are shown in Figures 3a and 3b, respectively. In the image segmentation stage, the method disclosed in the present invention is compared with the basic multi-channel C-V model; the initial segmentation curve, the segmentation process, and the final segmentation curve are given in Figure 4.
As can be seen from Figures 3 and 4, the present invention takes the edge features as the main driving force in the early stage of curve evolution, so the evolving curve stops accurately at regions with distinct edges, such as the zebra's back. In regions with weak edge features, such as the zebra's head, tail, and limbs, the grayscale energy plays the leading role and keeps the curve evolving, correctly extracting the target region. Because the basic C-V model treats all gradient features and grayscale features equivalently, its curve is dominated by grayscale information during evolution, produces extensive mis-segmentation, and cannot obtain a correct segmentation result.
Figure 5 shows the feature maps and segmentation results obtained by the present invention for several typical texture images. The two feature channels contain sufficient information, and the resulting segmentation model exhibits good segmentation performance. Since texture segmentation is carried out in only two feature channels, the efficiency of the algorithm is significantly higher than that of traditional approaches such as Gabor filters and structure tensors.
In summary, the present invention avoids the tedious computation of multiple feature channels; the two-channel texture segmentation model overcomes the segmentation difficulty caused by similar gray levels in different texture regions, and achieves especially good segmentation on images with subtle background textures and distinct target texture structures. In addition, the present invention can be regarded as an unsupervised method with strong applicability, and is a very effective texture segmentation method.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.
Claims (4)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201310371336.4A (CN103413332B) | 2013-08-23 | 2013-08-23 | Based on the image partition method of two passage Texture Segmentation active contour models |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN103413332A | 2013-11-27 |
| CN103413332B | 2016-05-18 |
Family
ID=49606337
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123719B (en) * | 2014-06-03 | 2017-01-25 | 南京理工大学 | Method for carrying out infrared image segmentation by virtue of active outline |
CN105894496A (en) * | 2016-03-18 | 2016-08-24 | 常州大学 | Semi-local-texture-feature-based two-stage image segmentation method |
CN106296649B (en) * | 2016-07-21 | 2018-11-20 | 北京理工大学 | A kind of texture image segmenting method based on Level Set Models |
CN109961424B (en) * | 2019-02-27 | 2021-04-13 | 北京大学 | Hand X-ray image data generation method |
CN110490859A (en) * | 2019-08-21 | 2019-11-22 | 西安工程大学 | A kind of texture inhibits the fabric defect detection method in conjunction with Active contour |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976445A (en) * | 2010-11-12 | 2011-02-16 | 西安电子科技大学 | Level set SAR (Synthetic Aperture Radar) image segmentation method by combining edges and regional probability density difference |
CN102426700A (en) * | 2011-11-04 | 2012-04-25 | 西安电子科技大学 | Level set SAR image segmentation method based on local and global area information |
CN102426699A (en) * | 2011-11-04 | 2012-04-25 | 西安电子科技大学 | Level set SAR image segmentation method based on edge and region information |
Non-Patent Citations (3)

- Tony F. Chan et al., "Active contours without edges for vector-valued images," Journal of Visual Communication and Image Representation, vol. 11, no. 2, June 2000.
- Mikael Rousson et al., "Active unsupervised texture segmentation on a diffusion based feature space," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, June 2003 (sections 2-3 and Fig. 2 cited).
- Zhang Yu and Tan Debao, "Semi-automatic texture image segmentation using nonlinear diffusion," Geomatics and Information Science of Wuhan University, vol. 32, no. 4, April 2007.
Legal Events

| Code | Title |
| --- | --- |
| C06 / PB01 | Publication |
| C10 / SE01 | Entry into substantive examination |
| C14 / GR01 | Grant of patent or utility model |
| CF01 | Termination of patent right due to non-payment of annual fee |

Granted publication date: 2016-05-18; termination date: 2017-08-23