
CN113409335B - Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering - Google Patents

Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering

Info

Publication number
CN113409335B
CN113409335B (application CN202110693319.7A)
Authority
CN
China
Prior art keywords
membership
strong
weak
degree
pixel
Prior art date
Legal status
Active
Application number
CN202110693319.7A
Other languages
Chinese (zh)
Other versions
CN113409335A (en)
Inventor
赵凤
吝晓娟
刘汉强
Current Assignee
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications
Priority to CN202110693319.7A
Publication of CN113409335A
Application granted
Publication of CN113409335B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image segmentation method based on strong and weak joint semi-supervised intuitionistic fuzzy clustering, which mainly addresses the problems that existing image segmentation methods are sensitive to initial values, easily fall into local optima, and cannot handle linearly inseparable low-dimensional data. The scheme is as follows: input an image to be segmented, set the initial parameters, and manually scribble-mark the image; perform intuitionistic fuzzification on the image; design a strong and weak joint semi-supervised strategy to obtain the strong supervised membership, the weak supervised membership, and the initial cluster centers; introduce a kernel function, the strong supervised membership, and the weak supervised membership into the intuitionistic fuzzy clustering objective function to obtain a strong and weak joint semi-supervised kernel intuitionistic fuzzy clustering objective function; minimize the objective function with the Lagrange multiplier method to compute the optimal clustering solution; and classify the image pixels according to the maximum membership principle. The method reduces sensitivity to initial values, avoids local optima, and improves segmentation accuracy on linearly inseparable data; it can be used for natural image recognition.

Description

Image segmentation method based on strong and weak joint semi-supervised intuitionistic fuzzy clustering

Technical Field

The invention belongs to the field of digital image processing and in particular relates to an image segmentation method that can be used for natural image recognition and as a preprocessing step for computer vision.

Background Art

As a pivotal link between image processing and subsequent image understanding, image segmentation has long been a research hotspot and occupies an increasingly important position. The purpose of image segmentation is to divide an image, according to its own characteristics, into several non-overlapping sub-regions with different attributes; the pixels within each sub-region are similar to varying degrees, while the pixel features of different sub-regions differ significantly. In recent years, image segmentation technology has provided reliable and effective support in fields such as satellite remote sensing, intelligent security, autonomous driving, medical image processing, and biometric recognition. In practical applications, as segmentation scenes become increasingly complex, the performance requirements on segmentation techniques have become more and more stringent, and segmentation algorithms based on thresholds, regions, clustering, edges, and artificial neural networks have emerged one after another. Among them, clustering-based image segmentation has the advantages of low computational complexity, good stability, and fast running speed, and has attracted widespread attention. Commonly used clustering methods include hard clustering, fuzzy clustering, hierarchical clustering, density peak clustering, and spectral clustering. Fuzzy clustering, which is grounded in fuzzy set theory, assigns each sample a degree of membership to every class and can thus faithfully represent the "both this and that" character of objects in the real world; it has therefore received extensive attention from scholars.

In 1992, Liu Jianzhuang proposed a fuzzy clustering segmentation method for images based on two-dimensional histograms. It is an unsupervised, local-search-based clustering method that considers not only the gray-level information of each pixel but also the spatial correlation between a pixel and its neighborhood; the fuzzy C-means clustering objective function is constructed with the classic Euclidean distance, the pixel memberships are computed iteratively, and the segmentation is obtained from these memberships. This method has two problems when applied to image segmentation. First, it does not use the small amount of prior information that can be obtained manually, so its search for the optimal solution is blind and easily falls into local optima, leading to unsatisfactory segmentation of images with unevenly distributed backgrounds. Second, it does not account for the additional fuzziness and uncertainty in the image, making the segmentation of some fuzzy pixels inaccurate. For the first problem, Yasunori et al. proposed in 2009 to introduce supervised memberships into the fuzzy C-means algorithm and constructed a semi-supervised fuzzy C-means clustering algorithm, which uses a small amount of supervised information to guide the clustering process and improves segmentation accuracy. For the second problem, Chaira et al. found that introducing intuitionistic fuzzy set theory can capture more of the fuzziness of the data, making the classification of fuzzy data more accurate, and proposed an intuitionistic fuzzy clustering method based on intuitionistic fuzzy sets.

However, both of the above methods use the classic Euclidean distance to construct the fuzzy clustering objective function and only consider linearly separable data. In most image segmentation problems, the data to be processed are often linearly inseparable, so constructing the objective function with the classic Euclidean distance is unreasonable. To handle linearly inseparable cases, scholars introduced kernel functions, which map the linearly inseparable data in the original space into a higher-dimensional feature space where a linear function can separate them. In 2012, Li et al. proposed a proximity-based semi-supervised kernel fuzzy C-means (KFCM) data clustering algorithm. By effectively combining semi-supervision with KFCM, it can not only partition linearly inseparable data but also use the proximity between user-supplied data points to guide the clustering; its feasibility and superiority were verified by simulation experiments on synthetic data. However, because this method still does not consider the additional fuzziness of the data and does not make full use of artificial prior information, it remains sensitive to initial values, easily falls into local optima, and performs unsatisfactorily on images with unevenly distributed backgrounds.

Summary of the Invention

The purpose of the present invention is to address the shortcomings of the existing technologies and provide an image segmentation method based on strong and weak joint semi-supervised intuitionistic fuzzy clustering, so as to reduce the sensitivity to initial values, avoid falling into local optima, achieve the segmentation of low-dimensional linearly inseparable data, and improve the segmentation accuracy on images with unevenly distributed backgrounds.

To achieve the above objectives, the technical scheme of the present invention includes:

(1) Input the image X to be segmented and set the initial parameter values: the number of clusters k, the maximum number of iterations T = 100, and the termination threshold ε = 10^-5;

(2) Manually draw scribble marks on the image X to be segmented to obtain artificial prior information;

(3) Perform intuitionistic fuzzification on the image X to be segmented, and compute the membership μ(x_j), non-membership v(x_j), and hesitation π(x_j) of each pixel x_j;

(4) Use the SLIC algorithm to divide the image X into Q different sub-regions R = {R_1, R_2, …, R_i, …, R_Q}, where R_i denotes the i-th sub-region and the pixels within each sub-region are similar to varying degrees;

(5) Design a strong and weak joint semi-supervised strategy for class label transfer, and use the manually marked prior information to compute the image's strong supervised membership, weak supervised membership, and initial intuitionistic fuzzy cluster centers;

(5a) Take the manually marked pixels as strong labels Y_S; assign to all pixels in the superpixel region containing a strong label the same class label as that strong label, giving the weak labels Y_W after region label propagation; then convert the strong labels Y_S and the weak labels Y_W into the strong prior membership and the weak prior membership, respectively;

(5b) Use the strong prior membership and the weak prior membership to estimate the membership of the unlabeled pixels, obtaining the strong estimated membership and the weak estimated membership;

(5c) Merge the strong estimated membership and the weak estimated membership with their corresponding strong prior membership and weak prior membership, respectively, to obtain the strong supervised membership and the weak supervised membership after class label transfer;

(5d) Substitute the weak supervised membership into the cluster center formula to compute the initial cluster centers c_i(1), and then apply intuitionistic fuzzification to them to obtain the initial intuitionistic fuzzy cluster centers;

(6) Introduce the kernel function, the strong supervised membership, and the weak supervised membership into the intuitionistic fuzzy clustering objective function to design the strong and weak joint semi-supervised intuitionistic fuzzy clustering objective function J_LP-SKIFCM, which augments the kernel-based clustering term with a strong supervision term weighted by η_1 and a weak supervision term weighted by η_2;

here X* = {x_1*, x_2*, …, x_N*} denotes the intuitionistic fuzzy set representation of a color image with N pixels, x_j* is the intuitionistic fuzzy set representation of the j-th pixel x_j, k is the number of clusters, u_ij is the membership of pixel x_j to the i-th class and satisfies the sum-to-one constraint over the classes, c_i* is the intuitionistic fuzzy cluster center of the i-th class, μ(c_i), v(c_i), and π(c_i) are the membership, non-membership, and hesitation of the cluster center c_i, η_1 is the weight exponent of the strong supervision term, η_2 is the weight exponent of the weak supervision term, the strong supervised membership and the weak supervised membership of the j-th pixel for the i-th class enter the two supervision terms, and d_K²(x_j*, c_i*) is the intuitionistic fuzzy distance measure with the kernel function introduced;

(7) Minimize the objective function J_LP-SKIFCM with the Lagrange multiplier method to derive the update formulas for the membership u_ij and the intuitionistic fuzzy cluster centers, and iteratively compute u_ij and the intuitionistic fuzzy cluster centers according to these update formulas;

(8) Judge the iteration termination condition: if the change of the intuitionistic fuzzy cluster centers between two successive iterations is smaller than ε, or the iteration count t > T, obtain the membership matrix U and the intuitionistic fuzzy cluster centers and execute (9); otherwise, set t = t + 1, return to the iteration, and compute u_ij and the intuitionistic fuzzy cluster centers again according to the update formulas;

(9) Use the obtained membership matrix U to classify each pixel according to the maximum membership principle, obtain the clustering labels of the image pixels, and output the segmentation result of the image X.

Compared with the prior art, the present invention has the following beneficial technical effects:

First, the present invention designs a strong and weak joint semi-supervised strategy for class label transfer, which makes full use of the prior information that can be obtained manually to effectively guide the clustering process, solving the problem that intuitionistic fuzzy clustering algorithms are sensitive to initial values and easily fall into local optima.

Second, the present invention introduces a kernel function into the intuitionistic fuzzy clustering algorithm, which effectively handles the linearly inseparable cases that arise when intuitionistic fuzzy clustering is applied to image segmentation.

Third, the present invention uses the kernel function, the strong supervised membership, and the weak supervised membership to construct a strong and weak joint semi-supervised intuitionistic fuzzy clustering objective function, which improves the search and optimization capability and yields better segmentation results.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of the implementation of the present invention;

FIG. 2 is a comparison of the segmentation results of image No. 124084 from the Berkeley image database obtained by the present invention and by existing methods;

FIG. 3 is a comparison of the segmentation results of the image named "nopeeking" from the Weizmann image database obtained by the present invention and by existing methods.

DETAILED DESCRIPTION

The implementation and effects of the invention are described in further detail below with reference to the accompanying drawings.

Referring to FIG. 1, the implementation steps of the present invention are as follows:

Step 1: Input the image X to be segmented, set the initial parameter values, and draw the manual scribble marks.

1.1) Input the image X to be segmented, and set the number of clusters k, the maximum number of iterations T = 100, and the termination threshold ε = 10^-5;

1.2) On the image to be segmented, manually draw scribble marks for each of the k classes to be segmented, obtaining the artificial prior information.

Step 2: Perform intuitionistic fuzzification on the image X to be segmented, and compute the membership μ(x_j), non-membership v(x_j), and hesitation π(x_j) of each pixel x_j.

2.1) Compute the membership μ(x_j) of each pixel x_j of the image as:

μ(x_j) = (μ_R(x_j), μ_G(x_j), μ_B(x_j)),

where μ_R(x_j) is the membership of pixel x_j of the color image in the R channel, obtained by max–min normalization as μ_R(x_j) = (x_j^R − X_R^min) / (X_R^max − X_R^min), with X_R^max and X_R^min denoting the maximum and minimum values of image X in the R component;

μ_G(x_j) is the membership of pixel x_j in the G channel, computed as μ_G(x_j) = (x_j^G − X_G^min) / (X_G^max − X_G^min), with X_G^max and X_G^min denoting the maximum and minimum values of image X in the G component;

μ_B(x_j) is the membership of pixel x_j in the B channel, computed as μ_B(x_j) = (x_j^B − X_B^min) / (X_B^max − X_B^min), with X_B^max and X_B^min denoting the maximum and minimum values of image X in the B component.

2.2) Use the Sugeno intuitionistic fuzzy generator to compute the non-membership v(x_j) and hesitation π(x_j) of each pixel x_j:

v(x_j) = (1 − μ(x_j)) / (1 + δ·μ(x_j)),

π(x_j) = 1 − μ(x_j) − v(x_j),

where δ is a tunable parameter with value range (−1, ∞).
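A minimal Python sketch of this fuzzification step, assuming the max–min normalization and the Sugeno generator as reconstructed above (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def intuitionistic_fuzzify(image, delta=0.8):
    """Per-channel max-min normalization followed by the Sugeno generator.

    image: H x W x 3 RGB array; delta: Sugeno parameter in (-1, inf).
    Returns (mu, v, pi), each of shape H x W x 3.
    """
    x = image.astype(np.float64)
    # Membership: max-min normalization of each color channel.
    ch_min = x.min(axis=(0, 1), keepdims=True)
    ch_max = x.max(axis=(0, 1), keepdims=True)
    mu = (x - ch_min) / (ch_max - ch_min + 1e-12)
    # Non-membership via the Sugeno intuitionistic fuzzy generator.
    v = (1.0 - mu) / (1.0 + delta * mu)
    # Hesitation degree completes the intuitionistic fuzzy triple.
    pi = 1.0 - mu - v
    return mu, v, pi
```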

Step 3: Use the SLIC algorithm to divide the image X to be segmented into regions.

The SLIC algorithm divides the image X into Q different sub-regions R = {R_1, R_2, …, R_i, …, R_Q}, where R_i denotes the i-th sub-region and the pixels within each sub-region are similar to varying degrees.
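Step 3 can be sketched with scikit-image's SLIC implementation; the number of superpixels Q and the compactness value below are illustrative choices, not values taken from the patent:

```python
from skimage.segmentation import slic

def superpixel_regions(image, Q=200):
    """Partition an RGB image into about Q superpixel sub-regions with SLIC.

    Returns an H x W integer label map; label q marks sub-region R_q.
    """
    return slic(image, n_segments=Q, compactness=10, start_label=1)
```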

Step 4: Design the strong and weak joint semi-supervised strategy for class label transfer, and use the manually marked prior information to compute the image's strong supervised membership, weak supervised membership, and initial intuitionistic fuzzy cluster centers.

4.1) Take the manually marked pixels as strong labels Y_S; assign to all pixels in the superpixel region containing a strong label Y_S the same class label as that strong label, giving the weak labels Y_W after region label propagation; then convert the strong labels Y_S and the weak labels Y_W into the strong prior membership and the weak prior membership, respectively.

4.1.1) Convert the strong labels Y_S into the strong prior membership, distinguishing two kinds of pixels:

For a pixel x_u without a strong label, the corresponding membership is 0, i.e. its strong prior membership for every class i ∈ {1, 2, …, k} is 0;

For a pixel x_l with a strong label that belongs to the i-th class, its strong prior membership for the i-th class is 1, and its strong prior membership for every other class t ∈ {1, 2, …, k}, t ≠ i, is 0.

4.1.2) Convert the weak labels Y_W into the weak prior membership, distinguishing two kinds of pixels:

For a pixel x′_u without a weak label, the corresponding membership is 0, i.e. its weak prior membership for every class i ∈ {1, 2, …, k} is 0;

For a pixel x′_l with a weak label that belongs to the i-th class, its weak prior membership for the i-th class is 1, and its weak prior membership for every other class t ∈ {1, 2, …, k}, t ≠ i, is 0.
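Read this way, 4.1.1) and 4.1.2) amount to one-hot encoding of the scribbled pixels and propagating each scribble's class to its superpixel. A sketch under that reading (the array layout and the majority vote for a superpixel that happens to contain scribbles of several classes are assumptions):

```python
import numpy as np

def priors_from_scribbles(scribble, superpixels, k):
    """scribble: H x W int array, class in {0..k-1} for marked pixels, -1 otherwise.
    superpixels: H x W int array of SLIC region labels.
    Returns (strong_prior, weak_prior), each H x W x k one-hot membership maps.
    """
    H, W = scribble.shape
    strong_prior = np.zeros((H, W, k))
    marked = scribble >= 0
    strong_prior[marked, scribble[marked]] = 1.0      # strong labels Y_S -> one-hot

    # Region label propagation: every pixel of a superpixel that contains a
    # scribble inherits that scribble's class (weak labels Y_W).
    weak = np.full((H, W), -1, dtype=int)
    for r in np.unique(superpixels[marked]):
        classes = scribble[(superpixels == r) & marked]
        weak[superpixels == r] = np.bincount(classes, minlength=k).argmax()
    weak_prior = np.zeros((H, W, k))
    weak_prior[weak >= 0, weak[weak >= 0]] = 1.0
    return strong_prior, weak_prior
```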

4.2) Use the strong prior membership and the weak prior membership to estimate the membership of the unlabeled pixels, obtaining the strong estimated membership and the weak estimated membership.

4.2.1) Use the strong prior membership to compute the strong estimated membership: for a pixel x_u without a strong label, its strong estimated membership for the i-th class is obtained by weighting the strong prior memberships of the strongly labeled pixels x_l, l ∈ SL, according to the Euclidean distance d(x_l, x_u) between the strongly labeled pixel x_l and the unlabeled pixel x_u, where SL denotes the set of pixels with strong labels.

4.2.2) Use the weak prior membership to compute the weak estimated membership: for a pixel x′_u without a weak label, its weak estimated membership for the i-th class is obtained by weighting the weak prior memberships of the weakly labeled pixels x′_l, l ∈ WL, according to the Euclidean distance d(x′_l, x′_u) between the weakly labeled pixel x′_l and the unlabeled pixel x′_u, where WL denotes the set of pixels with weak labels.
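The exact estimation formulas of 4.2.1) and 4.2.2) appear only as images in the source; the sketch below therefore assumes a simple inverse-distance weighting of the labeled pixels' prior memberships, which matches the description (weights shrink with the Euclidean distance to each labeled pixel) but is not necessarily the patent's exact expression:

```python
import numpy as np

def estimate_membership(features, prior, labeled_mask, eps=1e-8):
    """Distance-weighted estimate of memberships for unlabeled pixels.

    features: N x d pixel feature vectors (e.g., RGB values).
    prior: N x k prior membership (one-hot rows for labeled pixels).
    labeled_mask: length-N boolean array, True where a label (strong or weak) exists.
    Returns an N x k array; unlabeled rows get the estimate, labeled rows keep the prior.
    """
    est = prior.copy()
    lab_f = features[labeled_mask]           # x_l, l in SL (or WL)
    lab_u = prior[labeled_mask]              # their prior memberships
    for j in np.where(~labeled_mask)[0]:
        d = np.linalg.norm(lab_f - features[j], axis=1)      # Euclidean distances
        w = 1.0 / (d + eps)                                   # assumed inverse-distance weights
        est[j] = (w[:, None] * lab_u).sum(axis=0) / w.sum()   # weighted average, row sums to 1
    return est
```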

4.3) Merge the strong estimated membership and the weak estimated membership with their corresponding strong prior membership and weak prior membership, respectively, to obtain the strong supervised membership and the weak supervised membership after class label transfer.

4.4) Use the weak supervised membership to compute the initial cluster centers c_i(1), taking each center as the weak-membership-weighted average of the pixel values.

4.5) Apply intuitionistic fuzzification to the initial cluster centers c_i(1) to obtain the initial intuitionistic fuzzy cluster centers.
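A sketch of 4.4)–4.5), assuming an FCM-style weighted mean for the initial centers and re-applying the Step 2 fuzzification scheme to the centers; the fuzzifier m and the parameter delta are assumed values:

```python
import numpy as np

def initial_centers(features, weak_membership, m=2.0, delta=0.8):
    """features: N x d pixel values; weak_membership: N x k weak supervised membership.

    Returns the k x d initial centers c_i(1) and their intuitionistic fuzzy triple.
    """
    w = weak_membership.T ** m                                   # k x N weights (FCM-style, assumed)
    centers = (w @ features) / (w.sum(axis=1, keepdims=True) + 1e-12)
    # Intuitionistic fuzzification of the centers, reusing the Step 2 scheme.
    f_min, f_max = features.min(axis=0), features.max(axis=0)
    mu = (centers - f_min) / (f_max - f_min + 1e-12)
    v = (1.0 - mu) / (1.0 + delta * mu)
    pi = 1.0 - mu - v
    return centers, (mu, v, pi)
```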

Step 5: Construct the strong and weak joint semi-supervised intuitionistic fuzzy clustering objective function J_LP-SKIFCM.

5.1) Define the kernel function k(x, y) as a Gaussian kernel:

k(x, y) = exp(−‖x − y‖² / σ²),

where σ is a scale parameter that controls the radial range of action;

5.2) Define the intuitionistic fuzzy clustering objective function J_IFCM as

J_IFCM = Σ_{i=1..k} Σ_{j=1..N} u_ij^m · d²(x_j*, c_i*),

where x_j* is the intuitionistic fuzzy set representation of pixel x_j, k is the number of clusters, N is the number of data points, u_ij is the membership of pixel x_j to the i-th class, m is the fuzziness exponent, c_i* is the intuitionistic fuzzy set representation of the cluster center c_i of the i-th class, and d²(x_j*, c_i*) is the intuitionistic Euclidean distance between x_j* and c_i*, computed from the differences of the membership, non-membership, and hesitation components of x_j* and c_i*.
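The intuitionistic Euclidean distance is given only as an image; a common definition, assumed here, sums the squared differences of the three components (some references additionally apply a 1/2 factor):

```python
import numpy as np

def ifs_distance_sq(x_ifs, c_ifs):
    """Squared intuitionistic Euclidean distance between two IFS triples.

    x_ifs, c_ifs: tuples (mu, v, pi) of equal-shaped arrays; the squared
    differences of the three components are summed (no 1/2 factor assumed).
    """
    return sum(np.sum((np.asarray(a) - np.asarray(b)) ** 2) for a, b in zip(x_ifs, c_ifs))
```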

5.3) Introduce the kernel function k(x, y), the strong supervised membership, and the weak supervised membership into the intuitionistic fuzzy clustering objective function J_IFCM to obtain the strong and weak joint semi-supervised intuitionistic fuzzy clustering objective function J_LP-SKIFCM, which augments the kernel-based clustering term with a strong supervision term weighted by η_1 and a weak supervision term weighted by η_2;

here X* = {x_1*, x_2*, …, x_N*} is the intuitionistic fuzzy set representation of a color image with N pixels, x_j* is the intuitionistic fuzzy set representation of the j-th pixel x_j, k is the number of clusters, u_ij is the membership of pixel x_j to the i-th class and satisfies the sum-to-one constraint over the classes, c_i* is the intuitionistic fuzzy cluster center of the i-th class, μ(c_i), v(c_i), and π(c_i) are the membership, non-membership, and hesitation of the cluster center c_i, η_1 is the weight exponent of the strong supervision term, η_2 is the weight exponent of the weak supervision term, the strong supervised membership and the weak supervised membership of the j-th pixel for the i-th class enter the two supervision terms, and d_K²(x_j*, c_i*) is the kernel-induced intuitionistic fuzzy distance measure, built from the Gaussian radial basis function k(x, y) = exp(−‖x − y‖²/σ²) with scale parameter σ.
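The kernel-induced distance d_K² also appears only as an image; the sketch below follows the usual kernel-FCM construction d_K²(x, c) = K(x, x) + K(c, c) − 2K(x, c) = 2(1 − K(x, c)), with the Gaussian kernel applied to the stacked (μ, v, π) vectors. This should be read as an assumption rather than the patent's exact definition:

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Gaussian radial basis function K(x, y) = exp(-||x - y||^2 / sigma^2)."""
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / sigma ** 2)

def kernel_ifs_distance_sq(x_ifs, c_ifs, sigma):
    """Assumed kernel-induced squared distance between two IFS triples:
    stack (mu, v, pi) into one vector and use 2 * (1 - K)."""
    x_vec = np.concatenate([np.atleast_1d(a).ravel() for a in x_ifs])
    c_vec = np.concatenate([np.atleast_1d(a).ravel() for a in c_ifs])
    return 2.0 * (1.0 - gaussian_kernel(x_vec, c_vec, sigma))
```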

Step 6: Minimize the objective function J_LP-SKIFCM with the Lagrange multiplier method to derive the update formulas for the membership u_ij and the intuitionistic fuzzy cluster centers.

6.1) Take the partial derivative of the objective function J_LP-SKIFCM with respect to the membership u_ij and set it to zero to obtain the update formula for the membership u_ij.

6.2) Take the partial derivatives of the objective function J_LP-SKIFCM with respect to the cluster centers and set them to zero to obtain the update formulas for the membership component μ(c_i), the non-membership component v(c_i), and the hesitation component π(c_i) of the intuitionistic fuzzy cluster centers; these formulas involve, respectively, the kernel measure of pixel x_j with respect to the cluster center c_i under its membership, its non-membership, and its hesitation component.

Step 7: Iteratively compute the membership u_ij and the intuitionistic fuzzy cluster centers to obtain the membership matrix U and the final intuitionistic fuzzy cluster centers.

7.1) Initialize the iteration counter t = 1;

7.2) According to the update formulas for the membership u_ij and the intuitionistic fuzzy cluster centers derived in Step 6, compute u_ij and the intuitionistic fuzzy cluster centers at each iteration;

7.3) Compute the difference between the cluster centers of two successive iterations, Z = ‖c_i*(t) − c_i*(t−1)‖, where c_i*(t) denotes the intuitionistic fuzzy cluster center at the t-th iteration and c_i*(t−1) the intuitionistic fuzzy cluster center at the (t−1)-th iteration;

7.4) Compare the difference Z from 7.3) with the termination threshold ε, or compare the iteration count t with the maximum number of iterations T, to judge the termination condition:

If Z < ε or t > T, obtain the membership matrix U and the intuitionistic fuzzy cluster centers, and go to Step 8;

Otherwise, set t = t + 1 and return to 7.2).
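Steps 6–7 alternate membership and center updates until the centers stop moving. Since the closed-form update expressions are given only as images, the loop below leaves them as hypothetical helper callables (update_membership, update_centers) and only shows the alternating-optimization skeleton with the stated stopping rule (center change below ε, or t > T):

```python
import numpy as np

def run_clustering(update_membership, update_centers, init_centers, eps=1e-5, T=100):
    """Generic alternating optimization skeleton for the LP-SKIFCM iteration.

    update_membership(centers) -> N x k membership matrix U
    update_centers(U) -> new cluster centers (array-like)
    Both callables are placeholders for the patent's closed-form updates.
    """
    centers = np.asarray(init_centers, dtype=float)
    U = None
    for t in range(1, T + 1):
        U = update_membership(centers)                  # 7.2: membership update
        new_centers = np.asarray(update_centers(U), dtype=float)
        Z = np.linalg.norm(new_centers - centers)       # 7.3: center change
        centers = new_centers
        if Z < eps:                                     # 7.4: termination test
            break
    return U, centers
```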

Step 8: Output the segmentation result of image X.

Classify each pixel according to the maximum membership principle using the obtained membership matrix U, i.e. take the class label corresponding to the largest membership value in each column of U as the class of the pixel at that position, obtain the clustering labels of the whole image, and output the segmentation result of image X.
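Step 8 is an argmax over the membership matrix; a short sketch, assuming U is arranged as N pixels × k classes (the description's "each column" wording suggests the transposed layout, so adjust the axis accordingly):

```python
import numpy as np

def classify_pixels(U, height, width):
    """Assign each pixel the class with the largest membership and
    reshape the labels back to the image grid."""
    labels = np.argmax(U, axis=1)        # maximum membership principle
    return labels.reshape(height, width)
```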

The technical effects of the present invention are further illustrated below by simulation experiments.

1. Simulation conditions:

The simulation experiments were carried out on a computer with an Intel(R) Core(TM) i5-4258U CPU @ 2.40 GHz 2.10 GHz and 8 GB of memory, in the MATLAB R2019a software environment.

2. Simulation content:

Simulation 1: the present invention and the existing KFCM, IFCM, sSFCM, SSFC-SC, and eSFCM methods are used to segment image No. 124084 from the Berkeley image database; the results are shown in FIG. 2, where:

2(a) is the original of image 124084;

2(b) is the manual scribble mark map of image 124084;

2(c) is the region label expansion map of image 124084;

2(d) is the standard segmentation map of image 124084;

2(e) is the segmentation result of image 124084 by the existing KFCM method;

2(f) is the segmentation result of image 124084 by the existing sSFCM method;

2(g) is the segmentation result of image 124084 by the existing SSFC-SC method;

2(h) is the segmentation result of image 124084 by the existing eSFCM method;

2(i) is the segmentation result of image 124084 by the method of the present invention.

As can be seen from FIG. 2, the present invention can completely separate the target from the background for images with unevenly distributed backgrounds and is insensitive to the initial cluster centers; its segmentation results are clearly better than those of the existing KFCM, IFCM, sSFCM, SSFC-SC, and eSFCM methods.

Simulation 2: the present invention and the existing KFCM, IFCM, sSFCM, SSFC-SC, and eSFCM methods are used to segment the image named "nopeeking" from the Weizmann image database; the results are shown in FIG. 3, where:

3(a) is the original of the nopeeking image;

3(b) is the standard segmentation map of the nopeeking image;

3(c) is the nopeeking image corrupted by salt-and-pepper noise with intensity 0.05;

3(d) is the segmentation result of the nopeeking image by the existing KFCM method;

3(e) is the segmentation result of the nopeeking image by the existing IFCM method;

3(f) is the segmentation result of the nopeeking image by the existing sSFCM method;

3(g) is the segmentation result of the nopeeking image by the existing SSFC-SC method;

3(h) is the segmentation result of the nopeeking image by the existing eSFCM method;

3(i) is the segmentation result of the nopeeking image by the method of the present invention.

As can be seen from FIG. 3, the present invention can completely separate the target from the background for images with unevenly distributed backgrounds and is insensitive to the initial cluster centers; its segmentation results are clearly better than those of the existing KFCM, IFCM, sSFCM, SSFC-SC, and eSFCM methods.

Claims (9)

1. An image segmentation method based on strong and weak joint semi-supervised intuitionistic fuzzy clustering, characterized by comprising the following steps:

(1) Inputting an image X to be segmented and setting the initial parameter values: the number of clusters k, the maximum number of iterations T = 100, and the termination threshold ε = 10^-5;

(2) Manually marking the image X to be segmented to obtain artificial prior information;

(3) Performing intuitionistic fuzzification on the image X to be segmented to obtain the membership μ(x_j), non-membership v(x_j), and hesitation π(x_j) of each pixel x_j of the image;

(4) Dividing the image X to be segmented into Q different sub-regions R = {R_1, R_2, …, R_i, …, R_Q} by using the SLIC algorithm, wherein R_i denotes the i-th sub-region and the pixels within each sub-region are similar to varying degrees;

(5) Designing a strong and weak joint semi-supervised strategy for class label transfer, and using the manually marked prior information to obtain the image's strong supervised membership, weak supervised membership, and initial intuitionistic fuzzy cluster centers;

(5a) Taking the manually marked pixels as strong labels Y_S, assigning to all pixels in the superpixel region containing a strong label the same class label as that strong label to form the weak labels Y_W after region label propagation, and then converting the strong labels Y_S and the weak labels Y_W into the strong prior membership and the weak prior membership, respectively;

(5b) Using the strong prior membership and the weak prior membership to estimate the membership of the unlabeled pixels, obtaining the strong estimated membership and the weak estimated membership;

(5c) Merging the strong estimated membership and the weak estimated membership with their corresponding strong prior membership and weak prior membership, respectively, to obtain the strong supervised membership and the weak supervised membership after class label transfer;

(5d) Substituting the weak supervised membership into the cluster center formula to compute the initial cluster centers c_i(1), and then applying intuitionistic fuzzification to them to obtain the initial intuitionistic fuzzy cluster centers;

(6) Introducing the kernel function, the strong supervised membership, and the weak supervised membership into the intuitionistic fuzzy clustering objective function to design the strong and weak joint semi-supervised intuitionistic fuzzy clustering objective function J_LP-SKIFCM, wherein X* = {x_1*, x_2*, …, x_N*} denotes the intuitionistic fuzzy set representation of a color image with N pixels, x_j* is the intuitionistic fuzzy set representation of the j-th pixel x_j, k is the number of clusters, u_ij is the membership of pixel x_j to the i-th class and satisfies the sum-to-one constraint over the classes, c_i* is the intuitionistic fuzzy cluster center of the i-th class, μ(c_i), v(c_i), and π(c_i) are the membership, non-membership, and hesitation of the cluster center c_i, η_1 is the weight exponent of the strong supervision term, η_2 is the weight exponent of the weak supervision term, the strong supervised membership and the weak supervised membership of the j-th pixel for the i-th class enter the two supervision terms, and d_K²(x_j*, c_i*) is the intuitionistic fuzzy distance measure with the kernel function introduced;

in the strong and weak joint semi-supervised intuitionistic fuzzy clustering objective function J_LP-SKIFCM, the kernel-induced intuitionistic fuzzy distance measure d_K²(x_j*, c_i*) is defined through a Gaussian radial basis function whose scale parameter σ is computed from the distances between the j-th pixel x_j and the i-th cluster center c_i;

(7) Minimizing the objective function J_LP-SKIFCM by the Lagrange multiplier method to obtain the update formulas for the membership u_ij and the intuitionistic fuzzy cluster centers, and iteratively computing u_ij and the intuitionistic fuzzy cluster centers according to the update formulas;

(8) Judging the iteration termination condition: if the change of the intuitionistic fuzzy cluster centers is smaller than ε or the number of iterations t > T, obtaining the membership matrix U and the intuitionistic fuzzy cluster centers and executing (9); otherwise, setting t = t + 1, returning to the iteration, and computing u_ij and the intuitionistic fuzzy cluster centers again according to the update formulas;

(9) Classifying each pixel with the obtained membership matrix U according to the maximum membership principle to obtain the clustering labels of the image pixels, and outputting the segmentation result of the image X.
2. The method according to claim 1, wherein in step (3) the membership μ(x_j) of each pixel x_j of the image is obtained as:

μ(x_j) = (μ_R(x_j), μ_G(x_j), μ_B(x_j)),

wherein μ_R(x_j) is the membership of pixel x_j of the color image in the R channel, obtained by max–min normalization with the maximum and minimum values of image X in the R component; μ_G(x_j) is the membership of pixel x_j in the G channel, obtained with the maximum and minimum values of image X in the G component; and μ_B(x_j) is the membership of pixel x_j in the B channel, obtained with the maximum and minimum values of image X in the B component.
3. The method according to claim 1, wherein in step (3) the non-membership v(x_j) and hesitation π(x_j) of each pixel x_j are obtained with the Sugeno intuitionistic fuzzy generator as:

v(x_j) = (1 − μ(x_j)) / (1 + δ·μ(x_j)),

π(x_j) = 1 − μ(x_j) − v(x_j),

wherein δ is a tunable parameter with value range (−1, ∞).
4. The method according to claim 1, wherein in (5a) the strong labels Y_S are converted into the strong prior membership by distinguishing two kinds of pixels:

for a pixel x_u without a strong label, the corresponding membership is 0, i.e. its strong prior membership for every class i ∈ {1, 2, …, k} is 0;

for a pixel x_l with a strong label that belongs to the i-th class, its strong prior membership for the i-th class is 1, and its strong prior membership for every other class t ∈ {1, 2, …, k}, t ≠ i, is 0.
5. The method according to claim 1, wherein in (5a) the weak labels Y_W are converted into the weak prior membership by distinguishing two kinds of pixels:

for a pixel x′_u without a weak label, the corresponding membership is 0, i.e. its weak prior membership for every class i ∈ {1, 2, …, k} is 0;

for a pixel x′_l with a weak label that belongs to the i-th class, its weak prior membership for the i-th class is 1, and its weak prior membership for every other class t ∈ {1, 2, …, k}, t ≠ i, is 0.
6. The method according to claim 1, wherein in (5b) the strong prior membership is used to obtain the strong estimated membership: for a pixel x_u without a strong label, its strong estimated membership for the i-th class is obtained by weighting the strong prior memberships of the strongly labeled pixels x_l, l ∈ SL, according to the Euclidean distance between the strongly labeled pixel x_l and the unlabeled pixel x_u, wherein SL denotes the set of pixels with strong labels.
7. The method according to claim 1, wherein in (5b) the weak prior membership is used to obtain the weak estimated membership: for a pixel x′_u without a weak label, its weak estimated membership for the i-th class is obtained by weighting the weak prior memberships of the weakly labeled pixels x′_l, l ∈ WL, according to the Euclidean distance between the weakly labeled pixel x′_l and the unlabeled pixel x′_u, wherein WL denotes the set of pixels with weak labels.
8. The method according to claim 1, wherein the update formula for the membership u_ij in (7) is the one obtained by setting to zero the partial derivative of the objective function J_LP-SKIFCM with respect to u_ij.
9. The method according to claim 1, wherein the update formulas for the intuitionistic fuzzy cluster centers in (7) give, respectively, the membership, non-membership, and hesitation components of each center, and involve the kernel measure of pixel x_j with respect to the cluster center c_i under its membership, its non-membership, and its hesitation component, respectively.
CN202110693319.7A 2021-06-22 2021-06-22 Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering Active CN113409335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110693319.7A CN113409335B (en) 2021-06-22 2021-06-22 Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110693319.7A CN113409335B (en) 2021-06-22 2021-06-22 Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering

Publications (2)

Publication Number Publication Date
CN113409335A CN113409335A (en) 2021-09-17
CN113409335B true CN113409335B (en) 2023-04-07

Family

ID=77682370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110693319.7A Active CN113409335B (en) 2021-06-22 2021-06-22 Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering

Country Status (1)

Country Link
CN (1) CN113409335B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114266321A (en) * 2021-12-31 2022-04-01 广东泰迪智能科技股份有限公司 Weak supervision fuzzy clustering algorithm based on unconstrained prior information mode
CN115439688B (en) * 2022-09-01 2023-06-16 哈尔滨工业大学 Weak supervision object detection method based on surrounding area sensing and association
CN118397389B (en) * 2024-04-16 2024-10-29 常熟市第一人民医院 Semi-supervised clustering method for brain obstruction focus image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301644A (en) * 2017-06-09 2017-10-27 西安电子科技大学 Natural image non-formaldehyde finishing method based on average drifting and fuzzy clustering
CN108062757A (en) * 2018-01-05 2018-05-22 北京航空航天大学 It is a kind of to utilize the method for improving Intuitionistic Fuzzy Clustering algorithm extraction infrared target

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9519868B2 (en) * 2012-06-21 2016-12-13 Microsoft Technology Licensing, Llc Semi-supervised random decision forests for machine learning using mahalanobis distance to identify geodesic paths
CN103456017B (en) * 2013-09-08 2016-07-06 西安电子科技大学 Image partition method based on the semi-supervised weight Kernel fuzzy clustering of subset
US11205103B2 (en) * 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis
CN109070040B (en) * 2016-12-16 2023-07-28 布里格姆及妇女医院股份有限公司 Systems and methods of protein corona sensor arrays for early detection of disease
US11042814B2 (en) * 2017-03-17 2021-06-22 Visa International Service Association Mixed-initiative machine learning systems and methods for determining segmentations
CN109145921B (en) * 2018-08-29 2021-04-09 江南大学 An Image Segmentation Method Based on Improved Intuitive Fuzzy C-Means Clustering
CN109949314B (en) * 2019-02-23 2022-10-14 西安邮电大学 Multi-target fast fuzzy clustering color image segmentation method based on semi-supervised learning and histogram statistics
CN110211126B (en) * 2019-06-12 2022-06-03 西安邮电大学 Image segmentation method based on intuitive fuzzy C-means clustering
CN110473204A (en) * 2019-06-18 2019-11-19 常熟理工学院 A kind of interactive image segmentation method based on weak link constraint
US11416772B2 (en) * 2019-12-02 2022-08-16 International Business Machines Corporation Integrated bottom-up segmentation for semi-supervised image segmentation
CN112966779A (en) * 2021-03-29 2021-06-15 安徽大学 PolSAR image semi-supervised classification method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301644A (en) * 2017-06-09 2017-10-27 西安电子科技大学 Natural image non-formaldehyde finishing method based on average drifting and fuzzy clustering
CN108062757A (en) * 2018-01-05 2018-05-22 北京航空航天大学 It is a kind of to utilize the method for improving Intuitionistic Fuzzy Clustering algorithm extraction infrared target

Also Published As

Publication number Publication date
CN113409335A (en) 2021-09-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant