
CN110751662B - Image segmentation method and system for quantum-behaved particle swarm optimization fuzzy C-means - Google Patents

Image segmentation method and system for quantum-behaved particle swarm optimization fuzzy C-means

Info

Publication number
CN110751662B
Authority
CN
China
Prior art keywords
image
value
particle
subset
segmentation
Prior art date
Legal status
Active
Application number
CN201910998269.6A
Other languages
Chinese (zh)
Other versions
CN110751662A (en)
Inventor
赵晶
王晓莉
Current Assignee
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date
Filing date
Publication date
Application filed by Qilu University of Technology
Priority to CN201910998269.6A
Publication of CN110751662A
Application granted
Publication of CN110751662B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20172 Image enhancement details

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image segmentation method and system based on quantum-behaved particle swarm optimized fuzzy C-means, comprising the following steps: an acquisition step: acquiring an image to be processed and converting it into a neutrosophic image; an image preprocessing step: denoising the neutrosophic image and then performing an image enhancement operation on the denoised result; an information entropy calculation step: calculating the element information entropy of the image set I from the enhanced result; an image segmentation step: if the ratio of the information entropies of adjacent elements is smaller than a set threshold, segmenting the neutrosophic image with the quantum-behaved particle swarm optimized fuzzy C-means algorithm to obtain the image segmentation result; otherwise, returning to the image preprocessing step.

Description

Image Segmentation Method and System Based on Quantum-Behaved Particle Swarm Optimized Fuzzy C-Means

Technical Field

The present disclosure relates to the technical field of image segmentation, and in particular to an image segmentation method and system based on quantum-behaved particle swarm optimized fuzzy C-means.

Background Art

The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.

Images carry vivid and rich information and play an extremely important role in the multimedia information era. An image can directly imitate or faithfully describe the objective existence of things. Image segmentation is an important preprocessing step for image recognition and computer vision: without correct segmentation there can be no correct recognition, and it is the key step from image processing to image analysis. Image segmentation is in increasing demand in military, remote sensing, meteorology, communication, transportation, and medical imaging applications.

In the process of realizing the present disclosure, the inventors found the following technical problems in the prior art:

Generally, image segmentation decomposes a complete image into several regions with the same or different features and extracts the objects of interest from these regions. When a computer performs segmentation automatically, it encounters various difficulties; for example, segmentation errors often occur because of uneven illumination, noise, unclear parts of the image, and shadows. Image segmentation is therefore a technique that requires further research.

SUMMARY OF THE INVENTION

In order to overcome the deficiencies of the prior art, the present disclosure provides an image segmentation method and system based on quantum-behaved particle swarm optimized fuzzy C-means.

In a first aspect, the present disclosure provides an image segmentation method based on quantum-behaved particle swarm optimized fuzzy C-means.

The image segmentation method based on quantum-behaved particle swarm optimized fuzzy C-means includes:

an acquisition step: acquiring an image to be processed and converting it into a neutrosophic image;

an image preprocessing step: denoising the neutrosophic image and then performing an image enhancement operation on the denoised result;

an information entropy calculation step: calculating the element information entropy of the image set I from the enhanced result;

an image segmentation step: if the ratio of the information entropies of adjacent elements is smaller than a set threshold, segmenting the neutrosophic image with the quantum-behaved particle swarm optimized fuzzy C-means algorithm to obtain the image segmentation result; otherwise, returning to the image preprocessing step.
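As a hedged illustration of how these four steps fit together, the following Python sketch organizes the pipeline; the helper names (`to_neutrosophic`, `alpha_mean`, `beta_enhance`, `entropy_I`, `qpso_fcm_segment`) are placeholders for operations defined later in this disclosure and are sketched further below, and the exact stopping test on the entropy of I is an assumption, since the disclosure states only that the ratio of adjacent entropies must fall below a set threshold.

```python
import numpy as np

def segment_image(image, entropy_ratio_threshold=0.001, max_rounds=20):
    """Hedged sketch of the disclosed pipeline: neutrosophic transform,
    alpha-mean denoising, beta enhancement, entropy check, then
    QPSO-optimized fuzzy C-means segmentation."""
    T, I, F = to_neutrosophic(image)              # acquisition step
    prev_entropy = entropy_I(I)
    for _ in range(max_rounds):                   # image preprocessing step
        T, I, F = alpha_mean(T, I, F)             # denoising
        T, I, F = beta_enhance(T, I, F)           # image enhancement
        cur_entropy = entropy_I(I)                # information entropy step
        # assumed stopping test: relative change of the entropy of I
        if abs(cur_entropy - prev_entropy) / (prev_entropy + 1e-12) < entropy_ratio_threshold:
            break
        prev_entropy = cur_entropy
    return qpso_fcm_segment(T, I)                 # image segmentation step
```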

In a second aspect, the present disclosure further provides an image segmentation system based on quantum-behaved particle swarm optimized fuzzy C-means.

The image segmentation system based on quantum-behaved particle swarm optimized fuzzy C-means includes:

an acquisition module configured to acquire an image to be processed and convert it into a neutrosophic image;

an image preprocessing module configured to denoise the neutrosophic image and then perform an image enhancement operation on the denoised result;

an information entropy calculation module configured to calculate the element information entropy of the image set I from the enhanced result;

an image segmentation module configured to segment the neutrosophic image with the quantum-behaved particle swarm optimized fuzzy C-means algorithm and obtain the image segmentation result if the ratio of the information entropies of adjacent elements is smaller than a set threshold, and otherwise to return to the image preprocessing module.

In a third aspect, the present disclosure further provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and run on the processor, wherein the steps of the method of the first aspect are completed when the computer instructions are run by the processor.

In a fourth aspect, the present disclosure further provides a computer-readable storage medium for storing computer instructions, wherein the steps of the method of the first aspect are completed when the computer instructions are executed by a processor.

Compared with the prior art, the beneficial effects of the present disclosure are:

The invention proposes a neutrosophic image segmentation method based on the quantum-behaved particle swarm optimized fuzzy C-means algorithm. The fuzzy clustering algorithm is improved by combining its simplicity and effectiveness with the global optimization ability of QPSO. Experiments verify that the algorithm is feasible and has good noise resistance; its overall segmentation result is better than that of FCM, the segmentation boundaries are clearer, and multiple runs are not needed to select the best result. The algorithm effectively solves the problem that FCM depends strongly on the initial values and easily falls into local optima, and it has strong global search ability and a good segmentation effect. Although a good segmentation result can be obtained accurately, the single-run time of the program is not noticeably shortened.

Description of the Drawings

The accompanying drawings, which form a part of the present application, are provided for further understanding of the present application; the exemplary embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application.

FIG. 1 is a flow chart of the method of Embodiment 1 of the present disclosure;

FIG. 2(a) is the warship image (240×320) of Embodiment 1 of the present disclosure;

FIG. 2(b) is the neutrosophic image of Embodiment 1 of the present disclosure after the α-mean operation and the image enhancement operation;

FIG. 2(c) is the fuzzy C-means segmentation result based on FIG. 2(b) in Embodiment 1 of the present disclosure;

FIG. 2(d) is the improved particle swarm optimized fuzzy C-means segmentation result based on FIG. 2(b) in Embodiment 1 of the present disclosure.

Detailed Description of the Embodiments

It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless otherwise defined, all technical and scientific terms used in this disclosure have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.

It should be noted that the terminology used herein is only for describing specific embodiments and is not intended to limit the exemplary embodiments according to the present application. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms as well; furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

Embodiment 1: this embodiment provides an image segmentation method based on quantum-behaved particle swarm optimized fuzzy C-means.

As shown in FIG. 1, the image segmentation method based on quantum-behaved particle swarm optimized fuzzy C-means includes:

an acquisition step: acquiring an image to be processed and converting it into a neutrosophic image;

an image preprocessing step: denoising the neutrosophic image and then performing an image enhancement operation on the denoised result;

an information entropy calculation step: calculating the element information entropy of the image set I from the enhanced result;

an image segmentation step: if the ratio of the information entropies of adjacent elements is smaller than a set threshold, segmenting the neutrosophic image with the quantum-behaved particle swarm optimized fuzzy C-means algorithm to obtain the image segmentation result; otherwise, returning to the image preprocessing step.

As one or more embodiments, the neutrosophic image is segmented with the quantum-behaved particle swarm optimized fuzzy C-means algorithm to obtain the image segmentation result; the specific steps include:

S41: initialize the number of clusters C, the fuzziness parameter m, the particle swarm size N, and the maximum number of iterations MaxIt; the number of cluster centers is the dimension of each particle;

S42: initialize and encode the N cluster centers to form N first-generation particles; the number of cluster centers corresponds to the dimension of each particle; the pbest of each particle is its current position, and gbest is the best position among all particles in the current population;

S43: calculate each cluster center C(k) and the membership center vector U(k);

S44: calculate the fitness of each particle; if the fitness of a particle is better than the fitness of that particle's current best position, update the best position of that particle; if the fitness of the best position among all particles is better than the fitness of the current global best position, update the global best position;

S45: update the position of each particle to generate a new particle swarm;

S46: if the current number of iterations reaches the previously set maximum number, stop iterating and take the best solution found in the last generation; otherwise, return to S43.

Further, in step S45, equations (30)-(33) are used to update the position of each particle and generate a new particle swarm:

pi,j(t) = φj(t)·pbesti,j(t) + [1 − φj(t)]·gbestj(t)  (30)

Xi,j(t+1) = pi,j(t) ± α·|Cj(t) − Xi,j(t)|·ln[1/ui,j(t)],  ui,j(t) ~ U(0,1)  (31)

The parameter C in formula (31) is called the mean best position, denoted mbest; it is the centroid of the personal best positions of all particles:

Cj(t) = (1/N)·Σi=1..N pbesti,j(t)  (32)

Here pi,j(t) is the potential well (local attractor) of the i-th particle in the j-th dimension at iteration t; its position lies in the hyper-rectangle whose vertices are the personal best position pbestj(t) and the global best position gbest(t), and it changes as pbest and gbest change. φj(t) and ui,j(t) are random numbers uniformly distributed on [0,1] for the j-th dimension at iteration t, Xi,j(t+1) is the position of the i-th particle in the j-th dimension at iteration t+1, Cj(t) is a component of C(t), and α is the contraction-expansion coefficient of QPSO, whose value is determined by formula (33):

α = (α1 − α2)·(MaxIt − t)/MaxIt + α2  (33)

where α1 and α2 are the initial and final values of the parameter α, respectively, t is the current iteration number, and MaxIt is the maximum number of iterations allowed; the value of α decreases from 1.0 at the start of the search to 0.5 at the end.
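For illustration only, a minimal NumPy sketch of the particle update in equations (30)-(33) is given below; the array names (`X`, `pbest`, `gbest`) and the random-number handling are assumptions made for the sketch, not identifiers taken from the disclosure.

```python
import numpy as np

def qpso_update(X, pbest, gbest, t, max_it, alpha1=1.0, alpha2=0.5, rng=np.random):
    """One QPSO position update following equations (30)-(33).
    X:      (N, D) current particle positions
    pbest:  (N, D) personal best positions
    gbest:  (D,)   global best position
    """
    N, D = X.shape
    alpha = (alpha1 - alpha2) * (max_it - t) / max_it + alpha2   # eq. (33)
    C = pbest.mean(axis=0)                                       # mbest, eq. (32)
    phi = rng.uniform(size=(N, D))
    p = phi * pbest + (1.0 - phi) * gbest                        # local attractor, eq. (30)
    u = rng.uniform(size=(N, D)) + 1e-12
    sign = np.where(rng.uniform(size=(N, D)) < 0.5, -1.0, 1.0)
    # eq. (31): new position around the attractor with a log-scaled jump
    return p + sign * alpha * np.abs(C - X) * np.log(1.0 / u)
```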

As one or more embodiments, the neutrosophic image includes a neutrosophic subset image T, a neutrosophic subset image I, and a neutrosophic subset image F:

the neutrosophic subset image T is the truth representation of the original image;

the neutrosophic subset image I is the indeterminacy representation of the original image;

the neutrosophic subset image F is the falsity representation of the original image.

As one or more embodiments, the image to be processed is converted into a neutrosophic image; the specific steps include:

calculating the neutrosophic subset image T from the local mean of the original image pixel values;

calculating the neutrosophic subset image I from the absolute value of the difference between the original pixel value and the local mean of the pixel values;

calculating the neutrosophic subset image F from the neutrosophic subset image T.

As one or more embodiments, the image to be processed is converted into a neutrosophic image by the following formulas:

PNS = {T, I, F}  (1)

T(i,j) = (ḡ(i,j) − ḡmin) / (ḡmax − ḡmin)  (2)

ḡ(i,j) = (1/(w×w)) · Σ g(m,n), summed over the w×w neighbourhood of (i,j)  (3)

I(i,j) = (δ(i,j) − δmin) / (δmax − δmin)  (4)

δ(i,j) = |g(i,j) − ḡ(i,j)|  (5)

F(i,j) = 1 − T(i,j)  (6)

where PNS is a pixel of the image in the NS (neutrosophic) domain; T(i,j), I(i,j) and F(i,j) are the values of the neutrosophic subset images T, I and F at point (i,j); ḡ(i,j) is the mean of g over the w×w neighbourhood of (i,j), and ḡmin and ḡmax are its minimum and maximum values; g(m,n) is the gray value at point (m,n) within the w×w neighbourhood; and δ(i,j) is the absolute value of the difference between the pixel value g(i,j) and its local mean ḡ(i,j), with δmin and δmax its minimum and maximum values.
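A hedged sketch of this conversion in Python follows, assuming `g` is a grayscale image stored as a float array and using a w×w box filter for the local mean; the function and parameter names are illustrative, not from the disclosure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def to_neutrosophic(g, w=3):
    """Convert a grayscale image g into neutrosophic subset images T, I, F
    following equations (1)-(6): T from the local mean, I from the local
    deviation, F as the complement of T."""
    g = g.astype(np.float64)
    g_mean = uniform_filter(g, size=w)                                     # eq. (3)
    T = (g_mean - g_mean.min()) / (g_mean.max() - g_mean.min() + 1e-12)    # eq. (2)
    delta = np.abs(g - g_mean)                                             # eq. (5)
    I = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)        # eq. (4)
    F = 1.0 - T                                                            # eq. (6)
    return T, I, F
```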

As one or more embodiments, the neutrosophic image is denoised; the specific steps are given by the α-mean operation:

P̄NS(α) = {T̄(α), Ī(α), F̄(α)}  (11)

T̄(α) = T if I < α;  T̄(α) = T̄α if I ≥ α  (12)

T̄α(i,j) = (1/(w×w)) · Σ T(m,n), summed over the w×w neighbourhood of (i,j)  (13)

F̄(α) = F if I < α;  F̄(α) = F̄α if I ≥ α  (14)

F̄α(i,j) = (1/(w×w)) · Σ F(m,n), summed over the w×w neighbourhood of (i,j)  (15)

Īα(i,j) = (δ̄α(i,j) − δ̄α,min) / (δ̄α,max − δ̄α,min)  (16)

δ̄α(i,j) = |T̄α(i,j) − mean of T̄α over the w×w neighbourhood of (i,j)|  (17)

where P̄NS(α) is the set of image pixels in the NS domain after the α-mean operation; T̄(α), Ī(α) and F̄(α) are the α-mean subsets of the neutrosophic subset images T, I and F, respectively; T is the set of truth values of the original image and I is the set of its indeterminacy values; α takes the value 0.85; w×w is the size of the local window, (i,j) is a pixel of the original image, and (m,n) ranges over the neighbourhood pixels in the w×w window; T(m,n) and F(m,n) are the values of the neutrosophic subset images T and F at point (m,n); T̄α(i,j) and F̄α(i,j) are the local mean intensity values used by the α-mean operation for T and F; Īα(i,j) is the resulting indeterminacy value; δ̄α(i,j) is the absolute value of the difference between T̄α(i,j) and its local mean over the w×w window; and δ̄α,min and δ̄α,max are the minimum and maximum values of δ̄α.

It should be understood that the purpose of the denoising step is to reduce the noise points and the pixels with high indeterminacy in the image, so that the distribution of the image pixel information becomes more reasonable and more suitable for subsequent processing.
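A hedged sketch of this α-mean denoising step follows; the window handling and the recomputation of the indeterminacy subset track the definitions above, α = 0.85 comes from the text, and everything else (names, default window size) is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def alpha_mean(T, I, F, alpha=0.85, w=3):
    """alpha-mean operation (equations (11)-(17) as reconstructed above):
    replace T and F by their w x w local means wherever the
    indeterminacy I is at least alpha, then recompute I from T."""
    T_mean = uniform_filter(T, size=w)
    F_mean = uniform_filter(F, size=w)
    T_a = np.where(I >= alpha, T_mean, T)                    # eqs. (12)-(13)
    F_a = np.where(I >= alpha, F_mean, F)                    # eqs. (14)-(15)
    delta = np.abs(T_a - uniform_filter(T_a, size=w))        # eq. (17)
    I_a = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)   # eq. (16)
    return T_a, I_a, F_a
```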

As one or more embodiments, an image enhancement operation is performed on the denoised result; the specific steps are given by the β-enhancement operation:

P̂NS(β) = {T̂(β), Î(β), F̂(β)}  (18)

T̂(β) = T if I < β;  T̂(β) = T̂β if I ≥ β  (19)

T̂β(i,j) = 2·T(i,j)² if T(i,j) ≤ 0.5;  T̂β(i,j) = 1 − 2·(1 − T(i,j))² if T(i,j) > 0.5  (20)

F̂(β) = F if I < β;  F̂(β) = F̂β if I ≥ β  (21)

F̂β(i,j) = 2·F(i,j)² if F(i,j) ≤ 0.5;  F̂β(i,j) = 1 − 2·(1 − F(i,j))² if F(i,j) > 0.5  (22)

Îβ(i,j) = (δ′(i,j) − δ′min) / (δ′max − δ′min)  (23)

δ′(i,j) = |T̂β(i,j) − mean of T̂β over the w×w neighbourhood of (i,j)|  (24)

where P̂NS(β) is the neutrosophic set after the β-enhancement operation; T̂(β) and F̂(β) are the truth and falsity subsets after β enhancement, and T̂β(i,j) and F̂β(i,j) are their enhanced values where the indeterminacy satisfies I(i,j) ≥ β; Î(β) is the indeterminacy subset after enhancement; β takes the value 0.85; δ′(i,j) is the absolute value of the difference between the enhanced value T̂β(i,j) and its mean over the w×w window, and δ′min and δ′max are the minimum and maximum values of δ′.

It should be understood that the purpose of the enhancement step is to improve the visual effect of the image, enlarge the feature differences between different objects in the image (for example the target and the background), and improve the recognition effect of the image.
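A hedged sketch of the β-enhancement step under the reconstruction above follows; the piecewise intensification of T and F and the recomputation of I are consistent with the symbol definitions, while the names and defaults are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def beta_enhance(T, I, F, beta=0.85, w=3):
    """beta-enhancement (equations (18)-(24) as reconstructed above):
    intensify T and F where the indeterminacy I is at least beta,
    then recompute the indeterminacy subset from the enhanced T."""
    def intensify(x):
        # classic fuzzy intensification: push values away from 0.5
        return np.where(x <= 0.5, 2.0 * x**2, 1.0 - 2.0 * (1.0 - x)**2)

    T_b = np.where(I >= beta, intensify(T), T)               # eqs. (19)-(20)
    F_b = np.where(I >= beta, intensify(F), F)               # eqs. (21)-(22)
    delta = np.abs(T_b - uniform_filter(T_b, size=w))        # eq. (24)
    I_b = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)   # eq. (23)
    return T_b, I_b, F_b
```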

As one or more embodiments, the element information entropy of the image set I is calculated from the enhanced result; the specific steps include:

EnI = − Σi pI(i)·ln pI(i)

where EnI is the entropy of the neutrosophic subset image I, and pI(i) is the probability of element i in the neutrosophic subset image I.
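As an illustration, the entropy of the indeterminacy subset can be estimated from a histogram of its quantized values; the choice of 256 bins is an assumption, not something specified in the disclosure.

```python
import numpy as np

def entropy_I(I, bins=256):
    """Information entropy of the indeterminacy subset image I:
    En_I = -sum_i p_I(i) * ln p_I(i), with p_I estimated from a histogram."""
    hist, _ = np.histogram(I, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * ln 0 := 0)
    return float(-(p * np.log(p)).sum())
```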

In order to overcome the limitations of general image segmentation methods and improve the ability to express and process the uncertainty information of an image, extensions of fuzzy theory such as interval-valued fuzzy sets, intuitionistic fuzzy sets, and interval-valued intuitionistic fuzzy sets have been proposed. Fuzzy theory uses membership functions to describe the degree to which an element belongs to a class, so it can express and handle fuzziness and the uncertainty of the segmentation process well. Among these, neutrosophic theory is a new extended fuzzy theory that generalizes classical fuzzy theory and the related extended fuzzy theories.

Neutrosophy studies a real thing, theory, proposition, concept, or entity "A" together with its opposite "Anti-A", its negation "Non-A", and "Neut-A" (that which is neither "A" nor "Anti-A"), and the relations among them. Neutrosophy is the basis of neutrosophic logic, neutrosophic probability, neutrosophic sets, and neutrosophic statistics.

The basic idea of neutrosophy is that any proposition has t% truth, i% indeterminacy, and f% falsity.

Fuzzy segmentation techniques based on fuzzy theory can effectively describe the fuzziness of an image and are well suited to handling the uncertainty in medical images that require accurate segmentation. With the continuous improvement of fuzzy theory, its application in image segmentation has become increasingly active and has become a research hotspot.

The main basis of image segmentation is the similarity and discontinuity of gray levels. Image segmentation algorithms can be divided into region-based algorithms, edge-based algorithms, and methods combined with specific theoretical tools. Region-based image segmentation separates the object from the background by choosing an appropriate threshold; this approach is simple and intuitive, but it only considers the gray-level information and not the spatial information of the image, and it is very sensitive to noise and gray-level inhomogeneity. Edge-based image segmentation completes the segmentation by detecting the edges between different homogeneous regions; its difficulty lies in the contradiction between noise resistance and detection accuracy, and although many improved multi-scale edge detection methods have been proposed, the results are still not ideal. Segmentation methods combined with specific theoretical tools apply existing theoretical methods to image segmentation, such as the FCM method based on cluster analysis. The fuzzy C-means (FCM) algorithm based on cluster analysis has attracted more and more attention and has been widely used in various fields because of its simple design, wide solution range, and easy computer implementation. However, the traditional FCM algorithm does not consider spatial information during image segmentation and is sensitive to noise and gray-level inhomogeneity.

The fuzzy C-means algorithm has been widely used because of its simple implementation, but it is very sensitive to the initial parameters. Manual intervention in the parameters is needed to approach the global optimal solution and improve the segmentation speed; the algorithm is limited by the choice of the initial cluster centers, easily falls into local optima, and is rather sensitive to noise. Classical improvements to FCM fall mainly into two categories: improving the objective function of FCM, and introducing other intelligent algorithms into the iterative solution process. In recent years, many scholars have tried various methods to solve the problem that the fuzzy C-means algorithm falls into local extrema because of an improper choice of initial values; for example, combining FCM with particle swarm optimization improves the quality of image segmentation to a certain extent.

The particle swarm optimization (PSO) algorithm was proposed by Kennedy and Eberhart in 1995. PSO is simple to compute and easy to implement; however, because the flight speed of the particles cannot be controlled effectively in the later stage of evolution, the algorithm easily overshoots the optimal solution, which reduces its convergence speed and accuracy. To address these shortcomings, Sun Jun et al., starting from quantum mechanics, used the uncertainty principle to describe the motion state of the particles and established a new PSO model, the quantum-behaved particle swarm optimization (QPSO) algorithm. In QPSO, each individual is regarded as a particle without weight or volume in a D-dimensional search space that can appear at any position in the space with a certain probability, which gives the algorithm global convergence. The algorithm has few parameters and outperforms the original PSO in search ability.

In order to solve the problems of the FCM algorithm, quantum-behaved particle swarm optimization (QPSO) is applied to fuzzy C-means clustering.

FCM obtains the membership of each data sample to all class centers by optimizing an objective function, and thus determines the class of each sample so as to classify the data automatically. Since the FCM algorithm essentially uses gradient descent to find the optimal solution, it can become trapped in local optima. In view of this, the present disclosure introduces a particle swarm algorithm that can guarantee global convergence, the quantum-behaved particle swarm algorithm, to form a fuzzy clustering algorithm based on evolutionary computation.

(1) Neutrosophic images

Classical fuzzy segmentation algorithms find it difficult to obtain good results when dealing with complex problems with high uncertainty. Therefore, neutrosophic theory is introduced into the image to generate a neutrosophic image and enhance the ability to express the uncertain information in the image.

A neutrosophic image is the representation of an image in the neutrosophic domain. The basic method is to use neutrosophic theory to transform the original digital image and its related features into the neutrosophic domain, thereby obtaining the neutrosophic image. A neutrosophic image can better express and describe the fuzziness and uncertainty of an image; it can use not only the gray-level information but also the edge and spatial information to solve problems that fuzzy segmentation algorithms cannot solve.

Definition 1 (neutrosophic image): Let the image be P, let the universe of discourse be U, and let W be the complete set of image pixels, a non-empty subset of U. The image PNS has three membership sets T, I, and F:

PNS = {T, I, F}  (1)

where the subset image T is the truth representation of the original image, the subset image I is its indeterminacy representation, and the subset image F is its falsity representation.

The feature information used in the neutrosophic image is the gray value of the image pixels. A pixel P in the image is described as P(t,i,f) and belongs to W in the following way: it is t% true, i% indeterminate, and f% false, where t ∈ T, i ∈ I, f ∈ F. A pixel P(i,j) in the image domain is transformed into the neutrosophic domain.

In general, the standard deviation is used to express the uncertainty of data. Since the uncertainty of the target and background regions is much smaller than that of the edges, I is defined here from the standard deviation and the discontinuity of the image gray levels. The standard deviation represents the variation within a local region of the image, and the discontinuity represents abrupt changes of gray level. The background and target regions of an image are homogeneous, while a blurred edge changes gradually from the target to the background; accordingly, the mean, standard deviation, and range of the pixel values in a region are used to describe T, I, and F.

In the image transformation, the pixel (i,j) is restricted to a small region of size w×w. The transformation into a neutrosophic image is the process of obtaining the three sets T, I, and F, which are calculated by the following formulas:

T(i,j) = (ḡ(i,j) − ḡmin) / (ḡmax − ḡmin)  (2)

ḡ(i,j) = (1/(w×w)) · Σ g(m,n), summed over the w×w neighbourhood of (i,j)  (3)

I(i,j) = (δ(i,j) − δmin) / (δmax − δmin)  (4)

δ(i,j) = |g(i,j) − ḡ(i,j)|  (5)

F(i,j) = 1 − T(i,j)  (6)

where ḡ(i,j) is the local mean of the image over the w×w neighbourhood of (i,j), and δ(i,j) is the absolute value of the difference between the pixel value g(i,j) and its local mean ḡ(i,j).

(2) Information entropy of a neutrosophic image

The information entropy of an image describes how concentrated the gray-level distribution of its pixels is; in a neutrosophic image, the information entropy is used to describe this aggregation characteristic of the pixel gray-level distribution.

Definition 2 (information entropy of a neutrosophic image): The information entropy of a neutrosophic image is defined as the sum of the entropies of the three sets T, I, and F, and is used to evaluate the distribution of the elements in the neutrosophic domain:

EnNS = EnT + EnI + EnF  (7)

EnT = − Σi pT(i)·ln pT(i)  (8)

EnI = − Σi pI(i)·ln pI(i)  (9)

EnF = − Σi pF(i)·ln pF(i)  (10)

where EnT, EnI and EnF are the entropies of the sets T, I and F, respectively, and pT(i), pI(i) and pF(i) are the probabilities of element i in the three sets T, I and F. EnT and EnF describe the distribution of the elements in the neutrosophic sets, and EnI describes the distribution of the indeterminacy.

According to the concept of neutrosophy, changes of the elements in the sets T and F affect the indeterminacy distribution of the set I. The larger the entropy of T and the smaller the entropy of I, the more uniform the gray-level distribution of the image and the more reasonable the distribution of gray values. In order to increase the entropy of the set T and reduce the entropy of the set I, a new operation, the α-mean operation, is proposed for neutrosophic image denoising; in the neutrosophic domain it uses the value of a pixel in the indeterminacy set to decide whether to update the current pixel value.

When noise is present in an image, segmentation is easily disturbed by it, the segmentation result deteriorates, and image analysis and understanding are affected. To solve this problem, the present disclosure uses α-mean filtering to remove the image noise and facilitate the subsequent segmentation.

Definition 3 (α-mean operation): the α-mean operation on PNS, denoted P̄NS(α), is defined as:

P̄NS(α) = {T̄(α), Ī(α), F̄(α)}  (11)

T̄(α) = T if I < α;  T̄(α) = T̄α if I ≥ α  (12)

T̄α(i,j) = (1/(w×w)) · Σ T(m,n), summed over the w×w neighbourhood of (i,j)  (13)

F̄(α) = F if I < α;  F̄(α) = F̄α if I ≥ α  (14)

F̄α(i,j) = (1/(w×w)) · Σ F(m,n), summed over the w×w neighbourhood of (i,j)  (15)

Īα(i,j) = (δ̄α(i,j) − δ̄α,min) / (δ̄α,max − δ̄α,min)  (16)

δ̄α(i,j) = |T̄α(i,j) − mean of T̄α over the w×w neighbourhood of (i,j)|  (17)

where T̄(i,j) and F̄(i,j) are the means of T(i,j) and F(i,j) over the w×w neighbourhood, and δ̄α(i,j) is the absolute value of the difference between T̄α(i,j) and its mean over the same neighbourhood.

After the α-mean operation, the entropy of the indeterminacy subset I decreases and the distribution of the elements in I becomes more uneven; this unevenness reduces the indeterminacy of the neutrosophic set PNS. The noise points and the pixels with high indeterminacy in the image are reduced, and the distribution of the image pixel information becomes more reasonable and more suitable for subsequent processing.

(4) Image enhancement operation

In the neutrosophic image segmentation process, transforming the image into a neutrosophic image and performing the α-mean operation blur the image contours, so image enhancement is used to address this. The main purpose of image enhancement is to improve the contrast of the image and highlight the information of interest; it can reduce image blur, improve image quality, further strengthen the edge information of objects, and weaken the non-edge information. Using the element-enhancement principle of fuzzy sets, a β-enhancement operation is applied to the T image of the neutrosophic image PNS to obtain T̂(β), which makes the blurred image contours clearer. The enhancement formulas are:

P̂NS(β) = {T̂(β), Î(β), F̂(β)}  (18)

T̂(β) = T if I < β;  T̂(β) = T̂β if I ≥ β  (19)

T̂β(i,j) = 2·T(i,j)² if T(i,j) ≤ 0.5;  T̂β(i,j) = 1 − 2·(1 − T(i,j))² if T(i,j) > 0.5  (20)

F̂(β) = F if I < β;  F̂(β) = F̂β if I ≥ β  (21)

F̂β(i,j) = 2·F(i,j)² if F(i,j) ≤ 0.5;  F̂β(i,j) = 1 − 2·(1 − F(i,j))² if F(i,j) > 0.5  (22)

Îβ(i,j) = (δ′(i,j) − δ′min) / (δ′max − δ′min)  (23)

δ′(i,j) = |T̂β(i,j) − mean of T̂β over the w×w neighbourhood of (i,j)|  (24)

where δ′(i,j) is the absolute value of the difference between the enhanced value T̂β(i,j) and its mean over the w×w neighbourhood, and δ′min and δ′max are its minimum and maximum values.

(5) Fuzzy C-means algorithm

The key to image segmentation based on the FCM clustering algorithm is how to unify the mathematical form of the image with that of the FCM clustering algorithm. To this end, the image is regarded as a sample set, each pixel in the image as a clustering sample, and the features of the pixel as the feature vector of the sample, and the pixels are clustered in the feature space. Pixels with the same or similar features are grouped into one class as far as possible; each pixel is then labeled with its class, which completes the segmentation of the image.

For the neutrosophic subsets, a new clustering method is defined, which processes P̂NS, i.e. the neutrosophic subsets after the α-mean and image enhancement operations.

Considering the effect of the indeterminacy, the two sets T and I are combined into a new clustering value:

[Equations (25)-(26): the combined clustering value formed from T and I; given as formula images in the original publication.]

An improved fuzzy C-means algorithm for neutrosophic sets is proposed. The objective function is defined as:

[Equations (27)-(29): the objective function of the improved FCM and the membership and cluster-center update rules; given as formula images in the original publication.]

The FCM algorithm obtains a fuzzy classification of the sample set by iteratively optimizing the objective function.

The algorithm is sensitive to the initial values and depends heavily on the choice of the initial cluster centers. When the initial cluster centers deviate strongly from the globally optimal ones, FCM easily falls into a local minimum, and this disadvantage becomes more obvious when the number of clusters is large.
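For reference, one iteration of the standard FCM updates (membership and cluster centers) that this scheme builds on can be sketched as follows; this is textbook FCM on one-dimensional samples, not the patent's exact improved objective, whose formulas (27)-(29) are available only as images above.

```python
import numpy as np

def fcm_step(x, centers, m=2.0, eps=1e-12):
    """One standard FCM iteration on 1-D samples x (e.g. flattened pixel
    features) given current cluster centers; returns (U, new_centers)."""
    # distances of every sample to every center, shape (C, n)
    d = np.abs(centers[:, None] - x[None, :]) + eps
    # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
    U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
    # cluster-center update: c_i = sum_j u_ij^m x_j / sum_j u_ij^m
    Um = U ** m
    new_centers = (Um @ x) / Um.sum(axis=1)
    return U, new_centers
```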

(6) Improved PSO-FCM

The gradient-descent-based FCM algorithm is essentially a local search algorithm; it easily falls into local minima and is sensitive to the initial values, i.e. different initial values may lead to different clustering results. The QPSO algorithm has global search ability and does not easily become trapped in a local region. Therefore, the present disclosure proposes a new hybrid clustering algorithm for neutrosophic image segmentation (QPSO-FCM), which combines the QPSO algorithm with FCM and applies it to neutrosophic images for segmentation.

QPSO uses the concepts of "population" and "evolution" and operates according to the fitness of the individuals (particles). Suppose that in a multi-dimensional search space there is a swarm of particles, and the position of the i-th particle is represented as the vector Xi = (xi1, xi2, ..., xiD). In each iteration a particle updates itself by tracking two best positions: one is the best solution found by the particle itself, the personal best position pi = (pi1, pi2, ..., piD); the other is the best solution found so far by the whole swarm, the global best position pg. After these two extrema are found, the mean best position (mbest), i.e. C(t), is taken as the average of the best positions of all particles. The particles then move according to the following formulas to search for the best solution of the problem.

pi,j(t) = φj(t)·pbesti,j(t) + [1 − φj(t)]·gbestj(t)  (30)

Xi,j(t+1) = pi,j(t) ± α·|Cj(t) − Xi,j(t)|·ln[1/ui,j(t)],  ui,j(t) ~ U(0,1)  (31)

Cj(t) = (1/N)·Σi=1..N pbesti,j(t)  (32)

Here pi,j(t) is the local attractor of the i-th particle in the j-th dimension at iteration t (1 ≤ j ≤ D), computed from the personal and global best positions as in equation (30), and α is the contraction-expansion coefficient of QPSO. It is an important parameter for the convergence of QPSO, and its value is determined by the following formula:

α = (α1 − α2)·(MaxIt − t)/MaxIt + α2  (33)

where α1 and α2 are the initial and final values of the parameter, respectively, t is the current iteration number, and MaxIt is the maximum number of iterations allowed; the value of α decreases from 1.0 at the start of the search to 0.5 at the end.

In the application of the QPSO-FCM algorithm, the fitness function of each individual in QPSO is defined as follows:

[Equation (34): the fitness function Jm(U,C); given as a formula image in the original publication.]

where Jm(U,C) is the fitness function of the image, k is the gray level, k ranges from 0 to C, and C here denotes the maximum gray value.
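Since equation (34) is available only as an image, the following sketch shows one common way such a fitness could be evaluated, namely a histogram-weighted FCM objective over the gray levels; this form is an assumption made for illustration, not the patent's exact definition.

```python
import numpy as np

def fitness(gray_values, hist, centers, U, m=2.0):
    """Histogram-weighted FCM objective used as a particle fitness
    (an assumed form; the disclosure gives formula (34) only as an image).
    gray_values: (L,) gray levels k; hist: (L,) pixel counts per level;
    centers: (C,) cluster centers encoded by the particle; U: (C, L) memberships."""
    d2 = (centers[:, None] - gray_values[None, :]) ** 2      # squared distances
    return float(np.sum(hist[None, :] * (U ** m) * d2))      # smaller is better
```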

The QPSO-based FCM algorithm generates the next generation of solutions stochastically, so it can readily search for the global optimum. Moreover, each generation of solutions has the dual advantages of self-improvement and learning from others, so the algorithm converges quickly.

(7) Neutrosophic image segmentation based on the QPSO-FCM algorithm

A new neutrosophic image segmentation algorithm, the QPSO-FCM algorithm, is proposed; it includes the following steps:

Step 1: input the image;

Step 2: transform the image into the NS domain using equations (1)-(6);

Step 3: perform the α-mean operation using equations (11)-(17);

Step 4: perform the image enhancement operation using equations (18)-(24);

Step 5: calculate the entropy of the indeterminacy subset I using equation (9);

Step 6: if the ratio of the information entropies of adjacent elements is smaller than the set threshold, go to Step 7; otherwise return to Step 3;

Step 7: apply the improved FCM algorithm to the neutrosophic set:

(1) Initialize the number of clusters C, the fuzziness parameter m, the particle swarm size N, and the maximum number of iterations MaxIt. The number of cluster centers is the dimension of each particle.

(2) Initialize and encode the N cluster centers to form N first-generation particles. The number of cluster centers corresponds to the dimension of each particle. The pbest of each particle is its current position, and gbest is the best position among all particles in the current population.

(3) At step k, calculate each cluster center C(k) and the membership center vector U(k) using equations (28)-(29).

(4) Calculate the fitness of each particle according to equation (34). If the fitness of a particle is better than the fitness of that particle's current best position, update the best position of that particle. If the fitness of the best position among all particles is better than the fitness of the current global best position, update the global best position.

(5) Update the position of each particle using equations (30)-(33) to generate a new particle swarm.

(6) If the current number of iterations reaches the previously set maximum, stop iterating and take the best solution found in the last generation; otherwise repeat from sub-step (3).

Step 8: Obtain the image segmentation result.
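
To make Step 7 concrete, the clustering stage can be sketched as follows. This is a simplified illustration rather than the patented implementation: the center and membership refinement of equations (28)-(29) is folded into the fitness evaluation (the `fcm_fitness` sketch given earlier), the ± sign of the QPSO update is realized with a random sign, and the names and parameter defaults are illustrative assumptions.

```python
import numpy as np

def qpso_fcm(pixels, n_clusters=5, m=2.0, swarm_size=20, max_it=100,
             alpha1=1.0, alpha2=0.5, seed=None):
    """Sketch of the QPSO-FCM clustering stage: each particle encodes one
    candidate set of cluster centers and is moved by the QPSO update."""
    rng = np.random.default_rng(seed)
    lo, hi = float(pixels.min()), float(pixels.max())
    # sub-steps (1)-(2): initialize the swarm; one particle = n_clusters centers
    X = rng.uniform(lo, hi, size=(swarm_size, n_clusters))
    pbest = X.copy()
    pbest_fit = np.array([fcm_fitness(pixels, x, m) for x in X])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    gbest_fit = float(pbest_fit.min())

    for t in range(max_it):
        # sub-steps (3)-(4): evaluate every particle, update personal/global bests
        fit = np.array([fcm_fitness(pixels, x, m) for x in X])
        improved = fit < pbest_fit
        pbest[improved] = X[improved]
        pbest_fit[improved] = fit[improved]
        if pbest_fit.min() < gbest_fit:
            gbest = pbest[np.argmin(pbest_fit)].copy()
            gbest_fit = float(pbest_fit.min())

        # sub-step (5): QPSO position update, equations (30)-(33)
        alpha = (alpha1 - alpha2) * (max_it - t) / max_it + alpha2   # (33)
        C = pbest.mean(axis=0)                                       # (32) mean best position
        phi = rng.uniform(size=X.shape)
        p = phi * pbest + (1.0 - phi) * gbest                        # (30) local attractor
        u = rng.uniform(size=X.shape)
        sign = np.where(rng.uniform(size=X.shape) < 0.5, 1.0, -1.0)
        X = np.clip(p + sign * alpha * np.abs(C - X) * np.log(1.0 / u), lo, hi)  # (31)
    return gbest
```

The returned centers can then be turned into a segmentation map by assigning every pixel to its nearest center, which corresponds to Step 8.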

The QPSO-FCM segmentation algorithm is applied to the denoised neutrosophic versions of various real images and compared with the neutrosophic denoised-image segmentation method based on fuzzy C-means.

In both the FCM image segmentation method and the QPSO-FCM image segmentation method, the number of cluster centers is set to 5 in the present invention so that the two methods can be compared with each other.

As can be seen from Fig. 2(a)-Fig. 2(d), the warship boundary in Fig. 2(d) is clearer than in Fig. 2(c), especially at the front of the ship, where the segmented boundary in Fig. 2(d) is sharper; the warship boundary in Fig. 2(c) is blurred.

The experimental data in Fig. 2(a)-Fig. 2(d) show that applying QPSO combined with FCM to the neutrosophic image yields a clearer segmentation than applying FCM alone to the neutrosophic image.

The ultimate goal of image segmentation is to obtain the target image accurately in a single run, so new ways of testing whether a method is good or bad are constantly needed. The data obtained from a large number of experiments show that FCM is strongly affected by its initial values, so one can only test whether the global optimum can be found; among many runs, the best-performing results (those with relatively clear boundaries) can be identified. When the QPSO-FCM method is applied to neutrosophic images, the global optimum can be found in a single run, the segmentation boundaries are clearer, the handling of blurred regions is greatly improved, and the same global optimum is obtained even when the experiment is repeated.

However, because the data in this method must be computed repeatedly, the running time of the program is not significantly reduced. In line with the no-free-lunch theorem, good and accurate segmentation is obtained at the cost of a longer running time.

Embodiment 2: this embodiment further provides an image segmentation system for quantum-behaved particle swarm optimized fuzzy C-means.

The image segmentation system for quantum-behaved particle swarm optimized fuzzy C-means includes:

an acquisition module configured to: acquire an image to be processed and convert the image to be processed into a neutrosophic image;

an image preprocessing module configured to: denoise the neutrosophic image and then perform an image enhancement operation on the denoised result;

an information entropy calculation module configured to: calculate the element information entropy of the image subset I from the image-enhanced result;

an image segmentation module configured to: if the ratio of the information entropies of adjacent elements is smaller than the set threshold, segment the neutrosophic image with the quantum-behaved particle swarm optimized fuzzy C-means algorithm to obtain the image segmentation result; otherwise, return to the image preprocessing module.
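
As a reading aid for the last two modules, the quantity computed by the information entropy calculation module and the stopping test applied by the image segmentation module can be sketched as follows. The histogram binning, the ratio-based criterion and the threshold value are illustrative assumptions; the text above only requires that the ratio of the information entropies of adjacent elements fall below a set threshold.

```python
import numpy as np

def subset_entropy(indeterminate, bins=256):
    """Entropy of the indeterminate subset I: En_I = -sum_i p_I(i) * ln p_I(i)."""
    counts, _ = np.histogram(indeterminate.ravel(), bins=bins, range=(0.0, 1.0))
    p = counts / max(counts.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_has_stabilized(prev_entropy, curr_entropy, threshold=1e-3):
    """Stopping test: the relative change between the entropies of two adjacent
    preprocessing rounds must be smaller than the set threshold."""
    return abs(curr_entropy - prev_entropy) / max(prev_entropy, 1e-12) < threshold
```

When the test fails, control returns to the image preprocessing module for another round of denoising and enhancement; when it succeeds, the QPSO-FCM clustering is carried out on the preprocessed neutrosophic image.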

Embodiment 3: this embodiment further provides an electronic device, including a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the method described in Embodiment 1 are completed.

Embodiment 4: this embodiment further provides a computer-readable storage medium for storing computer instructions; when the computer instructions are executed by a processor, the steps of the method described in Embodiment 1 are completed.

The above descriptions are only preferred embodiments of the present application and are not intended to limit the present application; for those skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included within the protection scope of this application.

Claims (4)

1. An image segmentation method for quantum-behaved particle swarm optimized fuzzy C-means, characterized by comprising the following steps:
an acquisition step: acquiring an image to be processed, and converting the image to be processed into a neutrosophic image;
converting the image to be processed into a neutrosophic image comprises the following specific steps:
P_NS = {T, I, F}   (1)
T(i,j) = (ḡ(i,j) − ḡ_min) / (ḡ_max − ḡ_min)   (2)
ḡ(i,j) = (1/(w×w)) Σ_{m=i−w/2}^{i+w/2} Σ_{n=j−w/2}^{j+w/2} g(m,n)   (3)
I(i,j) = (δ(i,j) − δ_min) / (δ_max − δ_min)   (4)
δ(i,j) = |g(i,j) − ḡ(i,j)|   (5)
F(i,j) = 1 − T(i,j)   (6)
wherein P_NS is a pixel point of the image in the NS domain;
T(i,j) is the value of point (i,j) of the neutrosophic subset image T;
I(i,j) is the value of point (i,j) of the neutrosophic subset image I;
F(i,j) is the value of point (i,j) of the neutrosophic subset image F;
ḡ(i,j) is the mean value of g(i,j) over the w×w region;
ḡ_min denotes the minimum value of ḡ(i,j), and ḡ_max denotes the maximum value of ḡ(i,j);
g(m,n) is the gray value of the image at point (m,n) within the w×w region;
δ(i,j) is the absolute value of the difference between the pixel point g(i,j) and the mean value ḡ(i,j) of g(i,j) over the region w×w;
an image preprocessing step: denoising the neutrosophic image, and then performing an image enhancement operation on the denoised result;
denoising the neutrosophic image comprises the following specific steps:
P̄_NS(α) = P(T̄(α), Ī(α), F̄(α))   (11)
T̄(α) = T, if I < α;  T̄(α) = T̄_α, if I ≥ α   (12)
T̄_α(i,j) = (1/(w×w)) Σ_{m=i−w/2}^{i+w/2} Σ_{n=j−w/2}^{j+w/2} T(m,n)   (13)
F̄(α) = F, if I < α;  F̄(α) = F̄_α, if I ≥ α   (14)
F̄_α(i,j) = (1/(w×w)) Σ_{m=i−w/2}^{i+w/2} Σ_{n=j−w/2}^{j+w/2} F(m,n)   (15)
Ī_α(i,j) = (δ_α(i,j) − δ_α min) / (δ_α max − δ_α min)   (16)
δ_α(i,j) = |T̄_α(i,j) − mean_w(T̄_α)(i,j)|   (17)
wherein P̄_NS(α) is the set of pixels of the image in the NS domain after the point-wise α-mean operation;
T̄(α) is the α-mean of the neutrosophic subset image T, Ī(α) is the α-mean subset of the neutrosophic subset image I, and F̄(α) is the α-mean of the neutrosophic subset image F on which the α-mean operation is to be performed;
T is the set of true values of the original image, I is the set of indeterminate values of the original image, and α takes the value 0.85;
w×w is the size of the local region, (i,j) is a pixel point of the original image, and (m,n) is a pixel point of the neighbourhood within the w×w region;
T(m,n) is the value of point (m,n) of the neutrosophic subset image T, and T̄_α(i,j) is the α-mean gray-scale average intensity value of the neutrosophic subset image T;
F(m,n) is the value of point (m,n) of the neutrosophic subset image F, and F̄_α(i,j) is the α-mean gray-scale average intensity value of the neutrosophic subset image F;
mean_w(T̄_α)(i,j) is the average intensity value over the w×w region used for the α-mean of the neutrosophic subset image I;
δ_α(i,j) is the absolute value of the difference between T̄_α(i,j), obtained after the α-mean operation at pixel point (i,j), and this average intensity value;
δ_α min and δ_α max are the minimum and maximum values of δ_α(i,j);
performing an image enhancement operation on the denoised result comprises the following specific steps:
P'_NS(β) = P(T'(β), I'(β), F'(β))   (18)
T'(β) = T̄, if I < β;  T'(β) = T'_β, if I ≥ β   (19)
T'_β(i,j) = 2·T̄²(i,j), if T̄(i,j) ≤ 0.5;  T'_β(i,j) = 1 − 2·(1 − T̄(i,j))², if T̄(i,j) > 0.5   (20)
F'(β) = F̄, if I < β;  F'(β) = F'_β, if I ≥ β   (21)
F'_β(i,j) = 2·F̄²(i,j), if F̄(i,j) ≤ 0.5;  F'_β(i,j) = 1 − 2·(1 − F̄(i,j))², if F̄(i,j) > 0.5   (22)
I'_β(i,j) = (δ'(i,j) − δ'_min) / (δ'_max − δ'_min)   (23)
δ'(i,j) = |T'(i,j) − mean_w(T')(i,j)|   (24)
wherein:
T'(β) is the true subset of the neutrosophic set subjected to the β-enhancement operation, and T'_β is the true subset of the neutrosophic set after the β-enhancement operation when I ≥ β;
I is the subset of indeterminacy values of the neutrosophic set, and β takes the value 0.85;
T'_β(i,j) is the true value obtained by performing the β-enhancement operation on T̄(i,j), the value of the set T after the mean operation;
F'(β) is the false subset of the neutrosophic set subjected to the β-enhancement operation, and F'_β is the false subset of the neutrosophic set after the β-enhancement operation when I ≥ β;
F'_β(i,j) is the false value obtained by performing the β-enhancement operation on F̄(i,j), the false value after the mean operation;
I'_β(i,j) is the indeterminacy subset of the pixel points after enhancement;
δ'_min is the minimum, and δ'_max the maximum, of the absolute value of the difference between the value T'(i,j) of pixel point (i,j) after the image enhancement operation and the mean value mean_w(T')(i,j);
mean_w(T')(i,j) is the average intensity of the subset T' in the w×w region, T' being the true set subjected to the β-enhancement operation in the w×w region;
δ'(i,j) is the absolute value of the difference between T'(i,j) and mean_w(T')(i,j) for pixel point (i,j) after the image enhancement operation;
an information entropy calculation step: calculating the element information entropy of the image subset I from the result after image enhancement; the method comprises the following specific steps:
En_I = −Σ_i p_I(i) · ln p_I(i)   (9)
wherein En_I is the entropy of the neutrosophic subset image I, and p_I(i) is the probability of element i in the neutrosophic subset image I;
an image segmentation step: if the ratio of the information entropies of adjacent elements is smaller than a set threshold, segmenting the neutrosophic image with the fuzzy C-means algorithm optimized by quantum-behaved particle swarm to obtain an image segmentation result; otherwise, returning to the image preprocessing step;
the objective function is defined as:
J_m(U, C) = Σ_{i=1}^{c} Σ_{j=1}^{n} (u_ij)^m · ‖x_j − c_i‖²
subject to Σ_{i=1}^{c} u_ij = 1 for each sample j
and u_ij ∈ [0, 1]
the fuzzy C-means algorithm optimized by quantum-behaved particle swarm is used for segmenting the neutrosophic image to obtain an image segmentation result; the method comprises the following specific steps:
S41: obtaining an initial number of clusters C, a fuzziness parameter m, a particle swarm size N and a maximum number of iterations MaxIt; the number of cluster centers is the dimension of each particle;
S42: carrying out initialization coding on the N cluster centers to form N first-generation particles; the number of cluster centers is equivalent to the dimension of the particles; the pbest of each particle is its current position, and gbest is the best position among all particles in the current population;
S43: calculating each cluster center C(k) and the center vector U(k) of the membership degrees;
S44: calculating the fitness of each particle; if the fitness of a particle is better than the fitness of that particle's current optimal position, updating the optimal position of the single particle; if the fitness of the best position among all the particles is better than the fitness of the current global optimal position, updating the global optimal position;
S45: updating the position of each particle to generate a new particle population;
the step S45 updates the position of each particle with equations (30)-(33) to generate a new particle population:
p_i,j(t) = φ_j(t)·pbest_i,j(t) + (1 − φ_j(t))·gbest_j(t)   (30)
X_i,j(t+1) = p_i,j(t) ± α·|C_j(t) − X_i,j(t)|·ln[1/u_i,j(t)],  u_i,j(t) ~ U(0,1)   (31)
C_j(t) = (1/N)·Σ_{i=1}^{N} pbest_i,j(t)   (32)
wherein p_i,j(t) is the potential well in the jth dimension of the ith particle at the tth iteration; its location actually lies in the hyper-rectangle whose vertices are the individual optimal position pbest_i,j(t) and the group optimal position gbest_j(t), and it varies as pbest and gbest vary; φ_j(t) and u_i,j(t) are random numbers uniformly distributed on [0,1] in the jth dimension at the tth iteration; X_i,j(t+1) is the position of the ith particle in the jth dimension at iteration t+1; C_j(t) is the jth component of the vector C(t); α is the contraction-expansion coefficient of QPSO, and its value is determined by equation (33):
α = (α_1 − α_2)·(MaxIt − t)/MaxIt + α_2   (33)
wherein α_1 and α_2 are respectively the initial value and the final value of the parameter α, t is the current iteration number, MaxIt is the maximum number of iterations allowed, and the value of α decreases from 1.0 at the beginning of the search to 0.5 at the end of the search;
the fitness function of each individual is defined as follows:
J_m(U, C) = Σ_{i=1}^{c} Σ_{k=0}^{C} (u_ik)^m · ‖k − c_i‖²   (34)
wherein J_m(U, C) is the fitness function of the image; k is the gray level, k = 0-C, and C is the maximum gray level; u_ik is the membership degree of gray level k in the ith class, and c_i is the ith cluster center;
S46: stopping the iteration if the current iteration count reaches the previously set maximum number; the best solution is found in the last generation, otherwise S43 is repeated.
2. An image segmentation system for quantum-behaved particle swarm optimized fuzzy C-means, adopting the image segmentation method for quantum-behaved particle swarm optimized fuzzy C-means as claimed in claim 1, characterized by comprising:
an acquisition module configured to: acquire an image to be processed, and convert the image to be processed into a neutrosophic image;
an image preprocessing module configured to: denoise the neutrosophic image, and then perform an image enhancement operation on the denoised result;
an information entropy calculation module configured to: calculate the element information entropy of the image subset I from the result after image enhancement;
an image segmentation module configured to: if the ratio of the information entropies of adjacent elements is smaller than a set threshold, segment the neutrosophic image with the fuzzy C-means algorithm optimized by quantum-behaved particle swarm to obtain an image segmentation result; otherwise, return to the image preprocessing module;
the fuzzy C-means algorithm optimized by quantum-behaved particle swarm is used for segmenting the neutrosophic image to obtain the image segmentation result; the method comprises the following specific steps:
S41: obtaining an initial number of clusters C, a fuzziness parameter m, a particle swarm size N and a maximum number of iterations MaxIt; the number of cluster centers is the dimension of each particle;
S42: carrying out initialization coding on the N cluster centers to form N first-generation particles; the number of cluster centers is equivalent to the dimension of the particles; the pbest of each particle is its current position, and gbest is the best position among all particles in the current population;
S43: calculating each cluster center C(k) and the center vector U(k) of the membership degrees;
S44: calculating the fitness of each particle; if the fitness of a particle is better than the fitness of that particle's current optimal position, updating the optimal position of the single particle; if the fitness of the best position among all the particles is better than the fitness of the current global optimal position, updating the global optimal position;
S45: updating the position of each particle to generate a new particle population;
S46: stopping the iteration if the current iteration count reaches the previously set maximum number; the best solution is found in the last generation, otherwise S43 is repeated.
3. An electronic device comprising a memory and a processor and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of claim 1.
4. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of claim 1.
CN201910998269.6A 2019-10-17 2019-10-17 Image segmentation method and system for quantum-behaved particle swarm optimization fuzzy C-means Active CN110751662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910998269.6A CN110751662B (en) 2019-10-17 2019-10-17 Image segmentation method and system for quantum-behaved particle swarm optimization fuzzy C-means

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910998269.6A CN110751662B (en) 2019-10-17 2019-10-17 Image segmentation method and system for quantum-behaved particle swarm optimization fuzzy C-means

Publications (2)

Publication Number Publication Date
CN110751662A CN110751662A (en) 2020-02-04
CN110751662B true CN110751662B (en) 2022-10-25

Family

ID=69278997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910998269.6A Active CN110751662B (en) 2019-10-17 2019-10-17 Image segmentation method and system for quantum-behaved particle swarm optimization fuzzy C-means

Country Status (1)

Country Link
CN (1) CN110751662B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114943701B (en) * 2022-05-20 2024-06-14 南通鼎彩新材料科技有限公司 Intelligent control system of granulating equipment for heat-shrinkable tube

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898602A (en) * 2018-06-27 2018-11-27 南京邮电大学 A kind of FCM medical image cutting method based on improvement QPSO
CN110111343A (en) * 2019-05-07 2019-08-09 齐鲁工业大学 A kind of middle intelligence image partition method and device based on improvement fuzzy C-mean algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402623B2 (en) * 2017-11-30 2019-09-03 Metal Industries Research & Development Centre Large scale cell image analysis method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898602A (en) * 2018-06-27 2018-11-27 南京邮电大学 A kind of FCM medical image cutting method based on improvement QPSO
CN110111343A (en) * 2019-05-07 2019-08-09 齐鲁工业大学 A kind of middle intelligence image partition method and device based on improvement fuzzy C-mean algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Application of Clustering Algorithms; Li Yin; China Master's Theses Full-text Database (Master), Information Science and Technology Series (Monthly); 2013-12-15; pp. 22-24 *

Also Published As

Publication number Publication date
CN110751662A (en) 2020-02-04

Similar Documents

Publication Publication Date Title
Kendall et al. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics
Shan et al. Automatic facial expression recognition based on a deep convolutional-neural-network structure
CN107680120A (en) Tracking Method of IR Small Target based on rarefaction representation and transfer confined-particle filtering
CN109559328B (en) Bayesian estimation and level set-based rapid image segmentation method and device
WO2023142602A1 (en) Image processing method and apparatus, and computer-readable storage medium
Yang A CNN-based broad learning system
CN108416347A (en) Well-marked target detection algorithm based on boundary priori and iteration optimization
CN108427919A (en) A kind of unsupervised oil tank object detection method guiding conspicuousness model based on shape
Li et al. Statistical thresholding method for infrared images
Yuan et al. Half-CNN: a general framework for whole-image regression
Zhang et al. Part-based visual tracking with spatially regularized correlation filters
CN112132806A (en) A Method of Extracting Changed Regions Based on Fuzzy Space Markov Random Fields
Yang et al. Overfitting reduction of pose estimation for deep learning visual odometry
CN107392926B (en) Remote sensing image feature selection method based on previous land thematic map
CN110751662B (en) Image segmentation method and system for quantum-behaved particle swarm optimization fuzzy C-means
Xie et al. 3D surface segmentation from point clouds via quadric fits based on DBSCAN clustering
CN110135435A (en) A method and device for saliency detection based on extensive learning system
Shu et al. Wasserstein distributional harvesting for highly dense 3D point clouds
CN103295236B (en) Markov multiple features random field models construction method and brain MR image cutting techniques thereof
Sun et al. Robust visual tracking based on convolutional neural network with extreme learning machine
CN104573727B (en) A kind of handwriting digital image dimension reduction method
CN107492101B (en) Multi-modal nasopharyngeal tumor segmentation algorithm based on self-adaptive constructed optimal graph
WO2022121545A1 (en) Graph convolutional network-based grid segmentation method
Wang et al. Sparse least squares support vector machines based on Meanshift clustering method
Xiang et al. Range image segmentation based on split-merge clustering

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 250353 University Road, Changqing District, Ji'nan, Shandong Province, No. 3501

Patentee after: Qilu University of Technology (Shandong Academy of Sciences)

Country or region after: China

Address before: 250353 University Road, Changqing District, Ji'nan, Shandong Province, No. 3501

Patentee before: Qilu University of Technology

Country or region before: China

CP03 Change of name, title or address