CN115512144A - Automatic XRF spectrogram classification method based on convolution self-encoder - Google Patents

Automatic XRF spectrogram classification method based on convolution self-encoder

Info

Publication number
CN115512144A
CN115512144A (Application No. CN202211053356.2A)
Authority
CN
China
Prior art keywords
data
xrf
input
sample
convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211053356.2A
Other languages
Chinese (zh)
Inventor
李福生
王欣然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202211053356.2A priority Critical patent/CN115512144A/en
Publication of CN115512144A publication Critical patent/CN115512144A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/22 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material
    • G01N23/223 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by measuring secondary emission from the material by irradiating the sample with X-rays or gamma-rays and by measuring X-ray fluorescence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/7753 Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00 Investigating materials by wave or particle radiation
    • G01N2223/07 Investigating materials by wave or particle radiation secondary emission
    • G01N2223/076 X-ray fluorescence
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00 Investigating materials by wave or particle radiation
    • G01N2223/10 Different kinds of radiation or particles
    • G01N2223/101 Different kinds of radiation or particles electromagnetic radiation
    • G01N2223/1016 X-ray

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic XRF spectrum classification method based on a convolutional autoencoder, and belongs to the field of XRF spectrum analysis and processing. In the method, the acquired XRF spectrum data to be tested are normalized, and the normalized one-dimensional spectral data vectors are converted into two-dimensional spectral information matrices, so that the XRF spectra can be processed in the same way as images, which improves classification accuracy. On this basis, a convolutional autoencoder neural network model is built and used to compress the features of the XRF spectra to be tested, yielding feature-compressed spectral data; a k-means classification network is then designed to perform unsupervised classification, which effectively addresses the low classification accuracy and efficiency caused by the complicated characteristic indexes of XRF spectra.

Description

A method for automatic classification of XRF spectra based on a convolutional autoencoder

Technical Field

The invention relates to the field of XRF spectrum analysis and processing, and in particular to a method for automatic classification of XRF spectra based on a convolutional autoencoder.

Background Art

XRF (X-ray fluorescence spectroscopy) is a method that uses primary X-ray photons to excite the atoms of the substance under test so that they emit secondary X-rays, which are then used for composition analysis and chemical research. It offers many advantages, such as simple sample pretreatment, pollution-free and low-cost detection, high detection accuracy, a wide range of detectable elements, fast analysis, strong applicability, and good stability, and it has produced extensive social and economic benefits in technical fields such as industry, geology and mining, farmland environmental assessment, medicine, and public health. An XRF spectrum contains a large amount of information, and the classification of XRF spectra measured from unknown, heterogeneous samples across multiple fields has long been a research focus.

At present, XRF spectrum classification mainly relies on machine-learning methods such as back-propagation (BP) networks, support vector machines (SVM), and extreme learning machines (ELM), all of which show fairly strong performance. These algorithms also have their own shortcomings, however: the BP algorithm easily falls into local minima, SVM classification is unstable and computationally expensive, and ELM greatly speeds up learning but gives unstable results.

In recent years, image classification methods based on deep learning have attracted wide attention. Deep convolutional neural networks (CNNs) can process two-dimensional input images directly, have strong feature-learning ability, and have achieved great success in multi-class, large-scale computer-vision and speech-recognition tasks. Their application to XRF spectra is still limited, however: an XRF spectrum is essentially a one-dimensional vector and the data sets are usually small, so deep learning, despite its strong learning ability, tends to overfit, and conventional deep network architectures are not well suited to one-dimensional data.

Therefore, it is necessary to study a new XRF spectrum classification method that can classify large numbers of unknown XRF spectra automatically while improving classification accuracy and efficiency.

Summary of the Invention

The purpose of the invention is to provide a method for automatic classification of XRF spectra based on a convolutional autoencoder, so as to overcome the low classification accuracy and efficiency caused by the complicated characteristic indexes of XRF spectra.

To achieve the above purpose, the invention adopts the following technical solution:

A method for automatic classification of XRF spectra based on a convolutional autoencoder, comprising the following steps:

Step 1. Measure unknown samples with a handheld X-ray fluorescence spectrometer to obtain the XRF spectrum data to be tested;

Step 2. Normalize the XRF spectrum data to be tested obtained in step 1 and convert them into two-dimensional spectral information matrices, each of which contains 512×512 features after the conversion to two-dimensional space; then create a training sample set from the converted XRF spectral information;

Step 3. Generate a training set and a test set from the training sample set created in step 2;

Step 4. Build and train the convolutional autoencoder neural network model:

Step 4.1. Train iteratively on the training set to construct the convolutional autoencoder neural network model;

Step 4.2. Feed the test set into the convolutional autoencoder neural network model obtained in step 4.1 to obtain the compressed XRF spectra;

Step 5. Build a Kmeans unsupervised classification model, feed the feature-compressed XRF spectral data obtained in step 4.2 into it, and obtain the classification result of the unknown samples' XRF spectra through training.

Further, the normalization in step 2 uses the formula:

X_norm = (X - X_min) / (X_max - X_min)    (1)

In formula (1), X_max denotes the maximum value of the input one-dimensional spectral data of samples of the same kind, X_min denotes the minimum value, and X_norm denotes the normalized spectral data of that kind of sample.

Further, the convolutional autoencoder neural network model constructed in step 4.1 comprises an encoder and a decoder. The encoder consists of 1 input layer and m encoding hidden layers; the decoder consists of m-1 decoding hidden layers and 1 output layer. The input of the input layer is the training set obtained in step 3, and the input layer passes the input data to the m encoding hidden layers; the m encoding hidden layers compress the input data to obtain its core features and output them to the m-1 decoding hidden layers for decompression, which then pass them to the output layer; the output layer reconstructs the received core data to produce the feature-compressed XRF spectrum output.

Still further, the hidden layers of both the encoder and the decoder use the Tanh nonlinear function as the activation function, and the output layer of the decoder uses the LeakyReLU nonlinear function as the activation function.
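
A minimal PyTorch sketch of such an encoder-decoder is given below. The number of coding layers m = 3, the channel counts, and the kernel/stride choices are assumptions made for illustration (they happen to compress a 512×512 input to an 8×8×3 code, matching the size mentioned in the embodiment); the patent itself does not fix these hyperparameters here.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Encoder: input layer (the 1x512x512 matrix itself) + m = 3 coding hidden
    layers with Tanh; decoder: m-1 = 2 decoding hidden layers with Tanh plus
    1 output layer with LeakyReLU."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=4, stride=4), nn.Tanh(),
            nn.Conv2d(8, 16, kernel_size=4, stride=4), nn.Tanh(),
            nn.Conv2d(16, 3, kernel_size=4, stride=4), nn.Tanh(),   # core features: 3 x 8 x 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(3, 16, kernel_size=4, stride=4), nn.Tanh(),
            nn.ConvTranspose2d(16, 8, kernel_size=4, stride=4), nn.Tanh(),
            nn.ConvTranspose2d(8, 1, kernel_size=4, stride=4), nn.LeakyReLU(),  # reconstruction
        )

    def forward(self, x):
        code = self.encoder(x)      # compressed core features of the input
        recon = self.decoder(code)  # reconstruction close to the input
        return code, recon

model = ConvAutoencoder()
code, recon = model(torch.randn(2, 1, 512, 512))
print(code.shape, recon.shape)  # torch.Size([2, 3, 8, 8]) torch.Size([2, 1, 512, 512])
```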

Still further, the detailed process in step 4.2 by which the convolutional autoencoder neural network model produces the compressed XRF spectra is as follows:

Step 4.2.1. The test set obtained in step 3 is used as input data, fed to the input layer of the convolutional autoencoder, and mapped through the m hidden layers of the encoder, which compress the features and yield the core features of the input data, completing the encoding stage of the convolutional autoencoder. The encoding is expressed as:

y = f(w_y x + b_y)    (5)

where x is the loaded input data, y denotes the features learned by the intermediate hidden layers, w_y is the weight of the hidden-layer input, b_y is the offset coefficient of the hidden units, and f denotes the convolution operation;

Step 4.2.2. The core features obtained in step 4.2.1 are passed to the m-1 hidden layers of the decoder for decompression and then to an output layer for reconstruction, yielding output data close to the input data, i.e. the feature-compressed XRF spectrum, which completes the decoding process of the autoencoder. The decoding is expressed as:

z = f(w_z y + b_z)    (6)

where y denotes the features learned by the intermediate hidden layers, z is the data reconstructed from the hidden features y, w_z is the weight of the hidden-layer output, b_z is the offset coefficient of the output units, and f denotes the convolution operation;

The constraint of the convolutional autoencoder is:

w_y = w_z' = w    (7)

where w_z' denotes the transpose of w_z; the convolutional autoencoder shares the same tied weight w, which helps halve the number of model parameters;

The training objective of the convolutional autoencoder is to continuously reduce the error between the input data and the reconstructed data, which can be written as:

min_{w, b_y, b_z} c(x, z)

Here the parameters the autoencoder needs to train are w, b_y and b_z, where x is the loaded sample data, z denotes the reconstructed output data, and c(x, z) denotes the error between the input data and the reconstructed data;

The weight-update rules are expressed by the following formulas:

w = w - η · ∂cost(x, z)/∂w

b_y = b_y - η · ∂cost(x, z)/∂b_y

b_z = b_z - η · ∂cost(x, z)/∂b_z

where cost(x, z) is the error loss between the input data and the reconstructed data, η is the learning rate, ∂cost(x, z)/∂w denotes the partial derivative with respect to the weight w, ∂cost(x, z)/∂b_y the partial derivative with respect to the offset coefficient b_y of the hidden units, and ∂cost(x, z)/∂b_z the partial derivative with respect to the offset coefficient b_z of the output units.
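
The tied-weight constraint and the gradient-descent updates above can be sketched for a single convolutional coding layer as follows. This is purely illustrative: the layer shape, the squared-error cost, and the use of conv_transpose2d to apply the transposed weight in the decoding step are assumptions of the sketch, not details fixed by the patent.

```python
import torch
import torch.nn.functional as F

# One convolutional coding layer whose weight w is reused (transposed) in the
# decoding step, i.e. the tied-weight constraint w_y = w_z' = w.
w   = (0.1 * torch.randn(8, 1, 4, 4)).requires_grad_(True)
b_y = torch.zeros(8, requires_grad=True)   # hidden-unit offset coefficients
b_z = torch.zeros(1, requires_grad=True)   # output-unit offset coefficients
eta = 1e-3                                 # learning rate

x = torch.randn(4, 1, 64, 64)              # a batch of loaded input data
for _ in range(10):
    y = torch.tanh(F.conv2d(x, w, b_y, stride=4))   # encoding: y = f(w_y x + b_y)
    z = F.conv_transpose2d(y, w, b_z, stride=4)     # decoding with the same tied weight w
    cost = ((x - z) ** 2).mean()                    # error between input and reconstruction
    g_w, g_by, g_bz = torch.autograd.grad(cost, [w, b_y, b_z])
    with torch.no_grad():                           # w = w - eta * d cost / d w, etc.
        w   -= eta * g_w
        b_y -= eta * g_by
        b_z -= eta * g_bz
print(float(cost))
```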

Further, the Kmeans unsupervised classification model in step 5 is built and trained as follows:

Step 5.1. Construction of the Kmeans unsupervised classification model

Step 5.1.1. Given a sample data set x = {x1, x2, …, xn}, designate k (k ≤ n) data samples from the data set as the initial centers, compute the distance from each data sample in the set to the k centers, and assign each data sample to the class whose center is closest;

Step 5.1.2. For the data samples in each class obtained in step 5.1.1, update each class center, for example by taking the mean of the class;

Step 5.1.3. Repeat the two steps above to update the class centers. If the class centers no longer change, or the change is smaller than a preset threshold, the update ends and the clusters are formed, completing the construction of the Kmeans unsupervised classification model; otherwise, continue;

The iteration pursues the objective of minimizing the distance between each sample and the center of the class it belongs to; the objective function is:

arg min_S Σ_{i=1}^{k} Var S_i = arg min_S Σ_{i=1}^{k} Σ_{x∈S_i} ||x - μ_i||²

where μ_i denotes the mean of the set S_i, and the sum of the distances between all elements of the class and the mean is Var S_i;

The following distance measure is chosen:

d(X_i, μ_j) = sqrt( Σ (X_i - μ_j)² )

where X_i denotes the i-th data sample;

Step 5.2. Feed the feature-compressed XRF spectral data into the kmeans unsupervised classification model for training, obtaining the trained kmeans classification network model.
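
Steps 5.1.1-5.1.3 can be sketched directly in NumPy as below; the Euclidean distance, the convergence tolerance, and the random synthetic features standing in for the compressed spectra are assumptions of this example.

```python
import numpy as np

def kmeans(samples, k, tol=1e-4, max_iter=100):
    """Plain k-means: assign each sample to the nearest of k centers,
    update each center as the mean of its class, stop when centers settle."""
    rng = np.random.default_rng(0)
    centers = samples[rng.choice(len(samples), size=k, replace=False)]  # step 5.1.1: k initial centers
    for _ in range(max_iter):
        # step 5.1.1: distance of every sample to every center, shortest wins
        dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # step 5.1.2: update each class center as the mean of its members
        new_centers = np.array([samples[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # step 5.1.3: stop when the centers no longer change (within a threshold)
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return labels, centers

features = np.random.rand(300, 192)          # e.g. flattened 8x8x3 compressed spectra
labels, centers = kmeans(features, k=3)
print(np.bincount(labels))
```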

The beneficial effects of the invention are as follows. The acquired XRF spectrum data to be tested are normalized and then converted from one-dimensional spectral data vectors into two-dimensional spectral information matrices, so that the XRF spectra can be processed in the same way as images, which improves classification accuracy. On this basis, a convolutional autoencoder neural network model is built to compress the features of the XRF spectra to be tested, yielding feature-compressed spectral data; a kmeans classification network is designed to perform unsupervised classification, which more effectively solves the problem that, with a large number of unknown samples, the complicated characteristic indexes degrade classification accuracy and efficiency. At the same time, the invention fills a gap in research on classifying unknown samples from their XRF spectra. In addition, in the convolutional autoencoder neural network model, the hidden layers of both the encoder and the decoder use the Tanh nonlinear function as the activation function, and the output layer of the decoder uses the LeakyReLU nonlinear function as the activation function to avoid overfitting. The model combines deep-learning neural networks with XRF spectra, breaking through the limitation of using spectra and deep learning only for quantitative analysis and better integrating XRF spectra with deep learning for qualitative analysis of samples.

Brief Description of the Drawings

Fig. 1 is a flow chart of the method of the invention;

Fig. 2 is the image generated from the one-dimensional spectrum of a metal sample after the dimension conversion in the embodiment;

Fig. 3 is the image generated from the one-dimensional spectrum of an alloy sample after the dimension conversion in the embodiment;

Fig. 4 is the image generated from the one-dimensional spectrum of a soil sample after the dimension conversion in the embodiment;

Fig. 5 shows the classification results of the embodiment.

Detailed Description of the Embodiments

To make the purpose, technical solution and advantages of the invention clearer, the invention is further described below with reference to the following specific embodiment and the accompanying drawings:

As shown in Fig. 1, the method for automatic classification of XRF spectra based on a convolutional autoencoder according to the invention comprises the following steps:

Step 1. Obtain the XRF spectrum data to be tested:

Measure unknown samples with a handheld X-ray fluorescence spectrometer to obtain the spectrum data to be tested;

Step 2. Preprocess the spectrum data to be tested: normalize the XRF spectrum data to be tested obtained in step 1, then convert the one-dimensional spectral data vectors into two-dimensional spectral information matrices, i.e. into two-dimensional space. The detailed process is as follows:

1.1. Normalize the XRF spectrum data to be tested obtained in step 1 using the following normalization formula:

X_norm = (X - X_min) / (X_max - X_min)    (1)

where X_max denotes the maximum value of the input spectral data of samples of the same kind, X_min denotes the minimum value, and X_norm denotes the normalized spectral data of that kind of sample.

1.2. Convert the XRF spectrum data normalized in step 1.1 into two-dimensional feature matrices; each column of a two-dimensional feature matrix represents a spectral dimension and each row represents all the spectral information of one measurement. After the conversion to two-dimensional space, each spectral information matrix contains 512×512 features. This embodiment provides the images generated from the one-dimensional spectra of a metal sample, an alloy sample and a soil sample after the XRF spectra are converted to two-dimensional space; as can be seen from Figs. 2, 3 and 4, after the dimension conversion the spectral data can be processed in the same way as images, as sketched below.
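
As an illustration of steps 1.1-1.2, the sketch below normalizes a one-dimensional spectrum and reshapes it into a 512×512 spectral information matrix. It is a minimal sketch, not the patent's reference implementation; the zero-padding of spectra shorter than 512×512 channels and the synthetic test spectrum are assumptions made here so the example runs.

```python
import numpy as np

def spectrum_to_matrix(spectrum, side=512):
    """Min-max normalize a 1-D XRF spectrum and reshape it into a
    side x side two-dimensional spectral information matrix."""
    spectrum = np.asarray(spectrum, dtype=np.float32)
    # Min-max normalization, formula (1): X_norm = (X - X_min) / (X_max - X_min)
    x_min, x_max = spectrum.min(), spectrum.max()
    norm = (spectrum - x_min) / (x_max - x_min + 1e-12)
    # Pad (or truncate) to side*side values so the vector fills a square matrix
    target = side * side
    if norm.size < target:
        norm = np.pad(norm, (0, target - norm.size))
    else:
        norm = norm[:target]
    return norm.reshape(side, side)

# Example: a synthetic 2048-channel spectrum becomes a 512 x 512 matrix
matrix = spectrum_to_matrix(np.random.rand(2048))
print(matrix.shape)  # (512, 512)
```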

1.3. Set, via the batch_size parameter, the number of image data items in each package at loading time; then package the dimension-converted spectral images according to the set number and load the packaged data to obtain the training sample set.
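
A hedged sketch of the packaging and loading in step 1.3, using a PyTorch Dataset and DataLoader; the dataset class, the dummy matrices, and the batch size of 16 are assumptions of this example rather than details taken from the patent.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class XRFSpectrumDataset(Dataset):
    """Serves 512x512 spectral information matrices as 1-channel images."""
    def __init__(self, matrices):
        self.matrices = [torch.as_tensor(m, dtype=torch.float32) for m in matrices]

    def __len__(self):
        return len(self.matrices)

    def __getitem__(self, idx):
        return self.matrices[idx].unsqueeze(0)  # shape (1, 512, 512)

# Dummy data stands in for the converted spectra; batch_size sets how many
# matrices are packed into each loaded batch, as in step 1.3.
train_matrices = [np.random.rand(512, 512).astype("float32") for _ in range(64)]
loader = DataLoader(XRFSpectrumDataset(train_matrices), batch_size=16, shuffle=True)
for batch in loader:
    print(batch.shape)  # torch.Size([16, 1, 512, 512])
    break
```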

Step 3. Generate a training set and a test set from the training sample set obtained in step 2. The training-set and test-set data generated from the training sample set in this embodiment are shown in Table 1:

Table 1. XRF spectra of the unknown samples collected in the experiment


Step 4. Build and train the convolutional autoencoder neural network model:

4.1. Train iteratively on the training set to build the convolutional autoencoder neural network model, and set its activation functions and loss function.

The convolutional autoencoder neural network model constructed in step 4.1 comprises an encoder and a decoder. The encoder consists of 1 input layer and m encoding hidden layers; the decoder consists of m-1 decoding hidden layers and 1 output layer. The input of the input layer is the training set obtained in step 3, and the input layer passes the input data to the m encoding hidden layers; the m encoding hidden layers compress the input data to obtain its core features and output them to the m-1 decoding hidden layers for decompression, which then pass them to the output layer; the output layer reconstructs the received core data to produce the feature-compressed XRF spectrum output.

XRF spectrum data are nonlinear and contain many unknown features. Traditional dimensionality-reduction methods do not handle this nonlinearity well, and the many features of the different kinds of unknown samples in XRF spectra reduce the efficiency of the subsequent classification. To address these problems, in this embodiment the hidden layers of both the encoder and the decoder use the Tanh nonlinear function as the activation function, and the output layer of the decoder uses the LeakyReLU nonlinear function. With Tanh and LeakyReLU as activation functions, on the one hand the performance of the convolutional autoencoder neural network model improves, as it can perform both linear and nonlinear transformations and handle more complex data; on the other hand, since the convolutional autoencoder is trained in a reconstruction-oriented manner, the model can recover the original input well, which shows that the features stored in the intermediate hidden layers retain enough of the input information.

In this embodiment, the Tanh nonlinear activation function is expressed as:

f(x) = (e^x - e^(-x)) / (e^x + e^(-x))

where e is the base of the natural logarithm, a constant; x is the input data of the autoencoder, and f(x) is the output of the autoencoder after the Tanh activation;

The LeakyReLU nonlinear activation function is expressed as:

f(x_end) = x_end for x_end > 0, and f(x_end) = α·x_end for x_end ≤ 0

where x_end is the output data of the last layer of the autoencoder, f(x_end) is the output after the LeakyReLU activation, and α is the small positive slope applied to negative inputs.

The root-mean-square error loss function is used to measure the deviation between the reconstructed data and the real data, and the adaptive moment estimation (Adam) optimization algorithm is used to train the network parameters. The root-mean-square error loss is computed as:

Y_RMSE = sqrt( (1/N) Σ_{i=1}^{N} (y_i^l - ŷ_i^l)² )

where Σ denotes the summation operation, sqrt the square-root operation, Y_RMSE is the arithmetic square root of the mean squared error, y_i^l is the true label distribution of the input data, ŷ_i^l is the prediction of the autoencoder model, the superscript l is the total number of classes in the sample and indicates which true value and predicted value enter the loss calculation, and N is the total number of samples in each batch.

4.2. Feed the test set obtained in step 3 into the convolutional autoencoder neural network model and obtain the compressed XRF spectra through training. The detailed steps are as follows:

4.2.1. The test set obtained in step 3 is used as input data, fed to the input layer of the convolutional autoencoder neural network, and mapped through the m hidden layers of the encoder, which compress the features and yield the core features of the input data, completing the encoding process of the convolutional autoencoder neural network model. The encoding is expressed as:

y = f(w_y x + b_y)    (4)

where x is the loaded input data, y denotes the features learned by the intermediate hidden layers, w_y is the weight of the hidden-layer input, b_y is the offset coefficient of the hidden units, and f denotes the convolution operation;

4.2.2. The core features obtained in step 4.2.1 are passed to the m-1 hidden layers of the decoder for decompression and then to an output layer for reconstruction, yielding output data close to the input data, i.e. the feature-compressed XRF spectrum, which completes the decoding process of the autoencoder. In this embodiment, the input layer of the encoder and the output layer of the decoder have the same size. The decoding is expressed as:

z = f(w_z y + b_z)    (5)

where y denotes the features learned by the intermediate hidden layers, z is the data reconstructed from the hidden features y, w_z is the weight of the hidden-layer output, b_z is the offset coefficient of the output units, and f denotes the convolution operation;

w_y = w_z' = w    (6)

where w_z' denotes the transpose of w_z; this is the constraint of the convolutional autoencoder, meaning that the convolutional autoencoder shares the same tied weight w, which helps halve the number of model parameters;

The training objective of the convolutional autoencoder is to continuously reduce the error between the input data and the reconstructed data, which can be written as:

min_{w, b_y, b_z} c(x, z)

Here the parameters the autoencoder needs to train are w, b_y and b_z, where x is the loaded sample data, z denotes the reconstructed output data, and c(x, z) denotes the error between the input data and the reconstructed data;

The weight-update rules are expressed by the following formulas:

w = w - η · ∂cost(x, z)/∂w

b_y = b_y - η · ∂cost(x, z)/∂b_y

b_z = b_z - η · ∂cost(x, z)/∂b_z

where cost(x, z) is the error loss between the input data and the reconstructed data, η is the learning rate, ∂cost(x, z)/∂w denotes the partial derivative with respect to the weight w, ∂cost(x, z)/∂b_y the partial derivative with respect to the offset coefficient b_y of the hidden units, and ∂cost(x, z)/∂b_z the partial derivative with respect to the offset coefficient b_z of the output units.

In general, the convolutional autoencoder neural network model of this embodiment takes the training set or test set of unknown samples created in step 3 as input data; after feeding it into the convolutional autoencoder, the convolution operation serves as the activation of the encoding layers, and the convolutional autoencoder is trained in an unsupervised manner, yielding the trained convolutional autoencoder neural network model and the feature-compressed training spectral data.
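
A compact, hedged sketch of this unsupervised training loop is shown below; the two-layer stand-in autoencoder, batch size, learning rate, and number of epochs are all illustrative assumptions, and the reconstruction itself supervises the network so no labels are used.

```python
import torch
import torch.nn as nn

# Compact stand-in autoencoder (Tanh hidden layers, LeakyReLU output).
encoder = nn.Sequential(nn.Conv2d(1, 8, 4, stride=4), nn.Tanh(),
                        nn.Conv2d(8, 3, 4, stride=4), nn.Tanh())
decoder = nn.Sequential(nn.ConvTranspose2d(3, 8, 4, stride=4), nn.Tanh(),
                        nn.ConvTranspose2d(8, 1, 4, stride=4), nn.LeakyReLU())
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

data = torch.rand(32, 1, 512, 512)          # stands in for the loaded spectral matrices
for epoch in range(5):
    for i in range(0, len(data), 8):        # mini-batches of 8
        x = data[i:i + 8]
        code = encoder(x)                   # compressed features
        recon = decoder(code)               # reconstruction of the input
        loss = torch.sqrt(torch.mean((x - recon) ** 2))   # RMSE between input and reconstruction
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(epoch, float(loss))
```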

Step 5. Build the Kmeans unsupervised classification model, feed the feature-compressed XRF spectral data obtained in step 4.2 into it, and obtain the classification result of the unknown samples' XRF spectra through training. The detailed process is as follows:

5.1. Construction of the Kmeans unsupervised classification model

5.1.1. Given a sample data set x = {x1, x2, …, xn}, designate k (k ≤ n) data samples from the data set as the initial centers, compute the distance from each data sample in the set to the k centers, and assign each data sample to the class whose center is closest;

5.1.2. For the data samples in each class obtained in step 5.1.1, update each class center, for example by taking the mean of the class;

5.1.3. Repeat the two steps above to update the class centers. If the class centers no longer change, or the change is smaller than a certain threshold, the update ends and the clusters are formed, completing the construction of the Kmeans unsupervised classification model; otherwise, continue;

The iteration pursues the objective of minimizing the distance between each sample and the center of the class it belongs to; the objective function is:

arg min_S Σ_{i=1}^{k} Var S_i = arg min_S Σ_{i=1}^{k} Σ_{x∈S_i} ||x - μ_i||²

where μ_i denotes the mean of the set S_i, and the sum of the distances between all elements of the class and the mean is Var S_i;

The following distance measure is chosen:

d(X_i, μ_j) = sqrt( Σ (X_i - μ_j)² )

where X_i denotes the i-th data sample;

5.2. Feed the feature-compressed XRF spectral data into the kmeans network for training, obtaining the trained kmeans classification network model.

To verify the feasibility and effect of the above method, it is applied in this embodiment.

Step 1. Obtain the XRF spectrum data of the unknown samples to be classified;

Step 2. Preprocess and feature-compress the XRF spectrum data to be classified according to the method of step 2 above. Specifically, the XRF spectra to be classified are first normalized and converted into two-dimensional spectral information matrices, i.e. into two-dimensional space, and a training sample set is created from the converted XRF spectral information;

Step 3. Feed the training sample set created in step 2 into the convolutional autoencoder neural network model for feature compression, obtaining the feature-compressed XRF spectral data with a compressed size of 8×8×3;

Step 4. Feed the feature-compressed XRF spectral data into the trained Kmeans network to obtain the classification result of the unknown samples' XRF spectra.
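
As an illustrative pipeline for steps 3-4 of this application example, the compressed 8×8×3 feature maps can be flattened and clustered with scikit-learn's KMeans; the use of scikit-learn, the flattening, and the random stand-in feature maps are choices made for this sketch rather than details from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-ins for the 8x8x3 feature maps produced by the trained encoder
compressed = np.random.rand(120, 8, 8, 3)

features = compressed.reshape(len(compressed), -1)     # flatten to 192-dim vectors
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
predicted_classes = kmeans.labels_                     # cluster index per unknown sample
print(np.bincount(predicted_classes))
```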

Through the above steps, the classification results of the unknown samples' XRF spectra are finally obtained, as shown in Fig. 5. A confusion matrix gives an intuitive view of how the classification model performs on each class of samples; it is often used as part of model evaluation and makes it very easy to see whether classes are being confused with one another. In the confusion matrix, the diagonal serves as the dividing line: as shown in Fig. 5, entries on the diagonal represent correct predictions, while off-diagonal entries represent samples wrongly predicted as other classes. It can be seen that there are only two misclassifications, between soil and the metal and alloy classes, while the remaining classifications are very good, demonstrating the effectiveness of the method of the embodiment.
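
For reference, the sketch below shows how such a confusion matrix can be computed with scikit-learn once each cluster has been mapped to a reference class; the class coding (0 = metal, 1 = alloy, 2 = soil) and the synthetic labels are assumptions used only to make the example runnable, not the experimental data of Fig. 5.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Reference classes (0 = metal, 1 = alloy, 2 = soil) and cluster assignments
# after mapping each cluster to its majority reference class.
true_labels = np.repeat([0, 1, 2], 40)
predicted = true_labels.copy()
predicted[::25] = (predicted[::25] + 1) % 3   # inject a few misclassifications for demonstration

cm = confusion_matrix(true_labels, predicted)
print(cm)   # diagonal entries are correct predictions, off-diagonal ones are confusions
```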

Claims (6)

1. A method for automatic classification of XRF spectra based on a convolutional autoencoder, characterized by comprising the following steps:

Step 1. Measure unknown samples with a handheld X-ray fluorescence spectrometer to obtain the XRF spectrum data to be tested;

Step 2. Normalize the XRF spectrum data to be tested obtained in step 1 and convert them into two-dimensional spectral information matrices, each of which contains 512×512 features after the conversion to two-dimensional space; then create a training sample set from the converted XRF spectral information;

Step 3. Generate a training set and a test set from the training sample set created in step 2;

Step 4. Build and train the convolutional autoencoder neural network model:

Step 4.1. Train iteratively on the training set to construct the convolutional autoencoder neural network model;

Step 4.2. Feed the test set into the convolutional autoencoder neural network model obtained in step 4.1 to obtain the compressed XRF spectra;

Step 5. Build a Kmeans unsupervised classification model, feed the feature-compressed XRF spectral data obtained in step 4.2 into it, and obtain the classification result of the unknown samples' XRF spectra through training.

2. The method for automatic classification of XRF spectra based on a convolutional autoencoder according to claim 1, characterized in that the normalization in step 2 uses the formula:

X_norm = (X - X_min) / (X_max - X_min)    (1)

In formula (1), X_max denotes the maximum value of the input one-dimensional spectral data of samples of the same kind, X_min denotes the minimum value, and X_norm denotes the normalized spectral data of that kind of sample.

3. The method for automatic classification of XRF spectra based on a convolutional autoencoder according to claim 1, characterized in that the convolutional autoencoder neural network model constructed in step 4.1 comprises an encoder and a decoder; the encoder consists of 1 input layer and m encoding hidden layers; the decoder consists of m-1 decoding hidden layers and 1 output layer; the input of the input layer is the training set obtained in step 3, and the input layer passes the input data to the m encoding hidden layers; the m encoding hidden layers compress the input data to obtain its core features and output them to the m-1 decoding hidden layers for decompression, which then pass them to the output layer; the output layer reconstructs the received core data to produce the feature-compressed XRF spectrum output.

4. The method for automatic classification of XRF spectra based on a convolutional autoencoder according to claim 3, characterized in that the hidden layers of both the encoder and the decoder use the Tanh nonlinear function as the activation function, and the output layer of the decoder uses the LeakyReLU nonlinear function as the activation function.

5. The method for automatic classification of XRF spectra based on a convolutional autoencoder according to claim 4, characterized in that the detailed process in step 4.2 of obtaining the compressed XRF spectra with the convolutional autoencoder neural network model is:

Step 4.2.1. The test set obtained in step 3 is used as input data, fed to the input layer of the convolutional autoencoder, and mapped through the m hidden layers of the encoder, which compress the features and yield the core features of the input data, completing the encoding stage of the convolutional autoencoder. The encoding is expressed as:

y = f(w_y x + b_y)    (5)

where x is the loaded input data, y denotes the features learned by the intermediate hidden layers, w_y is the weight of the hidden-layer input, b_y is the offset coefficient of the hidden units, and f denotes the convolution operation;

Step 4.2.2. The core features obtained in step 4.2.1 are passed to the m-1 hidden layers of the decoder for decompression and then to an output layer for reconstruction, yielding output data close to the input data, i.e. the feature-compressed XRF spectrum, which completes the decoding process of the autoencoder. The decoding is expressed as:

z = f(w_z y + b_z)    (6)

where y denotes the features learned by the intermediate hidden layers, z is the data reconstructed from the hidden features y, w_z is the weight of the hidden-layer output, b_z is the offset coefficient of the output units, and f denotes the convolution operation;

The constraint of the convolutional autoencoder is:

w_y = w_z' = w    (7)

where w_z' denotes the transpose of w_z; the convolutional autoencoder shares the same tied weight w, which helps halve the number of model parameters;

The training objective of the convolutional autoencoder is to continuously reduce the error between the input data and the reconstructed data, which can be written as:

min_{w, b_y, b_z} c(x, z)

Here the parameters the autoencoder needs to train are w, b_y and b_z, where x is the loaded sample data, z denotes the reconstructed output data, and c(x, z) denotes the error between the input data and the reconstructed data;

The weight-update rules are expressed by the following formulas:

w = w - η · ∂cost(x, z)/∂w

b_y = b_y - η · ∂cost(x, z)/∂b_y

b_z = b_z - η · ∂cost(x, z)/∂b_z

where cost(x, z) is the error loss between the input data and the reconstructed data, η is the learning rate, ∂cost(x, z)/∂w denotes the partial derivative with respect to the weight w, ∂cost(x, z)/∂b_y the partial derivative with respect to the offset coefficient b_y of the hidden units, and ∂cost(x, z)/∂b_z the partial derivative with respect to the offset coefficient b_z of the output units.

6. The method for automatic classification of XRF spectra based on a convolutional autoencoder according to claim 1, characterized in that the Kmeans unsupervised classification model in step 5 is built and trained as follows:

Step 5.1. Construction of the Kmeans unsupervised classification model

Step 5.1.1. Given a sample data set x = {x1, x2, …, xn}, designate k (k ≤ n) data samples from the data set as the initial centers, compute the distance from each data sample in the set to the k centers, and assign each data sample to the class whose center is closest;

Step 5.1.2. For the data samples in each class obtained in step 5.1.1, update each class center by taking the mean of the class;

Step 5.1.3. Repeat the two steps above to update the class centers. If the class centers no longer change, or the change is smaller than a preset threshold, the update ends and the clusters are formed, completing the construction of the Kmeans unsupervised classification model; otherwise, continue;

The iteration pursues the objective of minimizing the distance between each sample and the center of the class it belongs to; the objective function is:

arg min_S Σ_{i=1}^{k} Var S_i = arg min_S Σ_{i=1}^{k} Σ_{x∈S_i} ||x - μ_i||²

where μ_i denotes the mean of the set S_i, and the sum of the distances between all elements of the class and the mean is Var S_i;

The following distance measure is chosen:

d(X_i, μ_j) = sqrt( Σ (X_i - μ_j)² )

where X_i denotes the i-th data sample;

Step 5.2. Feed the feature-compressed XRF spectral data into the kmeans unsupervised classification model for training, obtaining the trained kmeans classification network model.
CN202211053356.2A 2022-08-31 2022-08-31 Automatic XRF spectrogram classification method based on convolution self-encoder Pending CN115512144A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211053356.2A CN115512144A (en) 2022-08-31 2022-08-31 Automatic XRF spectrogram classification method based on convolution self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211053356.2A CN115512144A (en) 2022-08-31 2022-08-31 Automatic XRF spectrogram classification method based on convolution self-encoder

Publications (1)

Publication Number Publication Date
CN115512144A true CN115512144A (en) 2022-12-23

Family

ID=84502166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211053356.2A Pending CN115512144A (en) 2022-08-31 2022-08-31 Automatic XRF spectrogram classification method based on convolution self-encoder

Country Status (1)

Country Link
CN (1) CN115512144A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116911354A (en) * 2023-09-14 2023-10-20 首都信息发展股份有限公司 Encoder neural network model construction method and data processing method

Similar Documents

Publication Publication Date Title
CN110569566B (en) Method for predicting mechanical property of plate strip
CN109902411B (en) Soil heavy metal content detection modeling method and device, detection method and device
CN115436407B (en) Element content quantitative analysis method combining random forest regression with principal component analysis
CN116935384A (en) Intelligent detection method for cell abnormality sample
CN112289391A (en) Anode aluminum foil performance prediction system based on machine learning
CN117574274B (en) A PSO-XGBoost system construction method with hybrid feature screening and hyperparameter optimization
CN115512144A (en) Automatic XRF spectrogram classification method based on convolution self-encoder
JP2021092467A (en) Data analysis system and data analysis method
CN112801936B (en) Self-adaptive background subtraction method for X-ray fluorescence spectrum
CN115984113A (en) Spectrum-air hypergraph regularization sparse self-representation hyperspectral waveband selection method
Zeng et al. From pixels to predictions: Spectrogram and vision transformer for better time series forecasting
Polanska et al. Learned harmonic mean estimation of the Bayesian evidence with normalizing flows
CN114036298B (en) Node classification method based on graph convolution neural network and word vector
Xie et al. Large-scale spectral analysis for element quantification using deep neural networks
CN118137485A (en) Informer-Bi-LSTM model-based power load prediction method and system
Sakaue et al. Learning tensor trains from noisy functions with application to quantum simulation
CN109540292B (en) A spectral preprocessing method
CN114994109B (en) XRF trace element quantitative analysis method based on ISOMAP-ELM
Xie et al. Enabling real-time low-cost spectral analysis on edge devices with deep neural networks: a robust hybrid approach
CN110032762A (en) Heavy metal content in soil detects modeling method and device, detection method and device
CN118655116B (en) Automatic analysis method for full algae based on fluorescent quantum dots
CN119028576B (en) Method for constructing a maxillary sinus cyst prediction model based on machine learning algorithm, maxillary sinus cyst prediction method and device, and storage medium
EP4485292A1 (en) Method and system for solving problems with multiple conflicting objectives
CN117805161A (en) An element identification method, system and equipment for X-ray fluorescence spectrum
CN117635744A (en) Snapshot spectrum compression imaging method based on L1 norm and low rank technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination