
CN111325112B - Tool wear state monitoring method based on deep gated recurrent unit neural network - Google Patents


Info

Publication number
CN111325112B
CN111325112B (application CN202010077631.9A; earlier publication CN111325112A)
Authority
CN
China
Prior art keywords
time
neural network
network
signal
layer
Prior art date
Legal status
Active
Application number
CN202010077631.9A
Other languages
Chinese (zh)
Other versions
CN111325112A (en)
Inventor
袁庆霓
陈启鹏
蓝伟文
杜飞龙
Current Assignee
Guizhou University
Original Assignee
Guizhou University
Priority date
Filing date
Publication date
Application filed by Guizhou University
Priority to CN202010077631.9A
Publication of CN111325112A
Application granted
Publication of CN111325112B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02: Preprocessing
    • G06F 2218/04: Denoising
    • G06F 2218/06: Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • G06F 2218/08: Feature extraction
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Feedback Control In General (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tool wear state monitoring method based on a deep gated recurrent unit neural network. Vibration signals generated during tool machining are collected in real time by sensors and, after wavelet threshold denoising, are fed into a one-dimensional convolutional neural network that extracts local features of the time-series signal at each single time step. The features are then passed to an improved deep gated recurrent unit neural network (CABGRUs) to extract time-series features across time steps, and an Attention mechanism is introduced to compute the network weights and allocate them reasonably. Finally, the weighted signal features are fed into a Softmax classifier to classify the tool wear state, which avoids the complexity and limitations of manually extracted features, effectively resolves the problem that a single convolutional neural network ignores the temporal correlation of time-series signals, and improves model accuracy through the Attention mechanism. The invention therefore improves both the real-time performance and the accuracy of tool wear state monitoring.

Description

Tool wear state monitoring method based on deep gated recurrent unit neural network

Technical Field

The present invention belongs to the field of manufacturing process monitoring, and in particular relates to a tool wear state monitoring method based on a deep gated recurrent unit neural network.

Background Art

In machining, cutting is the dominant process for shaping parts, and the wear state of the cutting tool directly affects machining accuracy, surface quality and production efficiency. Tool condition monitoring (TCM) technology is therefore of great significance for guaranteeing machining quality and enabling continuous automated machining. Current tool condition monitoring mainly relies on indirect measurement: sensors collect signals in real time during cutting, and after data processing and feature extraction, a machine learning (ML) model is used to monitor the amount of tool wear.

In the prior art, Zhang Ansi et al. proposed a remaining-useful-life prediction model for equipment based on transfer learning and long short-term memory (LSTM) networks: the model is pre-trained on different but related remaining-life data sets and the network structure and training parameters are then fine-tuned for the target data set. Experimental results show that transfer learning improves prediction accuracy when only a small number of samples is available. Jin Qi et al. proposed a planetary gearbox fault diagnosis method based on deep-learning diversity feature extraction and information fusion: a multi-objective optimization algorithm optimizes several stacked denoising autoencoders (SDAE) to extract diverse fault features, which a multi-response linear regression model then integrates for information fusion. Their results show that diversity feature extraction with information fusion effectively improves diagnosis accuracy and stability and generalizes well. Zhang Cunji et al. proposed transforming the tool vibration signal into an energy spectrogram by wavelet packet transform (WPT) and feeding it into a convolutional neural network for automatic feature extraction and classification; the deep convolutional neural network outperformed traditional neural network models. Cao Dali et al. proposed building a densely connected deep neural network, DenseNet, to adaptively extract the subtle features hidden in the raw time-series machining signal; their results show that deeper networks help mine the high-dimensional features hidden in the signal and improve the accuracy of the tool wear monitoring model. All of the above methods extract features adaptively through deep learning, but the convolutional neural networks they use rely too heavily on high-dimensional feature extraction: too many convolutional layers are prone to vanishing gradients, too few cannot capture the overall picture, and none of them considers the important correlation between successive samples of the time-series signal generated during machining.

Summary of the Invention

The purpose of the present invention is to overcome the above shortcomings and to propose a tool wear state monitoring method based on a deep gated recurrent unit neural network that improves the real-time performance and accuracy of tool wear state monitoring.

The tool wear state monitoring method based on a deep gated recurrent unit neural network of the present invention comprises the following steps:

Step 1: An acceleration sensor collects the three-axis vibration signal generated while the tool is milling, and the collected raw signal is denoised with the wavelet threshold denoising method. The vibration signal produced by each tool feed is cut into multiple short time-series sequences of length 2000. Specifically, 100,000 consecutive points are taken from each sampled signal and divided into 50 samples of 2000 points each; all 50 samples share the wear state label of that feed.
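A minimal preprocessing sketch of this step is given below, assuming Python with NumPy and PyWavelets; the wavelet family ('db4'), the decomposition level and the universal soft threshold are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Wavelet threshold denoising of a 1-D vibration signal (soft threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def segment_feed(raw_xyz, n_points=100_000, sample_len=2000):
    """Cut one tool feed (shape [N, 3] for x/y/z) into 50 samples of length 2000.
    All 50 samples inherit the wear label of this feed."""
    denoised = np.stack(
        [wavelet_denoise(raw_xyz[:n_points, k]) for k in range(3)], axis=1
    )
    return denoised.reshape(n_points // sample_len, sample_len, 3)  # (50, 2000, 3)
```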

Step 2: Local feature extraction from the single-time-step signal. A one-dimensional convolutional neural network processes the short time-series sequences produced during machining. Its convolutional part consists of two convolutional layers (CONV) and one pooling layer (POOL). The convolutional layers apply one-dimensional convolution to perform neighborhood filtering on each dimension of the signal and generate feature maps; each feature map can be regarded as the convolution of a different filter with the signal of the current time step. The one-dimensional convolutional layer is computed as:

$$x_j^l = f\left(\sum_{i \in M} x_i^{l-1} * k_{ij}^l + b_j^l\right)$$

where $x_j^l$ denotes the $j$-th feature map of layer $l$, $f$ the activation function, $M$ the number of input feature maps, $x_i^{l-1}$ the $i$-th feature map of layer $l-1$, $k_{ij}^l$ a trainable convolution kernel, and $b_j^l$ the bias parameter; the ReLU activation function is used.

The pooling layer uses max pooling, taking the maximum of the feature points within each neighborhood:

$$P_i^{l+1}(j) = \max_{(j-1)w+1 \le t \le jw} q_i^l(t)$$

where $q_i^l(t)$ denotes the value of the $t$-th neuron in the $i$-th feature vector of layer $l$, with $t \in [(j-1)w+1,\, jw]$; $w$ is the width of the pooling region; and $P_i^{l+1}(j)$ is the corresponding neuron value in layer $l+1$.

Step 3: The vibration signals generated during machining are temporally ordered. In order to mine the temporal patterns across relatively long intervals of the time series, an improved gated recurrent unit is used to extract the time-series features of the signal and learn the dependencies between them.

The improved gated recurrent unit stacks two bidirectional BiGRU networks to form the CABGRUs network; at the same time, an Attention mechanism is introduced into the CABGRUs network by adding an Attention layer, so that the model gains both the ability to extract time-series signal features in the forward and backward directions simultaneously and the ability to selectively learn the key information in the signal features.

In the improved deep gated recurrent unit neural network CABGRUs, each bidirectional BiGRU layer contains 256 neurons, the forward and backward GRU networks each consisting of 128 neurons. Each GRU neuron includes an update gate and a reset gate, denoted $z_t$ and $r_t$ respectively, where $\tilde{h}_t$ denotes the candidate hidden state at time step $t$, $h_t$ the hidden state at time step $t$, and $x_t$ the input vector at time step $t$. The update gate $z_t$ controls how much state information is carried into the current state: the closer $z_t$ is to 1, the more the current state reuses information from the previous time step. The reset gate $r_t$ controls which state information is removed from the previous state: the closer $r_t$ is to 0, the smaller the contribution of the previous output state. The formulas are as follows:

$$r_t = \sigma(x_t W_{xr} + h_{t-1} W_{hr} + b_r)$$

$$z_t = \sigma(x_t W_{xz} + h_{t-1} W_{hz} + b_z)$$

$$\tilde{h}_t = \tanh\left(x_t W_{xh} + (r_t \odot h_{t-1}) W_{hh} + b_h\right)$$

$$h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$$

where $W_{xr}$ and $W_{hr}$ are the weights of the reset gate, $W_{xz}$ and $W_{hz}$ the weights of the update gate, $W_{xh}$ and $W_{hh}$ the weights of the candidate hidden state, $b_r$, $b_z$ and $b_h$ are bias vectors, $\odot$ denotes the Hadamard (element-wise) product, $\sigma(\cdot)$ is the Sigmoid function, and $\tanh$ is the hyperbolic tangent activation function.

The high-dimensional features of the input time-series signal are fed into the forward GRU, which outputs the hidden state $\overrightarrow{h_t}$, and into the backward GRU, which outputs the hidden state $\overleftarrow{h_t}$; at time step $t$ the BiGRU layer of the CABGRUs network outputs the hidden state $P_t$.

The formulas are as follows:

$$\overrightarrow{h_t} = \mathrm{GRU}\left(x_t, \overrightarrow{h_{t-1}}\right)$$

$$\overleftarrow{h_t} = \mathrm{GRU}\left(x_t, \overleftarrow{h_{t+1}}\right)$$

$$P_t = \left[\overrightarrow{h_t};\ \overleftarrow{h_t}\right]$$
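The gate equations and the bidirectional combination above can be illustrated with the following NumPy sketch (parameter names are hypothetical; in practice the weights are learned by the framework):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU step. W holds the input weights, U the recurrent weights, b the biases."""
    r_t = sigmoid(x_t @ W["r"] + h_prev @ U["r"] + b["r"])            # reset gate
    z_t = sigmoid(x_t @ W["z"] + h_prev @ U["z"] + b["z"])            # update gate
    h_cand = np.tanh(x_t @ W["h"] + (r_t * h_prev) @ U["h"] + b["h"]) # candidate state
    return z_t * h_prev + (1.0 - z_t) * h_cand                        # new hidden state

def bigru_output(h_forward_t, h_backward_t):
    """BiGRU output at time step t: concatenation of forward and backward states."""
    return np.concatenate([h_forward_t, h_backward_t])                # 128 + 128 = 256
```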

Step 4: The Attention mechanism is introduced to compute the importance distribution of the signal features over consecutive time steps. The Attention mechanism assigns different initialization probability weights, forms a weighted sum with the output vectors of each time step of the BiGRU layer, and finally obtains the value through the Sigmoid function. The Attention mechanism is computed as:

$$u_t = \tanh(W_s P_t + b_s)$$

$$\alpha_t = \frac{\exp\left(u_t^{\top} u_s\right)}{\sum_t \exp\left(u_t^{\top} u_s\right)}$$

$$\nu = \sum_t \alpha_t P_t$$

where $P_t$ is the output feature vector of the BiGRU layer at time step $t$, $u_t$ is the hidden representation of $P_t$ obtained through a neural network layer, $u_s$ is a randomly initialized context vector, $\alpha_t$ is the importance weight obtained by normalizing $u_t$ with the Softmax function, and $\nu$ is the feature vector of the final signal representation; $u_s$ is generated randomly during training. Finally, the Attention layer output $\nu$ is mapped through the Softmax function to obtain the real-time classification result of the tool wear state.
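A sketch of this attention computation as a custom Keras layer, following the formulas above (layer and variable names are illustrative, not taken from the patent):

```python
import tensorflow as tf
from tensorflow.keras import layers

class AttentionLayer(layers.Layer):
    """Weights the BiGRU outputs P_t by learned importance scores and sums them."""
    def build(self, input_shape):
        dim = int(input_shape[-1])
        self.W_s = self.add_weight(name="W_s", shape=(dim, dim), initializer="glorot_uniform")
        self.b_s = self.add_weight(name="b_s", shape=(dim,), initializer="zeros")
        self.u_s = self.add_weight(name="u_s", shape=(dim, 1), initializer="glorot_uniform")

    def call(self, P):                                              # P: (batch, time, dim)
        u = tf.tanh(tf.tensordot(P, self.W_s, axes=1) + self.b_s)   # u_t = tanh(W_s P_t + b_s)
        scores = tf.tensordot(u, self.u_s, axes=1)                  # u_t^T u_s -> (batch, time, 1)
        alpha = tf.nn.softmax(scores, axis=1)                       # importance weights alpha_t
        return tf.reduce_sum(alpha * P, axis=1)                     # v = sum_t alpha_t P_t
```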

Step 5: Training of the network model. Dropout is introduced to prevent the model from overfitting during training. The activation function of the network model output is Softmax and the loss function is categorical_crossentropy; the time-series signal features obtained in the preceding steps are classified by wear state to obtain the classification result.

The input data are time-series signals. Feature extraction and representation of the time-series signal are realized through the convolutional layers (C1 and C2), Dropout layers, pooling layer (P1), Flatten layer, BiGRU layers (B1 and B2), Attention layer A1 and fully connected layers (F1 and F2). A time-series signal of size (2000, 3) is fed into the deep learning neural network. Convolutional layer C1 convolves the signal with 3×1 kernels at stride 1, producing a (20, 98, 128) feature map, which is passed to convolutional layer C2; C2 convolves it with 3×1 kernels at stride 1, producing a (20, 96, 128) feature map, which passes through Dropout (rate 0.5) to pooling layer P1; P1 applies max pooling and produces a (20, 48, 128) feature map, which is passed to Flatten layer L1, producing (20, 6144) features; these go to BiGRU layer B1, producing (20, 256) features, then to BiGRU layer B2, producing (20, 256) features; B2 passes through Dropout (rate 0.5) to Attention layer A1, producing a 256×1 feature vector, which goes to fully connected layer F1, outputting a 128×1 feature vector; F1 feeds fully connected layer F2, which finally outputs the three tool wear state classes used to determine the wear state of the tool at the current moment.
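Putting the layers together, the following Keras sketch reproduces the layer sequence just described; it assumes the (2000, 3) input is reshaped into 20 sub-windows of 100 points each, an interpretation inferred from the stated feature-map sizes, and it reuses the AttentionLayer sketched in step four.

```python
from tensorflow.keras import layers, models
# AttentionLayer is the custom layer sketched in step four.

def build_cabgrus(time_steps=20, window_len=100, channels=3, n_classes=3):
    inp = layers.Input(shape=(time_steps, window_len, channels))                 # (20, 100, 3)
    x = layers.TimeDistributed(layers.Conv1D(128, 3, activation="relu"))(inp)    # (20, 98, 128)
    x = layers.TimeDistributed(layers.Conv1D(128, 3, activation="relu"))(x)      # (20, 96, 128)
    x = layers.Dropout(0.5)(x)
    x = layers.TimeDistributed(layers.MaxPooling1D(2))(x)                        # (20, 48, 128)
    x = layers.TimeDistributed(layers.Flatten())(x)                              # (20, 6144)
    x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)          # (20, 256)
    x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)          # (20, 256)
    x = layers.Dropout(0.5)(x)
    x = AttentionLayer()(x)                                                      # (256,)
    x = layers.Dense(128, activation="relu")(x)                                  # F1
    out = layers.Dense(n_classes, activation="softmax")(x)                       # F2: 3 wear states
    return models.Model(inp, out, name="CABGRUs")
```

The model is then compiled with the Adam optimizer and the categorical cross-entropy loss, as described in step five and in the detailed description below.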

Compared with the prior art, the present invention has clear beneficial effects. As can be seen from the above scheme, the existing gated recurrent unit (GRU) neural network is improved by stacking two bidirectional BiGRU networks to form the CABGRUs network, and an Attention mechanism is introduced into the CABGRUs network by adding an Attention layer, so that the model gains both the ability to extract time-series signal features in the forward and backward directions simultaneously and the ability to selectively learn the key information in the signal features. In the CABGRUs network, each bidirectional GRU layer contains 256 neurons, the forward and backward GRU networks each consisting of 128 neurons, and each GRU neuron includes an update gate and a reset gate. This allows the real-time tool wear monitoring model to better learn the dependencies between the time-series features of the signals and improves classification accuracy. In addition, the Attention mechanism, a signal processing mechanism analogous to the selective attention of human vision, assigns different initialization probability weights, forms a weighted sum with the output vectors of each time step of the BiGRU layer, and then passes the result through the Sigmoid function to obtain the final value. In this way part of the key information is selectively filtered out of a large number of signal features and focused on; the focusing is reflected in the calculation of the weight coefficients, where different pieces of key information receive different weights, strengthening the share of key information by raising its weight and reducing the loss of key information in long time-series signals.

In summary, the present invention collects the vibration signals generated during machining in real time with sensors; after wavelet threshold denoising they are fed into a one-dimensional convolutional neural network to extract local features of the signal at each single time step, and then into the improved deep gated recurrent unit neural network CABGRUs to extract time-series features; the Attention mechanism is introduced to compute the network weights and allocate them reasonably; finally, the signal features with different weights are put into a Softmax classifier to classify the tool wear state. This avoids the complexity and limitations of manual feature extraction, effectively solves the problem that a single convolutional neural network ignores the temporal correlation of time-series signals, avoids the vanishing- and exploding-gradient problems of recurrent neural networks, and improves model accuracy through the Attention mechanism. The present invention therefore improves the real-time performance and accuracy of tool wear state monitoring.

The beneficial effects of the present invention are further illustrated below through the specific embodiments.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of the method of the present invention;

FIG. 2 is a schematic diagram of the CABGRUs network structure of the present invention;

FIG. 3 shows the training and validation curves of the CNN model in the embodiment;

FIG. 4 shows the training and validation curves of the BiGRU model in the embodiment;

FIG. 5 shows the training and validation curves of the CBLSTMs model in the embodiment;

FIG. 6 shows the training and validation curves of the CABGRUs model in the embodiment.

DETAILED DESCRIPTION

The specific implementation, features and effects of the tool wear state monitoring method based on a deep gated recurrent unit neural network proposed by the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.

Referring to FIG. 1, the tool wear state monitoring method based on a deep gated recurrent unit neural network of the present invention comprises the following steps:

Step 1: An acceleration sensor collects in real time the vibration signals generated by the CNC machining equipment while machining the workpiece. The signal after wavelet threshold denoising serves as the input of the real-time tool wear monitoring model and comprises the $\alpha_x$, $\alpha_y$ and $\alpha_z$ vibration signals. The raw vibration signals in the x, y and z directions are continuously sampled and cut into 2000 sampling points, forming a (2000, 3) tensor that is fed into the model as input data.

Step 2: Local feature extraction from the single-time-step signal. The (2000, 3) time-series signal is fed as input into a one-dimensional convolutional neural network (CNN) for neighborhood filtering, computed with a sliding window, finally yielding the high-dimensional features of the single-time-step signal.

The one-dimensional convolutional neural network directly processes the time-series signal generated during machining. Its convolutional part consists of two convolutional layers (CONV) and one pooling layer (POOL). The convolutional layers apply one-dimensional convolution to perform neighborhood filtering on each dimension of the signal and generate feature maps; each feature map can be regarded as the convolution of a different filter with the signal of the current time step. When the input time-series signal is $x_t$ and the filter is $w_t$, the feature map $y_t$ of the convolutional layer can be expressed as:

$$y_t = w_t * x_t$$

In a convolutional layer, each neuron of layer $l$ is connected only to the neurons within a local window of layer $l-1$, forming a locally connected network. The one-dimensional convolutional layer is computed as:

$$x_j^l = f\left(\sum_{i \in M} x_i^{l-1} * k_{ij}^l + b_j^l\right)$$

where $x_j^l$ denotes the $j$-th feature map of layer $l$, $f$ the activation function, $M$ the number of input feature maps, $x_i^{l-1}$ the $i$-th feature map of layer $l-1$, $k_{ij}^l$ a trainable convolution kernel, and $b_j^l$ the bias parameter. Considering convergence speed and overfitting, the nonlinear activation function of the present invention is the rectified linear unit (ReLU), which converges quickly, increases the sparsity of the network, reduces the interdependence of parameters and alleviates overfitting. The ReLU activation function is:

$$a_i^{l+1}(j) = \max\left(0,\ y_i^{l+1}(j)\right)$$

where $y_i^{l+1}(j)$ denotes the output value of the convolution operation and $a_i^{l+1}(j)$ denotes its activation value.

The convolutional layers are followed by a pooling layer that takes the local maximum or local mean, i.e. max pooling or mean pooling. The pooling layer acts like feature selection: it keeps the features robust to deformation while reducing the feature dimension, speeding up network training, reducing the number of parameters and improving feature robustness. The present invention uses max pooling, taking the maximum of the feature points within each neighborhood:

$$P_i^{l+1}(j) = \max_{(j-1)w+1 \le t \le jw} q_i^l(t)$$

where $q_i^l(t)$ denotes the value of the $t$-th neuron in the $i$-th feature vector of layer $l$, with $t \in [(j-1)w+1,\, jw]$; $w$ is the width of the pooling region; and $P_i^{l+1}(j)$ is the corresponding neuron value in layer $l+1$.

Through this feature extraction of the raw data by the one-dimensional convolutional neural network, the 3-dimensional time-series signal is better expressed as high-dimensional features, which facilitates the time-series feature extraction performed by the subsequent network.

Step 3: Time-series feature extraction from the signal. The improved gated recurrent unit processes the high-dimensional features produced from consecutive time steps of the signal and progressively synthesizes a vector feature representation of the input signal.

The improved gated recurrent unit stacks two bidirectional gated recurrent unit (BiGRU) networks to form the CABGRUs network; at the same time, an Attention mechanism is introduced into the CABGRUs network by adding an Attention layer, so that the model gains both the ability to extract time-series signal features in the forward and backward directions simultaneously and the ability to selectively learn the key information in the signal features.

The raw signals generated during machining are temporally ordered. An RNN can encode the time series of the signal and mine temporal patterns across relatively long intervals. To let the real-time tool wear monitoring model better learn the dependencies between the time-series features of the signals and improve classification accuracy, the present invention improves the existing gated recurrent unit (GRU) by stacking two bidirectional BiGRU networks into the CABGRUs network and introducing an Attention mechanism with an added Attention layer, so that the model can extract time-series signal features in both directions and selectively learn the key information in them.

In the deep gated recurrent unit neural network CABGRUs constructed by the present invention, each bidirectional BiGRU layer contains 256 neurons, the forward and backward GRU networks each consisting of 128 neurons; each GRU neuron includes an update gate and a reset gate, denoted $z_t$ and $r_t$ respectively.

Here $\tilde{h}_t$ denotes the candidate hidden state at time step $t$, $h_t$ the hidden state at time step $t$, and $x_t$ the input vector at time step $t$. The update gate $z_t$ controls how much state information is carried into the current state: the closer $z_t$ is to 1, the more the current state reuses information from the previous time step. The reset gate $r_t$ controls which state information is removed from the previous state: the closer $r_t$ is to 0, the smaller the contribution of the previous output state. The formulas are as follows:

$$r_t = \sigma(x_t W_{xr} + h_{t-1} W_{hr} + b_r)$$

$$z_t = \sigma(x_t W_{xz} + h_{t-1} W_{hz} + b_z)$$

$$\tilde{h}_t = \tanh\left(x_t W_{xh} + (r_t \odot h_{t-1}) W_{hh} + b_h\right)$$

$$h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$$

where $W_{xr}$ and $W_{hr}$ are the weights of the reset gate, $W_{xz}$ and $W_{hz}$ the weights of the update gate, $W_{xh}$ and $W_{hh}$ the weights of the candidate hidden state, $b_r$, $b_z$ and $b_h$ are bias vectors, $\odot$ denotes the Hadamard (element-wise) product, $\sigma(\cdot)$ is the Sigmoid function, and $\tanh$ is the hyperbolic tangent activation function.

The high-dimensional features of the input time-series signal are fed into the forward GRU, which outputs the hidden state $\overrightarrow{h_t}$, and into the backward GRU, which outputs the hidden state $\overleftarrow{h_t}$; at time step $t$ the BiGRU layer of the CABGRUs network outputs the hidden state $P_t$.

The formulas are as follows:

$$\overrightarrow{h_t} = \mathrm{GRU}\left(x_t, \overrightarrow{h_{t-1}}\right)$$

$$\overleftarrow{h_t} = \mathrm{GRU}\left(x_t, \overleftarrow{h_{t+1}}\right)$$

$$P_t = \left[\overrightarrow{h_t};\ \overleftarrow{h_t}\right]$$

Step 4: Introduce the Attention mechanism. The Attention mechanism computes the importance distribution of the signal features over consecutive time steps and generates a feature model of the signal with an attention probability distribution; that is, the Attention mechanism assigns different initialization probability weights, forms a weighted sum with the output vectors of each time step of the BiGRU layer, and finally obtains the value through the Sigmoid function.

The Attention mechanism introduced in the present invention selectively filters out part of the key information from a large number of signal features and focuses on it; the focusing is reflected in the calculation of the weight coefficients, where different pieces of key information receive different weights, strengthening the share of key information by raising its weight and reducing the loss of key information in long time-series signals. The Attention mechanism is computed as:

$$u_t = \tanh(W_s P_t + b_s)$$

$$\alpha_t = \frac{\exp\left(u_t^{\top} u_s\right)}{\sum_t \exp\left(u_t^{\top} u_s\right)}$$

$$\nu = \sum_t \alpha_t P_t$$

where $P_t$ is the output feature vector of the BiGRU layer at time step $t$, $u_t$ is the hidden representation of $P_t$ obtained through a neural network layer, $u_s$ is a randomly initialized context vector, $\alpha_t$ is the importance weight obtained by normalizing $u_t$ with the Softmax function, and $\nu$ is the feature vector of the final signal representation. $u_s$ is generated randomly during training. Finally, the Attention layer output $\nu$ is mapped through the Softmax function to obtain the real-time classification result of the tool wear state.

Step 5: Training of the network model. Dropout is introduced to prevent the model from overfitting during training. The activation function of the network model output is Softmax and the loss function is categorical_crossentropy; the time-series signal features obtained in the preceding steps are classified by wear state to obtain the classification result.

Dropout is introduced into the real-time tool wear monitoring model of the present invention to prevent overfitting during training. The output activation of the network model is Softmax and the loss function is categorical_crossentropy, which classify the obtained time-series signal features by wear state.

The formula is as follows:

$$y_i = \mathrm{softmax}(z_i) = \frac{e^{z_i}}{\sum_{j=1}^{M} e^{z_j}}, \qquad i = 1, \dots, M$$

Here $y$ is a vector whose dimension equals the number of classes; the value of each dimension lies in $[0, 1]$ and all dimensions sum to 1. Each value represents the probability that the tool wear state belongs to the corresponding class, and $M$ is the number of possible classes. During training, the whole model is trained with the categorical cross-entropy loss. The cross-entropy error is computed as:

$$E_k = -\sum_{i=1}^{m} \hat{y}_{i} \log y_{i}$$

$$Loss = \frac{1}{n} \sum_{k=1}^{n} E_k$$

where $m$ is the number of classes, $n$ the number of samples, $E_k$ the cross-entropy error of the $k$-th sample, $\hat{y}_i$ the $i$-th value of the true class label vector of the tool wear state, and $y_i$ the $i$-th value of the output vector $y$ of the Softmax classifier. The cross-entropy errors obtained in this way are averaged to form the loss function of the model. The Adam method is used to minimize the objective function when training the model. Adam is essentially RMSprop with a momentum term: it dynamically adjusts the learning rate of each parameter using first-order and second-order moment estimates of the gradient. Its main advantage is that, after bias correction, the learning rate of every iteration lies within a definite range, so the parameters change smoothly.
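A small NumPy illustration of the Softmax output and the averaged categorical cross-entropy defined above (the logits and labels are made-up example values):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy over n samples; y_true is one-hot, y_pred from Softmax."""
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

logits = np.array([[2.0, 0.5, -1.0],     # hypothetical network outputs for 2 samples
                   [0.1, 1.5,  0.3]])
y_pred = softmax(logits)                 # rows sum to 1, one probability per wear class
y_true = np.array([[1, 0, 0],            # one-hot labels: initial / normal / rapid wear
                   [0, 1, 0]])
loss = categorical_crossentropy(y_true, y_pred)
```

In Keras this corresponds to compiling the model with `optimizer="adam"` and `loss="categorical_crossentropy"`.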

FIG. 2 is a schematic diagram of the CABGRUs network structure. The input data of the CABGRUs-based neural network are time-series signals, whose feature extraction and representation are realized through the convolutional layers (C1 and C2), Dropout layers, pooling layer (P1), Flatten layer, BiGRU layers (B1 and B2), Attention layer A1 (attention mechanism) and fully connected layers (F1 and F2).

A time-series signal of size (2000, 3) is fed into the deep learning neural network. Convolutional layer C1 convolves the signal with 3×1 kernels at stride 1, producing a (20, 98, 128) feature map, which is passed to convolutional layer C2; C2 convolves it with 3×1 kernels at stride 1, producing a (20, 96, 128) feature map, which passes through Dropout (rate 0.5) to pooling layer P1; P1 applies max pooling and produces a (20, 48, 128) feature map, which is passed to Flatten layer L1, producing (20, 6144) features; these go to BiGRU layer B1, producing (20, 256) features, then to BiGRU layer B2, producing (20, 256) features; B2 passes through Dropout (rate 0.5) to Attention layer A1, producing a 256×1 feature vector, which goes to fully connected layer F1, outputting a 128×1 feature vector; F1 feeds fully connected layer F2, which finally outputs the three tool wear state classes used to determine the wear state of the tool at the current moment.

The embodiment is as follows:

1 Experimental design

(1) Condition monitoring

The experiment of the present invention uses a high-precision CNC vertical milling machine (model: VM600) to mill the workpiece; no coolant is used during milling. The workpiece is mold steel (S136) and the milling tool is an ultra-fine-grain tungsten carbide four-flute end mill whose cutting edges are coated with TiAlN. Table 1 lists the cutting parameters of the milling experiment.

Table 1 Cutting parameters of the milling experiment


In the experiment, three acceleration sensors (model: INV9822) are magnetically attached to the machine fixture in the x, y and z directions to collect in real time the raw vibration signals generated during machining; a high-precision digital acquisition instrument (model: INV3018CT) from the Beijing Oriental Vibration and Noise Research Institute processes the real-time signals and transmits them to the computer. The sampling frequency is 20 kHz. Each pass mills 200 mm along the x direction and is recorded as one milling stroke; each tool performs 330 strokes, and after each stroke the flank wear of every cutting edge is measured with a pre-calibrated high-precision digital microscope.

(2) Data analysis

The deep learning hardware platform of the experiment is a high-performance server with an Intel Xeon E5-2650 processor (2.3 GHz), 256 GB of memory and an NVIDIA GeForce TITAN X GPU. The software platform runs Ubuntu 16.04.4, with Keras as the deep learning front end and TensorFlow as the back end for data analysis.

The experiment uses four milling cutters (C1, C2, C3, C4), giving 1320 milling passes and 1320 raw signal samples. The data of three cutters (C1, C2, C3) are used for model training and validation and the data of one cutter (C4) for model testing; 80% of the 990 samples are randomly selected as the training set and 20% as the validation set. Deep learning requires a sufficiently large number of samples to improve the learning quality of the neural network, and data augmentation adds experimental data on top of the original amount and improves robustness. The data sample of the raw machining signal is a long, periodic time-series signal; following the signal sampling principle, the present invention cuts each continuously sampled signal into multiple short time-series sequences of equal length, which are normalized and used as model input.
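A sketch of the normalization and random train/validation split described above; the min-max scaling is an assumption, since the patent only states that the data are normalized:

```python
import numpy as np

def normalize_per_channel(samples):
    """Min-max normalize each axis of the (n_samples, 2000, 3) tensor to [0, 1]."""
    mn = samples.min(axis=(0, 1), keepdims=True)
    mx = samples.max(axis=(0, 1), keepdims=True)
    return (samples - mn) / (mx - mn + 1e-12)

def train_val_split(samples, labels, val_ratio=0.2, seed=0):
    """Randomly hold out 20% of the C1-C3 samples for validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_val = int(len(samples) * val_ratio)
    val, train = idx[:n_val], idx[n_val:]
    return samples[train], labels[train], samples[val], labels[val]
```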

Each sample contains the three-dimensional signal and the flank wear values of the four flutes. To prevent interference between the wear values of different flutes, the maximum of the four is taken as the wear value of the tool for that pass. The tool wear state is divided into initial wear, normal wear and rapid wear. The present invention defines the wear state according to the actual wear curve of each cutter, which is used to determine the degree of tool wear; the wear degree is divided into three classes of label data, and the labels are converted to one-hot encoding to facilitate the final classification of the tool wear state.
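A sketch of the label preparation described above, assuming the three wear stages are coded as 0, 1 and 2; the two wear thresholds are hypothetical parameters standing in for the boundaries read off each cutter's measured wear curve:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

def label_wear(wear_values, initial_to_normal, normal_to_rapid):
    """Map the max flank-wear value of the four flutes to a wear-stage class.
    The two thresholds are taken from the actual wear curve of each cutter."""
    wear = np.asarray(wear_values)
    labels = np.where(wear < initial_to_normal, 0,            # initial wear
                      np.where(wear < normal_to_rapid, 1, 2)) # normal / rapid wear
    return to_categorical(labels, num_classes=3)              # one-hot encoding
```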

2 Comparative experimental results of deep learning

Table 2 Specific training parameters of the model


In the experiment, the raw signals produced while milling are sampled, cropped and fed into the CABGRUs neural network model, which adaptively extracts the high-dimensional features hidden in the time-series signal. The error between the model output and the true value is computed, and the Adam algorithm reduces the loss and continually updates the network weights so that the output approaches the true value. The present invention compares the CNN, BiGRU and CBLSTMs deep learning networks with the proposed CABGRUs model; the four models use the same training parameters, listed in Table 2.

After training and validation, the deep learning networks yield different loss values and accuracies. FIG. 3, FIG. 4, FIG. 5 and FIG. 6 show the training- and validation-set loss values and the validation accuracy of the CNN, BiGRU, CBLSTMs and CABGRUs models respectively, where the x axis is the number of iterations over the milling data set and the two y axes are the loss value and the validation accuracy. The figures show that the training loss decreases as the number of iterations grows and finally stabilizes, while the validation loss fluctuates periodically: the loss of the CNN and BiGRU models oscillates strongly, whereas CBLSTMs and CABGRUs are comparatively stable; the overall loss keeps decreasing and finally converges without gradient explosion or vanishing, and the networks converge quickly. The validation accuracies of the CNN and BiGRU models are 89.75% and 88.02%, which is relatively low; this indicates that although a single deep learning network can predict the tool wear state, it cannot capture the deeper features hidden in the tool vibration signal because of the limited capacity of the network model. The CABGRUs proposed here outperforms the CNN and BiGRU models because its relatively deep structure helps mine deeper features: the CNN part effectively extracts the local features hidden in the time-series signal and at the same time compresses the length of the feature sequence, making it easier for the subsequent network to learn the dependencies between time-series features and improving predictive ability. Compared with the deep CBLSTMs model, the proposed CABGRUs model achieves higher prediction accuracy. CBLSTMs builds a two-layer BiLSTM network and uses bidirectional LSTMs to access past and future information, i.e. it extracts time-series signal features in both directions and mines richer features; after 22 iterations its validation accuracy stabilizes above 96%, reaching 96.75% after 50 iterations. CABGRUs improves the internal neuron structure on the basis of CBLSTMs and introduces the Attention mechanism, selectively filtering out and focusing on key information from a large amount of information and reducing the loss of key features in long sequences; after 20 iterations its validation accuracy stabilizes above 96%, and after 50 iterations the accuracy is 98.02% with a loss value of 0.0595, showing high network stability. Table 3 lists the validation loss and accuracy of the models.

Table 3 Loss function values and accuracy of model validation


The data of milling cutter C4 are used as the test set of the network model. There are 330 test samples in total: 23 initial-wear samples, 232 normal-wear samples and 75 rapid-wear samples, which are fed at random into the trained CABGRUs network model. The test results show that the proposed CABGRUs model generalizes well; although its test time is longer than that of some of the compared models, the algorithm strikes a good balance between time and accuracy. Table 4 lists the test time per sample and the accuracy on the test set.

Table 4 Test time per sample and accuracy on the model test set


3 Comparison between deep learning and machine learning

Table 5 Prediction accuracy of machine learning and deep learning


To further verify the feasibility of the proposed algorithm, the experimental data collected in the present invention are also used in tool wear monitoring models based on a BP neural network (BPNN), a support vector machine (SVM), a hidden Markov model (HMM) and a fuzzy neural network (FNN). The raw accelerometer signals are denoised with the wavelet threshold method, and after feature extraction and feature screening, features with little noise interference and a strong relation to tool wear are obtained. Feature extraction covers time-domain, frequency-domain and time-frequency-domain features. The Pearson correlation coefficient is used to measure the correlation between each feature and the wear value, and features with a correlation coefficient greater than 0.9 are selected, achieving dimensionality reduction; the selected features are the input of the machine learning models, whose results are compared with those of the proposed CABGRUs network model. The table shows that the accuracies of the traditional machine learning models differ greatly, because the instability of manually extracted features and the construction of the model both affect the prediction results. The prediction accuracy of deep learning is clearly higher than that of the machine learning models BPNN, SVM and HMM; the FNN, however, reaches 94.24%, because it uses a neural network to learn the rules of a fuzzy system and automatically designs and adjusts the parameters of the fuzzy system from the input-output training samples, giving the fuzzy system self-learning and adaptive capabilities. Compared with the other algorithm models, the method of this invention achieves a considerable performance improvement. The CABGRUs model processes a test sample in 8 ms, meeting the requirement of real-time monitoring of tool wear in actual industrial production. Table 5 lists the prediction accuracy of the machine learning and deep learning models.
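The Pearson-based feature screening used for the machine-learning baselines can be sketched as follows (the feature matrix and wear vector are placeholders for the hand-crafted time-, frequency- and time-frequency-domain features and the measured flank wear):

```python
import numpy as np

def select_features_by_pearson(features, wear, threshold=0.9):
    """Keep hand-crafted features whose |Pearson correlation| with wear exceeds the threshold.
    features: (n_samples, n_features); wear: (n_samples,) measured flank wear."""
    keep = []
    for j in range(features.shape[1]):
        r = np.corrcoef(features[:, j], wear)[0, 1]
        if abs(r) > threshold:
            keep.append(j)
    return features[:, keep], keep
```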

In summary, the present invention applies a machine learning method that fuses a CNN and an RNN to the task of real-time tool wear state monitoring, and modifies the network parameters and structure to suit the high noise level and sample redundancy of high-frequency vibration signals. In the preprocessing stage, the time series signal collected by the acceleration sensor is denoised with a wavelet threshold and, combined with data augmentation, the lengthy signal produced by each tool feed is divided into multiple training samples, enlarging the experimental data beyond its original order of magnitude; this filters out noise and improves the robustness of the algorithm. The wear state of the tool is defined from the actual wear curve, which determines the degree of tool wear and improves the accuracy of the data labels. A one-dimensional convolutional neural network is used for local feature extraction, mining rich high-dimensional features from the denoised signal; these better characterize the tool wear information hidden in the raw signal and shorten the training time of the network model. The idea of the Attention mechanism is introduced into the improved CABGRUs network model, which effectively improves the recognition accuracy and generalization performance of real-time monitoring. The feasibility of the method is verified with a real-time tool wear state monitoring system: a signal acquisition unit and a host computer analysis unit were built, and a deep learning framework was used to predict the tool wear state in real time. The experimental results show that the prediction accuracy of the proposed CABGRUs network model reaches 97.58%, better than the traditional machine learning algorithms; at the same time, it can run on the hardware found in most production environments and meets industrial requirements in both recognition accuracy and recognition speed.
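The overall pipeline summarized above (one-dimensional convolution for local features, two stacked bidirectional GRU layers for temporal features, an Attention layer, Dropout, and a Softmax classifier trained with categorical_crossentropy) can be sketched in Keras roughly as follows. This is an illustrative reconstruction, not the exact network of the invention: the filter counts, kernel sizes, pooling size, Dropout rate, optimizer, and the particular attention realization are assumptions; only the 2000-point input, the two convolution layers plus one pooling layer, the 128-unit-per-direction BiGRU layers, the loss function, and the three wear classes come from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cabgrus(seq_len=2000, n_classes=3):
    inp = layers.Input(shape=(seq_len, 1))                 # denoised vibration segment

    # Local feature extraction: 2 x Conv1D + 1 x pooling (filter counts/kernels assumed)
    x = layers.Conv1D(32, kernel_size=7, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, kernel_size=7, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=4)(x)

    # Temporal feature extraction: two stacked BiGRU layers, 128 units per direction
    x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)
    x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(x)

    # One possible realization of the attention layer: score, softmax over time, weighted sum
    score = layers.Dense(1, activation="tanh")(x)           # (batch, T, 1)
    alpha = layers.Softmax(axis=1)(score)                   # importance weights over time steps
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, alpha])

    context = layers.Dropout(0.5)(context)                  # Dropout against over-fitting
    out = layers.Dense(n_classes, activation="softmax")(context)

    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_cabgrus()
model.summary()
```

In use, the denoised 2000-point segments from the preprocessing stage would be fed in with shape (batch, 2000, 1) together with one-hot labels for the three wear states.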

The above description is only a preferred embodiment of the present invention and does not limit the present invention in any form. Any simple modification, equivalent change, or refinement made to the above embodiment according to the technical essence of the present invention, without departing from the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.

Claims (3)

1. A tool wear state monitoring method based on a deep gated recurrent unit neural network, comprising the following steps:
step one: collecting vibration signals generated by the cutter with a sensor, denoising the collected raw vibration signals with a wavelet threshold denoising method, and cutting the vibration signal generated by each cutter feed into short time sequence signals with a length of 2000 points;
step two: local feature extraction from the time sequence signal of a single time step: the short time sequence signal is processed by a one-dimensional convolutional neural network whose convolutional part comprises 2 convolutional layers CONV and 1 pooling layer POOL; the convolutional layers perform neighborhood filtering on the time sequence signal of each dimension by one-dimensional convolution to generate feature maps, and each feature map can be regarded as the convolution of a different filter with the time sequence signal of the current time step; in other words, adaptive feature extraction from the signal is performed by the one-dimensional convolutional neural network, which reduces the input parameters of the subsequent network and increases the computation speed, while the feature maps are reduced to a certain extent in the vector dimension, highlighting the features of the vibration signal and facilitating the subsequent temporal feature extraction by the recurrent network;
step three: temporal feature extraction from the time sequence signal: in order to mine temporal patterns over relatively long intervals in the time series, the improved deep gated recurrent unit neural network CABGRUs is used to extract temporal features of the time sequence signal and to learn the temporal dependencies among the time sequence signals;
the improved deep gated recurrent unit neural network CABGRUs is constructed by stacking two deep bidirectional gated recurrent unit BiGRU networks, and an Attention mechanism is introduced into the CABGRUs network as an additional Attention layer, so that the model gains both the ability to extract time sequence signal features in the forward and reverse directions simultaneously and the ability to selectively learn the key information within the signal features;
each bidirectional BiGRU layer in the improved deep gated recurrent unit neural network CABGRUs comprises 256 neurons, the forward and reverse GRU networks each consisting of 128 neurons; each GRU neuron comprises an update gate and a reset gate, denoted z_t and r_t respectively; \tilde{h}_t denotes the candidate hidden state at time step t, h_t denotes the hidden state at time step t, and x_t denotes the input vector at time step t; the update gate z_t controls how much state information is carried into the current state, and the closer z_t is to 1, the more information from the previous time the current state utilizes; the reset gate r_t controls which state information is removed from the previous state, and the closer r_t is to 0, the smaller the proportion of the output state of the previous time; the formulas are as follows (written out numerically in the sketch after this claim):

$$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$$
$$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$$
$$\tilde{h}_t = \tanh\bigl(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\bigr)$$
$$h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$$

wherein: W_r and U_r denote the weight matrices of the reset gate, W_z and U_z denote the weight matrices of the update gate, W_h and U_h denote the weight matrices of the candidate hidden state, b_r, b_z and b_h denote the corresponding bias vectors, \odot denotes the Hadamard product, i.e. element-wise multiplication of matrices, \sigma denotes the Sigmoid function, and tanh denotes the hyperbolic tangent activation function;
step four: an Attention mechanism is introduced to calculate the importance distribution of the time sequence signal features over consecutive time steps; the Attention mechanism assigns different initialized probability weights to the output vector of each time step of the deep bidirectional gated recurrent unit BiGRU layer and performs a weighted summation, and the final values are obtained through a Sigmoid calculation, so that the proportion of key information is strengthened by increasing its weight and the loss of key information in long time sequence signals is reduced;
step five: training of the network model: the Dropout technique is introduced to prevent the model from over-fitting during training; Softmax is adopted as the activation function and categorical_crossentropy as the loss function of the network model, the time sequence signal features obtained in the preceding steps are classified by wear, and the classification result confirms the wear state of the tool at the current moment.
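For reference, the gate equations of claim 1 can be written out numerically as a single GRU time step. The following NumPy sketch follows the convention stated in the claim (z_t close to 1 keeps more of the previous state); the weight shapes and random initialization are assumptions for illustration only.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, W_r, U_r, b_r, W_z, U_z, b_z, W_h, U_h, b_h):
    """One GRU time step following the gate equations in claim 1."""
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)               # reset gate
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)               # update gate
    h_cand = np.tanh(W_h @ x_t + U_h @ (r_t * h_prev) + b_h)    # candidate hidden state
    # Convention as stated in the claim: z_t -> 1 keeps more of the previous state
    h_t = z_t * h_prev + (1.0 - z_t) * h_cand
    return h_t

# Hypothetical sizes: input dim 64 (CNN feature), hidden dim 128 (one direction of a BiGRU)
rng = np.random.default_rng(0)
d_in, d_h = 64, 128
params = [rng.normal(scale=0.1, size=s) for s in
          [(d_h, d_in), (d_h, d_h), (d_h,),    # reset gate weights and bias
           (d_h, d_in), (d_h, d_h), (d_h,),    # update gate weights and bias
           (d_h, d_in), (d_h, d_h), (d_h,)]]   # candidate state weights and bias
h = np.zeros(d_h)
x = rng.normal(size=d_in)
h = gru_step(x, h, *params)
print(h.shape)   # (128,)
```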
2. The tool wear state monitoring method based on the deep gated recurrent unit neural network as claimed in claim 1, wherein the vibration signal generated by each cutter feed is cut into short time sequence signals of length 2000, specifically: 100000 consecutive points are intercepted from each sampled signal and divided into 50 samples of 2000 points each, the 50 samples corresponding to the same wear state label.
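A rough sketch of the preprocessing in claims 1 and 2 (wavelet threshold denoising followed by splitting 100000 consecutive points into 50 segments of 2000 points) is given below. It assumes the PyWavelets package; the wavelet basis ('db4'), the decomposition level, and the universal-threshold rule are assumptions, since the claims do not fix them.

```python
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Soft wavelet-threshold denoising of a 1-D vibration signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients (assumed rule)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def segment(signal: np.ndarray, n_points: int = 100000, seg_len: int = 2000) -> np.ndarray:
    """Take 100000 consecutive points and split them into 50 samples of 2000 points."""
    clip = signal[:n_points]
    return clip.reshape(-1, seg_len)          # shape (50, 2000), all sharing one wear label

# Hypothetical usage on a synthetic feed signal
raw = np.random.default_rng(0).normal(size=120000)
samples = segment(wavelet_denoise(raw))
print(samples.shape)   # (50, 2000)
```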
3. The tool wear state monitoring method based on the deep gated recurrent unit neural network as claimed in claim 1, wherein the equations of the Attention mechanism in step four are as follows:

$$u_t = \tanh(W_w h_t + b_w)$$
$$\alpha_t = \frac{\exp(u_t^{\top} u_w)}{\sum_t \exp(u_t^{\top} u_w)}$$
$$v = \sum_t \alpha_t h_t$$

wherein h_t denotes the output feature vector of the BiGRU layer at time step t, u_t denotes the hidden-layer representation of h_t obtained through a neural network layer with weight matrix W_w and bias b_w, u_w denotes a randomly initialized context vector, \alpha_t denotes the importance weight of u_t normalized by the Softmax function, and v denotes the feature vector of the final information; u_w is generated randomly during training, and the output value v of the Attention layer is finally mapped by Softmax to obtain the real-time classification result of the tool wear state.
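A minimal NumPy sketch of the attention equations in claim 3 follows; the dimensions and the random initialization of W_w, b_w, and the context vector u_w are illustrative assumptions.

```python
import numpy as np

def attention_pool(H: np.ndarray, W_w: np.ndarray, b_w: np.ndarray, u_w: np.ndarray) -> np.ndarray:
    """Attention pooling over BiGRU outputs, following the equations of claim 3.

    H: (T, d) matrix whose rows are the BiGRU output vectors h_t.
    Returns v, the attention-weighted summary vector of shape (d,).
    """
    U = np.tanh(H @ W_w.T + b_w)                     # u_t = tanh(W_w h_t + b_w)
    scores = U @ u_w                                 # u_t^T u_w for every time step
    scores -= scores.max()                           # numerical stability before exp
    alpha = np.exp(scores) / np.exp(scores).sum()    # Softmax-normalized importance weights
    return alpha @ H                                 # v = sum_t alpha_t h_t

# Hypothetical usage: 500 time steps, 256-dimensional BiGRU outputs, 128-dimensional attention space
rng = np.random.default_rng(0)
T, d, d_a = 500, 256, 128
H = rng.normal(size=(T, d))
v = attention_pool(H, rng.normal(size=(d_a, d)), rng.normal(size=d_a), rng.normal(size=d_a))
print(v.shape)   # (256,)
```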
CN202010077631.9A 2020-01-31 2020-01-31 Cutter wear state monitoring method based on depth gate control circulation unit neural network Active CN111325112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010077631.9A CN111325112B (en) 2020-01-31 2020-01-31 Cutter wear state monitoring method based on depth gate control circulation unit neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010077631.9A CN111325112B (en) 2020-01-31 2020-01-31 Cutter wear state monitoring method based on depth gate control circulation unit neural network

Publications (2)

Publication Number Publication Date
CN111325112A CN111325112A (en) 2020-06-23
CN111325112B true CN111325112B (en) 2023-04-07

Family

ID=71167049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010077631.9A Active CN111325112B (en) 2020-01-31 2020-01-31 Cutter wear state monitoring method based on depth gate control circulation unit neural network

Country Status (1)

Country Link
CN (1) CN111325112B (en)

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814728A (en) * 2020-07-22 2020-10-23 同济大学 Recognition method and storage medium for tool wear state of CNC machine tools
CN112001527B (en) * 2020-07-29 2024-01-30 中国计量大学 Industrial production process target data prediction method of multi-feature fusion depth neural network
CN112070208B (en) * 2020-08-05 2022-08-30 同济大学 Tool wear prediction method based on encoder-decoder stage attention mechanism
CN111879397B (en) * 2020-09-01 2022-05-13 国网河北省电力有限公司检修分公司 Fault Diagnosis Method for Energy Storage Mechanism of High Voltage Circuit Breaker
CN112033463B (en) * 2020-09-02 2022-09-06 哈尔滨工程大学 Nuclear power equipment state evaluation and prediction integrated method and system
CN112115550B (en) * 2020-09-13 2022-04-19 西北工业大学 Prediction method of aircraft maneuvering trajectory based on Mogrifier-BiGRU
CN112149355B (en) * 2020-09-27 2023-08-22 浙江科技学院 Soft measurement method based on semi-supervised dynamic feedback stack noise reduction self-encoder model
CN112200032B (en) * 2020-09-28 2023-05-30 辽宁石油化工大学 An Attention Mechanism-Based On-Line Monitoring Method for the Mechanical Status of High-Voltage Circuit Breakers
CN114418158A (en) * 2020-10-10 2022-04-29 中国移动通信集团设计院有限公司 Prediction method of cell network load index based on attention mechanism learning network
CN112329625A (en) * 2020-11-05 2021-02-05 中国科学技术大学 Cutter wear state real-time identification method and model based on deep learning
CN112629854B (en) * 2020-11-25 2022-08-05 西安交通大学 A Bearing Fault Classification Method Based on Neural Network Attention Mechanism
CN112668507A (en) * 2020-12-31 2021-04-16 南京信息工程大学 Sea clutter prediction method and system based on hybrid neural network and attention mechanism
CN112798956A (en) * 2020-12-31 2021-05-14 江苏国科智能电气有限公司 Wind turbine fault diagnosis method based on multi-resolution sequential cyclic neural network
CN112712063B (en) * 2021-01-18 2022-04-26 贵州大学 Tool wear value monitoring method, electronic device and storage medium
CN112785092B (en) * 2021-03-09 2024-05-07 中铁电气化局集团有限公司 Switch residual life prediction method based on self-adaptive deep feature extraction
CN113204640B (en) * 2021-04-02 2023-05-30 南京邮电大学 Text classification method based on attention mechanism
CN113177584B (en) * 2021-04-19 2022-10-28 合肥工业大学 Compound fault diagnosis method based on zero sample learning
CN113051689B (en) * 2021-04-25 2022-03-25 石家庄铁道大学 Bearing residual service life prediction method based on convolution gating circulation network
CN113286309B (en) * 2021-05-18 2023-02-07 合肥工业大学 Heterogeneous communication method and system based on CSI
CN113414638B (en) * 2021-06-04 2023-02-10 西北工业大学 Variable working condition milling cutter wear state prediction method based on milling force time sequence diagram deep learning
CN113305645B (en) * 2021-06-22 2022-07-15 重庆邮电大学工业互联网研究院 Numerical control machine tool cutter residual life prediction method based on hybrid neural model
CN113359577B (en) * 2021-07-02 2023-08-11 中国科学院空间应用工程与技术中心 Ultrasonic motor embedded state monitoring and fault diagnosis system and method
CN113657454B (en) * 2021-07-23 2024-02-23 杭州安脉盛智能技术有限公司 Nuclear power rotating machinery state monitoring method based on autoregressive BiGRU
CN113537472B (en) * 2021-07-26 2024-04-09 北京计算机技术及应用研究所 Construction method of bidirectional recurrent neural network with low calculation and storage consumption
CN113822139B (en) * 2021-07-27 2023-08-25 河北工业大学 Equipment fault diagnosis method based on improved 1DCNN-BiLSTM
CN113664612A (en) * 2021-08-24 2021-11-19 沈阳工业大学 Numerical control machine tool milling cutter abrasion real-time monitoring method based on deep convolutional neural network
CN113780153B (en) * 2021-09-07 2024-08-02 北京理工大学 Cutter wear monitoring and predicting method
CN113971489A (en) * 2021-10-25 2022-01-25 哈尔滨工业大学 Method and system for predicting remaining service life based on hybrid neural network
CN113779101B (en) * 2021-11-10 2022-03-18 北京航空航天大学 Time sequence set recommendation system and method based on deep neural network
CN114417697A (en) * 2021-12-07 2022-04-29 山东大学 Neural network-based TBM hob abrasion real-time prediction method and system
CN114662219B (en) * 2022-03-23 2024-08-30 清华大学 Wheel wear prediction network model training method based on wavelet network model
CN114925807B (en) * 2022-04-11 2024-06-11 大连理工大学 Method for marking abrasion data of impeller machining cutter of centrifugal compressor
CN114972874A (en) * 2022-06-07 2022-08-30 北京信息科技大学 Three-dimensional human body classification and generation method and system for complex action sequence
CN115186406A (en) * 2022-07-01 2022-10-14 江苏西格数据科技有限公司 Tool wear prediction method and device based on transfer learning and computer application
CN114915496B (en) * 2022-07-11 2023-01-10 广州番禺职业技术学院 Network intrusion detection method and device based on time weight and deep neural network
CN115299962A (en) * 2022-08-12 2022-11-08 山东大学 An Anesthesia Depth Monitoring Method Based on Bidirectional Gated Loop Unit and Attention Mechanism
CN115446663B (en) * 2022-10-14 2023-10-20 常州先进制造技术研究所 Tool wear status monitoring method and application based on physics-guided deep learning network
CN115582733A (en) * 2022-10-28 2023-01-10 大连工业大学 A Milling Cutter Wear Monitoring Method Based on Residual Structure and Convolutional Neural Network
CN116308304B (en) * 2023-05-24 2023-08-25 山东建筑大学 New energy intelligent operation and maintenance method and system based on meta learning concept drift detection
CN117272138B (en) * 2023-09-15 2024-07-09 东华理工大学 Geomagnetic data denoising method and system based on reference channel data constraint and deep learning
CN117633528A (en) * 2023-11-21 2024-03-01 元始智能科技(南通)有限公司 A manufacturing workshop energy consumption prediction technology based on small sample data repair and enhancement
CN119670529A (en) * 2023-12-27 2025-03-21 北京市水务规划研究院 Methods for predicting rainfall water level
CN118133000A (en) * 2024-05-08 2024-06-04 山东交通学院 Floating offshore wind turbine blade damage detection method based on Conv1d-GRU-MHA network
CN118345334B (en) * 2024-06-17 2024-08-23 华兴源创(成都)科技有限公司 Film thickness correction method and device and computer equipment
CN119129353A (en) * 2024-11-11 2024-12-13 中国石油大学(华东) A hybrid convolutional neural network construction method for pipeline damage calculation
CN119197651B (en) * 2024-11-22 2025-02-25 江苏河海工程技术有限公司 A method and system for intelligent detection of water conservancy gate status
CN119252401B (en) * 2024-12-06 2025-02-25 西南交通大学 Method for predicting fretting wear of milling material surface


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7397421B2 (en) * 2004-04-22 2008-07-08 Smith Gregory C Method for detecting acoustic emission using a microwave Doppler radar detector

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5579232A (en) * 1993-03-29 1996-11-26 General Electric Company System and method including neural net for tool break detection
GB0809516D0 (en) * 2006-02-06 2008-07-02 Smith International Method of real-time drilling simulation
US8781982B1 (en) * 2011-09-23 2014-07-15 Lockheed Martin Corporation System and method for estimating remaining useful life
CN103962888A (en) * 2014-05-12 2014-08-06 西北工业大学 Tool abrasion monitoring method based on wavelet denoising and Hilbert-Huang transformation
WO2017192821A1 (en) * 2016-05-06 2017-11-09 Massachusetts Institute Of Technology Method and apparatus for efficient use of cnc machine shaping tool including cessation of use no later than the onset of tool deterioration by monitoring audible sound during shaping
CN109145319A (en) * 2017-06-16 2019-01-04 哈尔滨理工大学 Key equipment cutting tool method for predicting residual useful life based on deep neural network
CN107584334A (en) * 2017-08-25 2018-01-16 南京航空航天大学 A kind of complex structural member numerical control machining cutter status real time monitor method based on deep learning
CN108319962A (en) * 2018-01-29 2018-07-24 安徽大学 A kind of Tool Wear Monitoring method based on convolutional neural networks
CN108520125A (en) * 2018-03-29 2018-09-11 上海理工大学 A method and system for predicting tool wear state
CN108960077A (en) * 2018-06-12 2018-12-07 南京航空航天大学 A kind of intelligent failure diagnosis method based on Recognition with Recurrent Neural Network
CN108747590A (en) * 2018-06-28 2018-11-06 哈尔滨理工大学 A kind of tool wear measurement method based on rumble spectrum and neural network
CN109571141A (en) * 2018-11-01 2019-04-05 北京理工大学 A kind of Monitoring Tool Wear States in Turning based on machine learning
CN109822399A (en) * 2019-04-08 2019-05-31 浙江大学 Prediction method of tool wear state of CNC machine tool based on parallel deep neural network
CN110000610A (en) * 2019-04-17 2019-07-12 哈尔滨理工大学 A kind of Tool Wear Monitoring method based on multi-sensor information fusion and depth confidence network
CN110188637A (en) * 2019-05-17 2019-08-30 西安电子科技大学 A method of behavior recognition technology based on deep learning
CN110153802A (en) * 2019-07-04 2019-08-23 西南交通大学 A tool wear state identification method based on the joint model of convolutional neural network and long-short-term memory neural network
CN110509109A (en) * 2019-07-16 2019-11-29 西安交通大学 Tool Wear Monitoring Method Based on Multi-scale Deep Convolutional Recurrent Neural Network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tang Liping et al. Research on feature parameter extraction and recognition methods for tool wear state. Modular Machine Tool & Automatic Manufacturing Technique, 2019, No. 10, full text. *

Also Published As

Publication number Publication date
CN111325112A (en) 2020-06-23

Similar Documents

Publication Publication Date Title
CN111325112B (en) Cutter wear state monitoring method based on depth gate control circulation unit neural network
CN112435363A (en) Cutter wear state real-time monitoring method
Zhao et al. Wavelet Denoised-ResNet CNN and LightGBM method to predict forex rate of change
CN108445752B (en) An ensemble modeling method of random weight neural network for adaptive selection of deep features
CN108319962A (en) A kind of Tool Wear Monitoring method based on convolutional neural networks
CN110619352A (en) Typical infrared target classification method based on deep convolutional neural network
CN109685653A (en) A method of fusion deepness belief network and the monitoring of the credit risk of isolated forest algorithm
CN112801059B (en) Graph convolution network system and 3D object detection method based on graph convolution network system
CN109492748B (en) A method for establishing medium and long-term load forecasting model of power system based on convolutional neural network
CN102521671A (en) Ultrashort-term wind power prediction method
CN114218292B (en) A Multivariate Time Series Similarity Retrieval Method
CN106845681A (en) A kind of stock trend forecasting method of application depth learning technology
CN115630314A (en) Electroencephalogram signal classification method based on improved inclusion network motor imagery
CN117726939B (en) Hyperspectral image classification method based on multi-feature fusion
Ma et al. Automatic recognition of machining features based on point cloud data using convolution neural networks
CN117671666A (en) A target recognition method based on adaptive graph convolutional neural network
Andriyanto et al. Sectoral stock prediction using convolutional neural networks with candlestick patterns as input images
Ren et al. Pulses classification based on sparse auto-encoders neural networks
CN116484271A (en) A Significant Wave Height Early Warning Method Based on Empirical Mode Decomposition and Deep Learning
Tekkali et al. Assessing CNN’s performance with multiple optimization functions for credit card fraud detection
CN118296459A (en) Automatic change system of line tool state monitoring network training model
CN108268461A (en) A kind of document sorting apparatus based on hybrid classifer
Ma et al. An end-to-end deep learning approach for tool wear condition monitoring
Li et al. Improved artificial rabbit optimization and its application in multichannel signal denoising
Wang Predicting the rise and fall of Shanghai composite index based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant