CN111191726B - A Fault Classification Method Based on Weakly Supervised Learning Multilayer Perceptron - Google Patents
- Publication number
- CN111191726B (application CN201911418196.5A)
- Authority
- CN
- China
- Prior art keywords
- sample
- network
- label
- layer
- mlp
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
Abstract
The invention discloses a process data fault classification method based on a weakly supervised learning multilayer perceptron. The method comprises a supervised classification network, consisting of a multilayer perceptron with BatchNorm layers, Dropout layers, and a Softmax output layer, and a Gaussian mixture model that captures the inaccuracy of the sample labels. The multilayer perceptron learns a feature representation of the data from the inaccurately labeled samples; the Gaussian mixture model then performs unsupervised clustering on the features extracted by the multilayer perceptron. The clustering result is used to estimate the relation between the inaccurate labels of each class and the latent true labels, i.e. the label probability transition matrix, and the estimated matrix is used to correct the loss function of the classification network for a second training, which improves the network's classification accuracy on inaccurately labeled samples. The method is applicable to fault classification with inaccurate labels, i.e. the situation in which part of the industrial process data samples are mislabeled.
Description
Technical Field
The invention belongs to the field of industrial process fault diagnosis and classification, and particularly relates to a fault classification method based on a weak supervision learning multilayer perceptron.
Background
In industrial process monitoring, once a fault has been detected, the fault information must be analyzed further. Fault classification is a key link in this analysis: it determines the type of fault and thereby facilitates the recovery of the industrial process.
Conventional fault classification assumes that the labels of the collected data samples are accurate, so that model training can proceed. In practice, however, the labels of industrial process data are generated through external knowledge bases, rule bases, manual calibration, and the like, so sample labels may be inaccurate. Moreover, inaccurately labeled samples are easier and cheaper to obtain than accurately labeled ones. Label inaccuracy has thus become a characteristic of the data that models cannot ignore. Performing weakly supervised learning on inaccurately labeled samples can therefore improve a model's classification accuracy on fault samples in practice.
Disclosure of Invention
Aiming at the problem that sample labels in current industrial processes may be inaccurate, the invention provides a fault classification method based on a weakly supervised learning multilayer perceptron.
The aim of the invention is achieved by the following technical scheme: a process data fault classification method based on a weakly supervised learning multilayer perceptron, the perceptron comprising a two-hidden-layer perceptron (MLP), a Softmax output layer, and a Gaussian mixture model (GMM). The process data fault classification method specifically comprises the following steps:
step one: collecting samples containing inaccurate labels in historical industrial processes as training data setsWherein (1)>For inaccurate tag data samples, +.>As a label for the sample,n represents the number of samples of the training data set, and K represents the number of sample categories.
Step two: standardize the training data set D collected in step one, i.e. map each variable of the labeled sample set X to zero mean and unit variance to obtain X_std, and convert each sample label of the label set Y into a K-dimensional one-hot vector, obtaining the standardized data set D_std = {X_std, Y_std}.
Step three: take the new data set D_std as input and perform the first supervised training of the MLP network, obtaining at the Softmax output layer the posterior probability that each sample of X_std belongs to its label ỹ.
Step four: take the posterior probabilities obtained in step three as the input of the Gaussian mixture model GMM and train the Gaussian mixture model; use the parameters {α̂, μ̂, Σ̂} of the trained Gaussian mixture model to estimate the label probability transition matrix T, obtaining the estimated matrix T̂.
Step five: correct, according to T̂, the loss function with which the step-three MLP fits the inaccurately labeled samples; take the data set D_std obtained in step two as input and perform the second supervised training of the MLP network, thereby completing the weakly supervised learning and obtaining the trained WS-MLP network;
step six: collecting new industrial process data of unknown fault class, normalizing the process data according to the method of the step two to obtain a data set d std Inputting the sample into the WS-MLP network trained in the step five, solving the posterior probability of each fault category corresponding to the sample, and taking the category with the maximum posterior probability as the category of the sample to realize the fault classification of the sample.
Further, the third step specifically includes the following steps:
(3.1) Construct the MLP network, which consists of a first hidden layer, a BatchNorm layer, a Dropout layer, a second hidden layer, a BatchNorm layer, a Dropout layer, and a Softmax layer connected in sequence. The weight matrices and bias vectors of the first and second hidden layers are W_1, b_1 and W_2, b_2 respectively, and the weight matrix and bias vector from the second hidden layer to the Softmax layer are W_3, b_3. These network parameters are denoted θ = {W_1, b_1, W_2, b_2, W_3, b_3}.
(3.2) Take the standardized sample set D_std as input and train the MLP network under supervision with the cross-entropy loss function

L(θ) = −(1/N) Σ_{i=1}^{N} ỹ_i^T log h(x_i),

where h(x_i) is the Softmax representation of the last layer of the MLP network for sample x_i.
This loss function adjusts the parameters of the entire MLP network through the back-propagation (BP) algorithm; after the loss converges over repeated iterations, the parameters of the whole network are obtained and the training is complete.
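The forward pass and loss of (3.1)-(3.2) can be sketched as follows (a minimal NumPy illustration; BatchNorm, Dropout, and the BP update are omitted, and the toy shapes are assumptions, not the embodiment's configuration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mlp_forward(X, W1, b1, W2, b2, W3, b3):
    # Two hidden layers with ReLU, then the Softmax output layer h(x).
    h1 = np.maximum(0.0, X @ W1 + b1)
    h2 = np.maximum(0.0, h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

def cross_entropy(H, Y):
    # L(theta) = -(1/N) * sum_i y_i^T log h(x_i)
    return -np.mean(np.sum(Y * np.log(H + 1e-12), axis=1))

# Toy parameters: 2 inputs, 3 hidden units per layer, 2 classes.
X = np.array([[1.0, 2.0]])
W1, b1 = np.ones((2, 3)), np.zeros(3)
W2, b2 = np.ones((3, 3)), np.zeros(3)
W3, b3 = np.ones((3, 2)), np.zeros(2)
H = mlp_forward(X, W1, b1, W2, b2, W3, b3)  # identical logits, so both classes get 0.5
loss = cross_entropy(H, np.array([[1.0, 0.0]]))
```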
Further, the fourth step specifically includes the following steps:
(4.1) Each class of the inaccurately labeled sample set consists of correctly labeled samples and mislabeled samples. Make the following assumptions: first, the generation of inaccurate labels is independent of the input, i.e. every sample of a given class has the same probability of being marked as any other given class; second, the MLP network has perceptual consistency, i.e. within each class its feature representations of the correctly labeled samples and of the mislabeled samples each obey a Gaussian distribution.

From these assumptions one obtains, for the samples labeled as class i,

p(h(x) | ỹ = e_i, θ) = p(y = e_i | ỹ = e_i) N(h(x); μ_i, Σ_i) + p(y ≠ e_i | ỹ = e_i) N(h(x); μ_ī, Σ_ī),

where h(x) is the last-layer MLP representation of a sample x of the set D_std, y is the latent true label of the sample, p(·) denotes probability, e_i, i ∈ {1, 2, …, K}, denotes the vector in R^K whose i-th element is 1 and whose other elements are 0, θ denotes all weight-matrix and bias-vector parameters of the MLP network, μ and Σ denote respectively the mean vector and covariance matrix of an unknown Gaussian distribution, N(·; μ_i, Σ_i) and N(·; μ_ī, Σ_ī) denote the Gaussian densities of the correctly labeled class-i samples and of the mislabeled samples respectively, and T denotes the label probability transition matrix, defined by T_ik = p(ỹ = e_k | y = e_i).
(4.2) For each sample subset of class i, D_std^i, model the last-layer representations with a two-component Gaussian mixture model:

p(h(x_i)) = α_i N(h(x_i); μ_i, Σ_i) + α_ī N(h(x_i); μ_ī, Σ_ī),

where x_i denotes sample data belonging to the subset D_std^i, ī denotes the categories other than category i, and α_i, α_ī are the mixing coefficients of the correctly labeled and the mislabeled components respectively.
(4.3) Establish the two-component Gaussian mixture model and complete its parameter estimation with the expectation-maximization (EM) algorithm, i.e. solve for the parameters {α, μ, Σ} that maximize the likelihood of the observed representations.

In the expectation step (E step), calculate the Q function

Q(θ_GMM, θ_GMM^(t)) = E_z[log p(h(x), z | θ_GMM) | h(x), θ_GMM^(t)],

where t is the number of iterations.

Calculate the responsibility γ_nm of mixture component m for the observed data h(x_i^(n)):

γ_nm = α_m N(h(x_i^(n)); μ_m, Σ_m) / Σ_{m'} α_{m'} N(h(x_i^(n)); μ_{m'}, Σ_{m'}),

where x_i^(n) denotes the n-th sample of class i.

In the maximization step (M step), estimate the Gaussian mean μ_m and the mixing coefficient α_m:

μ_m = Σ_n γ_nm h(x_i^(n)) / Σ_n γ_nm,   α_m = (1/S_i) Σ_n γ_nm,

where S_i denotes the number of samples of D_std^i.

Alternately iterate the E step and the M step until the model parameters converge or a preset maximum number of iterations is reached, yielding the estimated parameters {α̂, μ̂, Σ̂}.
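The two-component EM iteration of step (4.3) can be sketched on one-dimensional toy data (the embodiment clusters the multivariate MLP representations; the scalar data, the initialization scheme, and the variance update below are simplifying assumptions):

```python
import numpy as np

def gauss(x, mu, var):
    # 1-D Gaussian density N(x; mu, var)
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def em_gmm2(x, iters=100):
    # Fit a two-component 1-D Gaussian mixture by alternating E and M steps.
    mu = np.array([x.min(), x.max()])           # spread the initial means apart
    var = np.array([x.var(), x.var()]) + 1e-6
    alpha = np.array([0.5, 0.5])
    for _ in range(iters):
        # E step: responsibility gamma[n, m] of component m for sample n
        dens = np.stack([alpha[m] * gauss(x, mu[m], var[m]) for m in range(2)], axis=1)
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M step: re-estimate means, variances, and mixing coefficients
        Nm = gamma.sum(axis=0)
        mu = (gamma * x[:, None]).sum(axis=0) / Nm
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nm + 1e-6
        alpha = Nm / len(x)
    return alpha, mu, var

# 80% "correctly labeled" component near 0, 20% "mislabeled" component near 5.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.3, 800), rng.normal(5.0, 0.3, 200)])
alpha, mu, var = em_gmm2(x)  # the mixing coefficients approximate the label proportions
```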
(4.4) From the fitted mixture models, solve for the mixing coefficients α_ik according to the formula above, and use them to obtain the estimate T̂ of the label probability transition matrix T:

T̂_ik = α_ik,

where T̂_ik denotes the element in the i-th row and k-th column of the estimation matrix T̂.
Further, in the fifth step, the second training of the MLP network uses the corrected loss function:

L_c(θ) = −(1/N) Σ_{i=1}^{N} ỹ_i^T log(T̂^T h(x_i))
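One common form of such a correction, the "forward" correction, is assumed here rather than taken verbatim from the patent: it mixes the network output with the estimated transition matrix before comparing against the noisy labels.

```python
import numpy as np

def cross_entropy(H, Y):
    # Plain cross entropy between one-hot labels Y and probabilities H.
    return -np.mean(np.sum(Y * np.log(H + 1e-12), axis=1))

def corrected_cross_entropy(H, Y_noisy, T_hat):
    # Forward correction: p(y_noisy | x) = T_hat^T h(x), written row-wise as H @ T_hat.
    return cross_entropy(H @ T_hat, Y_noisy)

H = np.array([[0.7, 0.3], [0.2, 0.8]])              # Softmax outputs h(x)
Y = np.array([[1.0, 0.0], [0.0, 1.0]])              # noisy one-hot labels
loss_id = corrected_cross_entropy(H, Y, np.eye(2))  # identity T_hat reduces to plain CE
```

With an identity transition matrix (no label noise) the corrected loss coincides with the ordinary cross entropy, which is the sanity check one would expect of any such correction.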
compared with the prior art, the method has the beneficial effects that modeling can be performed when an inaccurate scene of a label sample label is obtained, and the model can improve the classification precision of the inaccurate label sample by performing label probability transition matrix evaluation on the inaccurate label sample and correcting the loss function of the classification network to finish weak supervision learning.
Drawings
FIG. 1 is a Tennessee Eastman (TE) process flow diagram;
FIG. 2 is a graph comparing the classification accuracy of the MLP network and the weakly supervised learning multilayer perceptron (WS-MLP) on 9 classes of TE process fault conditions under 5 label-noise ratios.
Detailed Description
The fault classification method based on the weakly supervised learning multilayer perceptron is described in further detail below with reference to specific embodiments.
A process data fault classification method based on a weakly supervised learning multilayer perceptron, the perceptron comprising a two-hidden-layer perceptron MLP, a Softmax output layer, and a Gaussian mixture model GMM. The process data fault classification method specifically comprises the following steps:
step one: collecting samples containing inaccurate labels in historical industrial processesThe book is used as a training data setWherein (1)>For inaccurate tag data samples, +.>As a label for the sample,n represents the number of samples of the training data set, and K represents the number of sample categories.
Step two: standardize the training data set D collected in step one, i.e. map each variable of the labeled sample set X to zero mean and unit variance to obtain X_std, and convert each sample label of the label set Y into a K-dimensional one-hot vector, obtaining the standardized data set D_std = {X_std, Y_std}.
Step three: take the new data set D_std as input and perform the first supervised training of the MLP network, obtaining at the Softmax output layer the posterior probability that each sample of X_std belongs to its label ỹ. The process specifically comprises the following substeps:
(3.1) Construct the MLP network, which consists of a first hidden layer, a BatchNorm layer, a Dropout layer, a second hidden layer, a BatchNorm layer, a Dropout layer, and a Softmax layer connected in sequence. The weight matrices and bias vectors of the first and second hidden layers are W_1, b_1 and W_2, b_2 respectively, and the weight matrix and bias vector from the second hidden layer to the Softmax layer are W_3, b_3. These network parameters are denoted θ = {W_1, b_1, W_2, b_2, W_3, b_3}.
(3.2) Take the standardized sample set D_std as input and train the MLP network under supervision with the cross-entropy loss function

L(θ) = −(1/N) Σ_{i=1}^{N} ỹ_i^T log h(x_i),

where h(x_i) is the Softmax representation of the last layer of the MLP network for sample x_i.
This loss function adjusts the parameters of the entire MLP network through the back-propagation (BP) algorithm; after the loss converges over repeated iterations, the parameters of the whole network are obtained and the training is complete.
Step four: take the posterior probabilities obtained in step three as the input of the Gaussian mixture model GMM and train the Gaussian mixture model; use the parameters {α̂, μ̂, Σ̂} of the trained Gaussian mixture model to estimate the label probability transition matrix T, obtaining the estimated matrix T̂. In general the label probability transition matrix is difficult to obtain directly; however, under the assumptions that the generation of inaccurate labels is independent of the input and that the MLP network has perceptual consistency, the result of the first MLP training can be clustered without supervision by the Gaussian mixture model, so that the mixing coefficients learned by the GMM approximate the elements of the label probability transition matrix. The step specifically comprises:
(4.1) Each class of the inaccurately labeled sample set consists of correctly labeled samples and mislabeled samples. Make the following assumptions: first, the generation of inaccurate labels is independent of the input, i.e. every sample of a given class has the same probability of being marked as any other given class; second, the MLP network has perceptual consistency, i.e. within each class its feature representations of the correctly labeled samples and of the mislabeled samples each obey a Gaussian distribution.

From these assumptions one obtains, for the samples labeled as class i,

p(h(x) | ỹ = e_i, θ) = p(y = e_i | ỹ = e_i) N(h(x); μ_i, Σ_i) + p(y ≠ e_i | ỹ = e_i) N(h(x); μ_ī, Σ_ī),

where h(x) is the last-layer MLP representation of a sample x of the set D_std, y is the latent true label of the sample, p(·) denotes probability, e_i, i ∈ {1, 2, …, K}, denotes the vector in R^K whose i-th element is 1 and whose other elements are 0, θ denotes all weight-matrix and bias-vector parameters of the MLP network, μ and Σ denote respectively the mean vector and covariance matrix of an unknown Gaussian distribution, N(·; μ_i, Σ_i) and N(·; μ_ī, Σ_ī) denote the Gaussian densities of the correctly labeled class-i samples and of the mislabeled samples respectively, and T denotes the label probability transition matrix, defined by T_ik = p(ỹ = e_k | y = e_i).
(4.2) For each sample subset of class i, D_std^i, model the last-layer representations with a two-component Gaussian mixture model:

p(h(x_i)) = α_i N(h(x_i); μ_i, Σ_i) + α_ī N(h(x_i); μ_ī, Σ_ī),

where x_i denotes sample data belonging to the subset D_std^i, ī denotes the categories other than category i, and α_i, α_ī are the mixing coefficients of the correctly labeled and the mislabeled components respectively.
(4.3) Establish the two-component Gaussian mixture model and complete its parameter estimation with the expectation-maximization (EM) algorithm, i.e. solve for the parameters {α, μ, Σ} that maximize the likelihood of the observed representations.

In the expectation step (E step), calculate the Q function

Q(θ_GMM, θ_GMM^(t)) = E_z[log p(h(x), z | θ_GMM) | h(x), θ_GMM^(t)],

where t is the number of iterations.

Calculate the responsibility γ_nm of mixture component m for the observed data h(x_i^(n)):

γ_nm = α_m N(h(x_i^(n)); μ_m, Σ_m) / Σ_{m'} α_{m'} N(h(x_i^(n)); μ_{m'}, Σ_{m'}),

where x_i^(n) denotes the n-th sample of class i.

In the maximization step (M step), estimate the Gaussian mean μ_m and the mixing coefficient α_m:

μ_m = Σ_n γ_nm h(x_i^(n)) / Σ_n γ_nm,   α_m = (1/S_i) Σ_n γ_nm,

where S_i denotes the number of samples of D_std^i.

Alternately iterate the E step and the M step until the model parameters converge or a preset maximum number of iterations is reached, yielding the estimated parameters {α̂, μ̂, Σ̂}.
(4.4) From the fitted mixture models, solve for the mixing coefficients α_ik according to the formula above, and use them to obtain the estimate T̂ of the label probability transition matrix T:

T̂_ik = α_ik,

where T̂_ik denotes the element in the i-th row and k-th column of the estimation matrix T̂.
Step five: correct, according to T̂, the loss function with which the step-three MLP fits the inaccurately labeled samples; take the data set D_std obtained in step two as input and perform the second supervised training of the MLP network, thereby completing the weakly supervised learning and obtaining the trained WS-MLP network.
The second training of the MLP network uses the corrected loss function:

L_c(θ) = −(1/N) Σ_{i=1}^{N} ỹ_i^T log(T̂^T h(x_i))
step six: collecting new unknown fault classesOther industrial process data, the process data is standardized according to the method of the second step, and a data set d is obtained std Inputting the sample into the WS-MLP network trained in the step five, solving the posterior probability of each fault category corresponding to the sample, and taking the category with the maximum posterior probability as the category of the sample to realize the fault classification of the sample.
To evaluate the classification effect of the fault classification model, the F1 index of a given fault class is defined by the following calculation formulas:

precision = TP / (TP + FP)
recall = TP / (TP + FN)
F1 = 2 · precision · recall / (precision + recall)

where TP is the number of samples of this fault class classified correctly, FP is the number of samples of other classes misclassified into this fault class, and FN is the number of samples of this fault class misclassified into other classes.
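The evaluation formulas above reduce to a few lines (the helper name and toy counts are assumptions):

```python
def f1_score(tp, fp, fn):
    # F1 is the harmonic mean of precision and recall.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 fault samples classified correctly, 2 false positives, 2 misses:
score = f1_score(tp=8, fp=2, fn=2)  # precision = recall = 0.8, so F1 = 0.8
```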
Examples
The performance of the fault classification method based on the weakly supervised learning multilayer perceptron is described below in connection with a specific TE process example. The TE process is a standard data set commonly used in the field of fault diagnosis and fault classification; the whole data set includes 53 process variables, and the process flow is shown in FIG. 1. The process consists of 5 operation units: a gas-liquid separation tower, a continuous stirred-tank reactor, a partial condenser, a centrifugal compressor, and a reboiler.
The 9 faults in the TE process were selected, and the specific cases of the 9 selected faults are given in Table 1.
Table 1: TE process fault list
For this process, 22 process measurement variables and 12 control variables (34 variables in total) were used as modeling variables to test the classification performance on the 9 types of fault condition data.
The MLP network consists of a first hidden layer, a BatchNormalization layer, a Dropout layer, a second hidden layer, a BatchNormalization layer, a Dropout layer, and a Softmax layer connected in sequence. The network has 34 input nodes, 200 and 100 nodes in the two hidden layers respectively, and 9 nodes in the final Softmax layer; the momentum of each BatchNormalization layer is set to 0.5, the Dropout rate is 0.5, an Adam optimizer with an initial learning rate of 0.001 is used, the batch size is 110, and the number of iterations is 30.
FIG. 2 compares the classification performance of the MLP network and the weakly supervised learning multilayer perceptron (WS-MLP) under the F1 index. The MLP hidden-layer nodes of the two networks are kept identical, and the label inaccuracy of the input samples is adjusted so that 0%, 10%, 20%, 30%, 40%, and 50% of the sample labels are set to be erroneous, in order to observe the change of the classification index F1. Except when the sample labels are fully accurate (i.e. 0% label error), the WS-MLP classifies better than the MLP network in every case, which verifies the performance improvement brought by estimating the label probability transition matrix with the Gaussian mixture model and using it to correct the MLP loss function. Meanwhile, the classification performance of the WS-MLP under accurate labels is similar to that of the MLP network, so the WS-MLP model is suitable not only for inaccurately labeled samples but also for fault classification of accurately labeled samples.
Claims (4)
1. A process data fault classification method based on a weakly supervised learning multilayer perceptron, characterized in that the weakly supervised learning multilayer perceptron comprises: a two-hidden-layer perceptron MLP, a Softmax output layer, and a Gaussian mixture model GMM; the process data fault classification method specifically comprises the following steps:
step one: collecting samples containing inaccurate labels in historical industrial processes as training data setsWherein (1)>For inaccurate tag data samples, +.>For the label of the sample,>n represents the number of samples of the training data set, and K represents the number of sample categories;
step two: normalizing the training data set D collected in the first step, namely mapping each variable of the labeled sample set X into a sample set X with the mean value of 0 and the variance of 1 std And tag sets are encoded by one-hotEach sample tag is converted into a one-dimensional vector to obtain a standardized data set +.>
Step three: taking the new data set D_std as input and performing the first supervised training of the MLP network, obtaining at the Softmax output layer the posterior probability that each sample of X_std belongs to its label ỹ;
step four: taking the posterior probability obtained in the third step as the input of the Gaussian mixture model GMM, training the Gaussian mixture model, and using the parameters of the Gaussian mixture model after trainingTo estimate the tag probability transition matrix T to obtain an estimated matrix +.>
Step five: correcting, according to T̂, the loss function with which the step-three MLP fits the inaccurately labeled samples, taking the data set D_std obtained in step two as input, and performing the second supervised training of the MLP network, thereby completing the weakly supervised learning and obtaining the trained WS-MLP network;
step six: collecting new industrial process data of unknown fault class, normalizing the process data according to the method of the step two to obtain a data set d std Inputting the sample into the WS-MLP network trained in the step five, solving the posterior probability of each fault category corresponding to the sample, and taking the category with the maximum posterior probability as the category of the sample to realize the fault classification of the sample.
2. The fault classification method according to claim 1, wherein the step three specifically includes the steps of:
(3.1) constructing the MLP network, which consists of a first hidden layer, a BatchNorm layer, a Dropout layer, a second hidden layer, a BatchNorm layer, a Dropout layer, and a Softmax layer connected in sequence; the weight matrices and bias vectors of the first and second hidden layers are W_1, b_1 and W_2, b_2 respectively, the weight matrix and bias vector from the second hidden layer to the Softmax layer are W_3, b_3, and these network parameters are denoted θ = {W_1, b_1, W_2, b_2, W_3, b_3};
(3.2) taking the standardized sample set D_std as input and training the MLP network under supervision with the cross-entropy loss function

L(θ) = −(1/N) Σ_{i=1}^{N} ỹ_i^T log h(x_i),

wherein h(x_i) is the Softmax representation of the last layer of the MLP network for sample x_i;
the loss function adjusts the parameters of the entire MLP network through the back-propagation (BP) algorithm, and after the loss converges over repeated iterations, the parameters of the whole network are obtained and the training is completed.
3. The fault classification method according to claim 1, wherein the fourth step specifically comprises the steps of:
(4.1) each class of the inaccurately labeled sample set consists of correctly labeled samples and mislabeled samples; the following assumptions are made: the generation of inaccurate labels is independent of the input, i.e. every sample of a given class has the same probability of being marked as any other given class; and the MLP network has perceptual consistency, i.e. within each class its feature representations of the correctly labeled samples and of the mislabeled samples each obey a Gaussian distribution;

from these assumptions one obtains, for the samples labeled as class i,

p(h(x) | ỹ = e_i, θ) = p(y = e_i | ỹ = e_i) N(h(x); μ_i, Σ_i) + p(y ≠ e_i | ỹ = e_i) N(h(x); μ_ī, Σ_ī),

wherein h(x) is the last-layer MLP representation of a sample x of the set D_std, y is the latent true label of the sample, p(·) denotes probability, e_i, i ∈ {1, 2, …, K}, denotes the vector in R^K whose i-th element is 1 and whose other elements are 0, θ denotes all weight-matrix and bias-vector parameters of the MLP network, μ and Σ denote respectively the mean vector and covariance matrix of an unknown Gaussian distribution, N(·; μ_i, Σ_i) and N(·; μ_ī, Σ_ī) denote the Gaussian densities of the correctly labeled class-i samples and of the mislabeled samples respectively, and T denotes the label probability transition matrix, defined by T_ik = p(ỹ = e_k | y = e_i);
(4.2) for each sample subset of class i, D_std^i, modeling the last-layer representations with a two-component Gaussian mixture model:

p(h(x_i)) = α_i N(h(x_i); μ_i, Σ_i) + α_ī N(h(x_i); μ_ī, Σ_ī),

wherein x_i denotes sample data belonging to the subset D_std^i, ī denotes the categories other than category i, and α_i, α_ī are the mixing coefficients of the correctly labeled and the mislabeled components respectively;
(4.3) establishing the two-component Gaussian mixture model and completing its parameter estimation with the expectation-maximization (EM) algorithm, i.e. solving for the parameters {α, μ, Σ} that maximize the likelihood of the observed representations;

in the expectation step (E step), calculating the Q function

Q(θ_GMM, θ_GMM^(t)) = E_z[log p(h(x), z | θ_GMM) | h(x), θ_GMM^(t)],

wherein t is the number of iterations;

calculating the responsibility γ_nm of mixture component m for the observed data h(x_i^(n)):

γ_nm = α_m N(h(x_i^(n)); μ_m, Σ_m) / Σ_{m'} α_{m'} N(h(x_i^(n)); μ_{m'}, Σ_{m'}),

wherein x_i^(n) denotes the n-th sample of class i;

in the maximization step (M step), estimating the Gaussian mean μ_m and the mixing coefficient α_m:

μ_m = Σ_n γ_nm h(x_i^(n)) / Σ_n γ_nm,   α_m = (1/S_i) Σ_n γ_nm,

wherein S_i denotes the number of samples of D_std^i;

alternately iterating the E step and the M step until the model parameters converge or a preset maximum number of iterations is reached, so as to obtain the estimated parameters {α̂, μ̂, Σ̂};
(4.4) solving, from the fitted mixture models, the mixing coefficients α_ik, and using them to obtain the estimate T̂ of the label probability transition matrix T:

T̂_ik = α_ik,

wherein T̂_ik denotes the element in the i-th row and k-th column of the estimation matrix T̂.
4. The fault classification method according to claim 1, wherein in step five the second training of the MLP network uses the corrected loss function:

L_c(θ) = −(1/N) Σ_{i=1}^{N} ỹ_i^T log(T̂^T h(x_i)).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911418196.5A CN111191726B (en) | 2019-12-31 | 2019-12-31 | A Fault Classification Method Based on Weakly Supervised Learning Multilayer Perceptron |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111191726A CN111191726A (en) | 2020-05-22 |
CN111191726B true CN111191726B (en) | 2023-07-21 |
Family
ID=70709761
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911418196.5A Active CN111191726B (en) | 2019-12-31 | 2019-12-31 | A Fault Classification Method Based on Weakly Supervised Learning Multilayer Perceptron |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111191726B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111814962B (en) * | 2020-07-09 | 2024-05-10 | 平安科技(深圳)有限公司 | Parameter acquisition method and device for identification model, electronic equipment and storage medium |
CN112989971B (en) * | 2021-03-01 | 2024-03-22 | 武汉中旗生物医疗电子有限公司 | Electrocardiogram data fusion method and device for different data sources |
CN113077441B (en) * | 2021-03-31 | 2024-09-27 | 上海联影智能医疗科技有限公司 | Coronary calcified plaque segmentation method and method for calculating coronary calcification score |
CN113919439B (en) * | 2021-10-22 | 2025-07-18 | 南京邮电大学 | Method, system, device and storage medium for improving quality of classified learning data set |
CN114925196B (en) * | 2022-03-01 | 2024-05-21 | 健康云(上海)数字科技有限公司 | Auxiliary eliminating method for abnormal blood test value of diabetes under multi-layer sensing network |
CN116090872A (en) * | 2022-12-07 | 2023-05-09 | 湖北华中电力科技开发有限责任公司 | Power distribution area health state evaluation method |
CN116503658A (en) * | 2023-05-04 | 2023-07-28 | 苏州泛函信息科技有限公司 | A domain-adapted diagnostic system for adversarial training against conditional and label drift |
CN117347788B (en) * | 2023-10-17 | 2024-06-11 | 国网四川省电力公司电力科学研究院 | A method for predicting the probability of single-phase ground fault types in distribution networks |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108875771A (en) * | 2018-03-30 | 2018-11-23 | Zhejiang University | A fault classification model and method based on a sparse Gaussian-Bernoulli restricted Boltzmann machine and a recurrent neural network |
WO2019048324A1 (en) * | 2017-09-07 | 2019-03-14 | Nokia Solutions And Networks Oy | Method and device for monitoring a telecommunication network |
CN110472665A (en) * | 2019-07-17 | 2019-11-19 | New H3C Big Data Technologies Co., Ltd. | Model training method, text classification method and related apparatus |
- 2019-12-31: Application CN201911418196.5A filed in China (CN); granted as patent CN111191726B (status: Active)
Non-Patent Citations (2)
Title |
---|
Vahid Golmah et al. Developing a Fault Diagnosis Approach Based on Artificial Neural Network and Self Organization Map for Occurred ADSL Faults. Journal of Advances in Computer Engineering and Technology. 2017, Vol. 3 (No. 3), pp. 125-134. *
Xiao Han. Research on Fault Identification Based on Gaussian Mixture Models and Subspace Techniques. Engineering Science and Technology II; Information Science and Technology. 2008, (No. 4), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111191726A (en) | 2020-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111191726B (en) | A Fault Classification Method Based on Weakly Supervised Learning Multilayer Perceptron | |
CN111079836B (en) | Process data fault classification method based on pseudo-label method and weakly supervised learning | |
CN111222290B (en) | Multi-parameter feature fusion-based method for predicting residual service life of large-scale equipment | |
CN108875772B (en) | A Fault Classification Model and Method Based on Stacked Sparse Gaussian Bernoulli Restricted Boltzmann Machine and Reinforcement Learning | |
CN109146246B (en) | Fault detection method based on automatic encoder and Bayesian network | |
CN108875771B (en) | Fault classification model and method based on sparse Gaussian Bernoulli limited Boltzmann machine and recurrent neural network | |
CN107144428A (en) | A method for predicting the remaining service life of rail transit vehicle bearings based on fault diagnosis |
CN107505837A (en) | A semi-supervised neural network model and a soft-sensor modeling method based on the model |
CN111046961B (en) | Fault classification method based on bidirectional long short-term memory unit and capsule network | |
CN104035431B (en) | Method and system for acquiring kernel function parameters for nonlinear process monitoring | |
CN103914064A (en) | Industrial process fault diagnosis method based on multiple classifiers and D-S evidence fusion | |
CN111580506A (en) | Industrial process fault diagnosis method based on information fusion | |
CN110824914B (en) | An intelligent monitoring method for wastewater treatment based on PCA-LSTM network | |
CN109189028A (en) | PCA method for diagnosing faults based on muti-piece information extraction | |
CN112098600A (en) | Fault detection and diagnosis method for chemical sensor array | |
CN114692507B (en) | Soft-sensing modeling method for count data based on stacked Poisson autoencoder network | |
CN112507479B (en) | Oil drilling machine health state assessment method based on manifold learning and softmax | |
CN109240276B (en) | Multi-block PCA fault monitoring method based on fault-sensitive pivot selection | |
CN113011102B (en) | Multi-time-sequence-based Attention-LSTM penicillin fermentation process fault prediction method | |
CN114266289A (en) | A method for evaluating the health status of complex equipment | |
CN109298633A (en) | Fault monitoring method in chemical production process based on adaptive block non-negative matrix decomposition | |
CN117171702A (en) | Multi-mode power grid fault detection method and system based on deep learning | |
CN113283288B (en) | Recognition method of eddy current signal type of nuclear power plant evaporator based on LSTM-CNN | |
CN110209150B (en) | Job shop scheduling scheme robustness measuring method based on multi-process fault influence | |
CN113850320A (en) | Transformer fault detection method based on improved support vector machine regression algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||