CN119027107B - Intelligent diagnosis and maintenance method, device and equipment for incinerator and storage medium - Google Patents
- Publication number
- CN119027107B (application CN202411527357.5A)
- Authority
- CN
- China
- Prior art keywords
- target
- feature
- state
- incinerator
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/20—Administration of product repair or maintenance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/10—Pre-processing; Data cleansing
- G06F18/15—Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/086—Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
- Incineration Of Waste (AREA)
Abstract
The invention relates to the technical field of intelligent diagnosis and discloses an intelligent diagnosis and maintenance method, device, equipment and storage medium for an incinerator. The method comprises: collecting raw data sets from a plurality of target parts of the incinerator and preprocessing them to obtain preprocessed data sets; performing multi-scale feature extraction to obtain initial multi-dimensional features; performing local maximum mean discrepancy calculation to obtain aligned multi-dimensional features; inputting the aligned multi-dimensional features into a dual Q network, which learns the optimal incineration parameter combination to obtain a target operating parameter combination; performing iterative prediction with a two-dimensional coupled complex chaotic map to obtain a prediction of the incinerator's dynamic behavior; and performing remaining-useful-life prediction and global dynamic adaptive maintenance analysis on the plurality of target parts to obtain a target maintenance scheme.
Description
Technical Field
The invention relates to the technical field of intelligent diagnosis, in particular to an intelligent diagnosis and maintenance method, device and equipment for an incinerator and a storage medium.
Background
Long-term operation of an incinerator faces problems such as equipment aging, performance degradation and fault risk, which not only reduce incineration efficiency but may also cause environmental pollution and safety hazards. Traditional incinerator maintenance relies mainly on periodic overhaul and empirical diagnosis; latent faults are hard to discover in time, and precise predictive maintenance cannot be achieved.
With the development of the Internet of Things and artificial intelligence, intelligent diagnosis and maintenance methods offer a new approach to incinerator operation management. However, existing intelligent diagnosis systems usually focus only on single parameters or local characteristics and struggle to capture the complex dynamic behavior of the incinerator comprehensively. Moreover, because incinerator operating conditions vary widely, data distributions under different conditions differ markedly, which challenges cross-condition fault diagnosis and life prediction. Maintenance decisions also involve multiple, often conflicting objectives, such as equipment reliability, maintenance cost and production efficiency. How to minimize maintenance cost and maintain high production efficiency while ensuring equipment reliability is the key problem facing intelligent incinerator maintenance.
Disclosure of Invention
The invention provides an intelligent diagnosis and maintenance method, device and equipment for an incinerator and a storage medium, to realize flexible scheduling of maintenance activities, minimize the impact of maintenance on production, and improve the overall operating efficiency of the incinerator.
In a first aspect, the present invention provides an intelligent diagnosis and maintenance method for an incinerator, the intelligent diagnosis and maintenance method for an incinerator comprising:
collecting raw data sets from a plurality of target parts of the incinerator through sensors, and preprocessing the raw data sets to obtain preprocessed data sets;
inputting the preprocessed data sets into a preset residual network for multi-scale feature extraction to obtain initial multi-dimensional features;
performing local maximum mean discrepancy calculation on the initial multi-dimensional features to align the feature distributions of the source domain and the target domain, obtaining aligned multi-dimensional features;
inputting the aligned multi-dimensional features into a dual Q network and learning the optimal incineration parameter combination to obtain a target operating parameter combination;
performing iterative prediction with a two-dimensional coupled complex chaotic map, based on the target operating parameter combination, to obtain a dynamic behavior prediction result for the incinerator;
and, according to the dynamic behavior prediction result, predicting the remaining useful life of the plurality of target parts and performing global dynamic adaptive maintenance analysis to obtain a target maintenance scheme.
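The patent does not disclose the internals of the dual Q network step above; as a hedged illustration, learning a parameter combination can be reduced to the classic tabular double Q-learning update, in which two value tables alternately select and evaluate actions to curb overestimation bias. The state/action encoding, learning rate and discount factor below are assumptions of this sketch, not values from the patent.

```python
import numpy as np

def double_q_update(q1, q2, s, a, r, s_next, alpha=0.1, gamma=0.95, rng=None):
    """One tabular double Q-learning step.

    q1, q2 : (n_states, n_actions) value tables; (s, a, r, s_next) : one
    observed transition. With probability 0.5, q1 is updated using q2's
    evaluation of q1's greedy action, and vice versa; decoupling selection
    from evaluation is what distinguishes double from plain Q-learning.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        a_star = int(np.argmax(q1[s_next]))               # select with q1
        q1[s, a] += alpha * (r + gamma * q2[s_next, a_star] - q1[s, a])
    else:
        b_star = int(np.argmax(q2[s_next]))               # select with q2
        q2[s, a] += alpha * (r + gamma * q1[s_next, b_star] - q2[s, a])
    return q1, q2
```

In the deep variant the patent implies, the two tables would be replaced by two neural networks fed with the aligned multi-dimensional features, and states/actions would encode furnace conditions and incineration parameter choices.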
In a second aspect, the present invention provides an incinerator intelligent diagnosis and maintenance apparatus, comprising:
an acquisition module, configured to collect raw data sets from a plurality of target parts of the incinerator through sensors and preprocess the raw data sets to obtain preprocessed data sets;
an extraction module, configured to input the preprocessed data sets into a preset residual network for multi-scale feature extraction to obtain initial multi-dimensional features;
a computation module, configured to perform local maximum mean discrepancy calculation on the initial multi-dimensional features, aligning the feature distributions of the source domain and the target domain to obtain aligned multi-dimensional features;
a learning module, configured to input the aligned multi-dimensional features into a dual Q network and learn the optimal incineration parameter combination to obtain a target operating parameter combination;
a prediction module, configured to perform iterative prediction with a two-dimensional coupled complex chaotic map, based on the target operating parameter combination, to obtain a dynamic behavior prediction result for the incinerator;
and an analysis module, configured to predict the remaining useful life of the plurality of target parts and perform global dynamic adaptive maintenance analysis according to the dynamic behavior prediction result to obtain a target maintenance scheme.
A third aspect of the present invention provides an intelligent diagnosis and maintenance equipment for an incinerator, comprising a memory and at least one processor, the memory storing instructions; the at least one processor calls the instructions in the memory so that the equipment executes the above-described intelligent diagnosis and maintenance method for an incinerator.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein that, when run on a computer, cause the computer to perform the above-described incinerator intelligent diagnosis and maintenance method.
According to the technical scheme provided by the invention, raw data from a plurality of target parts of the incinerator are collected through a multi-source heterogeneous sensor network and preprocessed, improving data comprehensiveness and quality. An improved residual network performs multi-scale feature extraction, effectively capturing feature information of the incinerator at different scales and improving the richness and accuracy of the feature representation. Local maximum mean discrepancy calculation aligns the feature distributions of the source and target domains, addressing the data-distribution shift in cross-condition diagnosis and improving the model's generalization. A dual Q network learns the optimal incineration parameter combination, realizing intelligent optimization of operating parameters and improving incineration efficiency and environmental performance. Iterative prediction with a two-dimensional coupled complex chaotic map accurately simulates the incinerator's complex dynamic behavior. Combined with a long short-term memory network and a multi-objective optimization algorithm, remaining-useful-life prediction of key parts and global dynamic adaptive maintenance analysis are realized; by coordinating the maintenance scheme with the production plan, maintenance activities are scheduled flexibly, the impact of maintenance on production is minimized, and the overall operating efficiency of the incinerator is improved.
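The exact form of the two-dimensional coupled complex chaotic map is not disclosed in this text; as an illustrative stand-in, a pair of logistic maps with linear diffusive coupling, a standard two-dimensional coupled chaotic system, shows how such an iteration generates a bounded, aperiodic trajectory. The parameters `r1`, `r2`, `eps` and the mod-1 wrap are assumptions of this sketch.

```python
import numpy as np

def coupled_logistic_map(x0, y0, r1=3.9, r2=3.8, eps=0.05, n_steps=100):
    """Iterate an assumed 2D coupled chaotic map (two logistic maps with
    diffusive coupling strength eps). The mod-1 wrap keeps the state in
    the unit square so the coupling term cannot push it out of range."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_steps):
        x_new = (r1 * x * (1.0 - x) + eps * (y - x)) % 1.0
        y_new = (r2 * y * (1.0 - y) + eps * (x - y)) % 1.0
        x, y = x_new, y_new
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```

In the patent's setting, the two state variables would be seeded from the target operating parameter combination, and the iterated trajectory would serve as the dynamic behavior prediction.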
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a schematic view showing an embodiment of an intelligent diagnosis and maintenance method for an incinerator according to an embodiment of the present invention;
FIG. 2 is a schematic view of an embodiment of an intelligent diagnosis and maintenance apparatus for an incinerator according to the embodiment of the present invention;
FIG. 3 is a schematic view of an embodiment of the intelligent diagnosis and maintenance equipment for an incinerator according to the embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "comprising" and "having", and any variations thereof, as used in the embodiments of the present invention, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the steps or elements listed, but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
For the convenience of understanding the present embodiment, first, an intelligent diagnosis and maintenance method for an incinerator disclosed in the present embodiment will be described in detail. As shown in fig. 1, the method comprises the following steps:
101. Collecting raw data sets from a plurality of target parts of the incinerator through sensors, and preprocessing the raw data sets to obtain preprocessed data sets;
It is to be understood that the execution subject of the present invention may be an intelligent diagnosis and maintenance device for an incinerator, or may be a terminal or a server, which is not limited herein. The embodiments of the invention are described taking a server as the execution subject as an example.
Specifically, various sensors, including temperature sensors, pressure sensors, gas composition analyzers, flow meters and vibration sensors, are installed at a plurality of target parts of the incinerator. These sensors form a multi-source heterogeneous sensor network that collects key data in real time during operation, including the temperature distribution inside the incinerator, flue gas pressure, flue gas composition, fuel supply, air supply and equipment vibration, yielding a comprehensive raw data set. The raw data set is then preprocessed. To remove noise, the raw data are denoised by wavelet transform, which effectively separates noise from useful information in the signal, producing denoised multi-dimensional time-series data. The denoised data are normalized: values of different dimensions and ranges are mapped onto a common scale, typically the interval [0, 1] or [-1, 1]. Normalization removes dimensional differences so that data from different sensors can be compared and analyzed on the same scale, improving the efficiency and effect of subsequent processing. Outlier detection is then performed on the normalized time-series data to identify abnormal data points caused by sensor faults, external interference or system anomalies; the outliers are marked using statistical methods, clustering algorithms or machine-learning models, yielding an outlier marking result.
According to the marking result, outliers are removed from the normalized multi-dimensional time-series data, preventing abnormal data from misleading subsequent analysis and yielding cleaned multi-dimensional time-series data. The cleaned data are then segmented by time window: the series is divided at a predetermined fixed length, for example in seconds, minutes or hours, so the whole time series becomes a set of fixed-length segments. These segments capture the operating states and trends of the incinerator over different periods, helping to reveal regular patterns and anomalies. The fixed-length segments are stored in a time-series database, forming the final preprocessed data set. The time-series database stores and manages temporal data, can efficiently handle large numbers of time-series fragments, and supports fast queries and complex analysis.
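A minimal sketch of this preprocessing pipeline, assuming a one-level Haar wavelet with soft thresholding in place of the unspecified wavelet transform, [0, 1] min-max scaling, z-score outlier rejection and fixed-length windowing (the threshold, z cutoff and window length below are illustrative, not from the patent):

```python
import numpy as np

def haar_denoise(signal, threshold=0.1):
    """One-level Haar wavelet soft-threshold denoising. A real pipeline
    would likely use PyWavelets with a deeper decomposition; this keeps
    the example dependency-free."""
    n = len(signal) - len(signal) % 2             # even length for pairing
    pairs = signal[:n].reshape(-1, 2)
    approx = pairs.mean(axis=1)                   # low-frequency content
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0    # high-frequency content
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty(n)
    out[0::2] = approx + detail                   # inverse Haar transform
    out[1::2] = approx - detail
    return out

def preprocess(signal, window=8, z_thresh=3.0):
    """Denoise -> min-max normalize to [0, 1] -> drop z-score outliers ->
    split into fixed-length time windows, mirroring step 101's pipeline."""
    x = haar_denoise(np.asarray(signal, dtype=float))
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # [0, 1] scaling
    z = np.abs(x - x.mean()) / (x.std() + 1e-12)
    x = x[z < z_thresh]                               # outlier rejection
    n_win = len(x) // window
    return x[:n_win * window].reshape(n_win, window)  # time-window segments
```

The resulting rows correspond to the fixed-length segments that would be written into the time-series database.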
102. Inputting the preprocessed data set into a preset residual network for multi-scale feature extraction to obtain initial multi-dimensional features;
Specifically, the preprocessed data set is fed into the first-layer multi-scale convolution structure of the preset residual network. This structure uses three convolution kernels of different sizes, 1x1, 3x3 and 5x5, which extract feature information over different receptive fields. Convolving the preprocessed data with these kernels yields three groups of feature maps at different scales, capturing both local detail and global information about the incinerator data. Each group of feature maps is batch-normalized, removing the internal covariate shift that arises during training, stabilizing training and accelerating convergence. A rectified linear unit (ReLU) activation function is then applied to the normalized feature maps; the activation introduces nonlinearity and improves the network's expressive power and generalization. After batch normalization and ReLU activation, the three groups of normalized, activated feature maps are fed into a feature fusion module. A self-attention mechanism determines weight coefficients by computing the importance of each feature map within the overall features; it captures correlations between scales, highlights key features and suppresses irrelevant or redundant information.
The three groups of normalized, activated feature maps are weighted and summed according to the computed coefficients, giving a multi-scale fused feature map. This fused map is passed to the deeper layers of the network via residual connections and added element-wise to the deep features. Residual connections effectively mitigate vanishing and exploding gradients in deep networks, making complex feature representations easier to learn and yielding a fused deep feature map. The fused deep feature map is then processed by stacked residual blocks. Each block consists of two 3x3 convolutional layers, two batch normalization layers, two ReLU activation layers and one shortcut connection; with this structure, each block performs residual learning repeatedly, continually refining the feature representation while preserving the original feature information. Stacking several residual blocks gradually extracts more abstract, deeper features, producing a residual-processed feature map. This map is fed into a global average pooling layer, which averages each channel's feature map over the spatial dimension and converts it into a channel-level vector of fixed dimension, reducing dimensionality while retaining global information, lowering the parameter count and the risk of overfitting.
The fixed-dimension channel-level vector is fed into a fully connected layer, where linear transformation and nonlinear activation extract deep features, yielding the initial multi-dimensional features.
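The attention-weighted fusion and global average pooling steps can be sketched as follows. The scoring vector `score_w` stands in for the learned attention projection, which the text does not specify, and the feature maps are assumed to be 1D (channels x length) for simplicity:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())                        # numerically stable
    return e / e.sum()

def fuse_multiscale(feature_maps, score_w):
    """Weight each scale's feature map by a softmax attention coefficient,
    sum them, then global-average-pool to a channel-level vector.

    feature_maps : list of (channels, length) arrays from the 1x1/3x3/5x5
    branches; score_w : (channels,) assumed scoring vector."""
    pooled = [fm.mean(axis=1) for fm in feature_maps]   # GAP per branch
    scores = np.array([p @ score_w for p in pooled])    # importance scores
    weights = softmax(scores)                           # attention weights
    fused = sum(w * fm for w, fm in zip(weights, feature_maps))
    return fused.mean(axis=1), weights                  # channel vector
```

In the full network, the fused map would pass through the residual blocks before this final pooling; the sketch compresses those stages to show only the attention-weighted combination.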
103. Performing local maximum mean discrepancy calculation on the initial multi-dimensional features to align the feature distributions of the source domain and the target domain, obtaining aligned multi-dimensional features;
Specifically, the initial multi-dimensional features are divided into source domain features and target domain features according to the source of the data. The source domain features are derived from historical data or annotation data during model training, while the target domain features are derived from real-time operational data of the incinerator. And (3) carrying out sample division between the two types of features, namely respectively dividing the source domain and the target domain features into sample sets in different types and sample sets between the types containing relations between the different types according to the types to which the data belong. And respectively calculating feature vectors of sample sets in the source domain and the target domain to obtain an average feature vector in the source domain and an average feature vector in the target domain, and representatively capturing feature distribution conditions of the source domain and the target domain in each class. And similarly, calculating the inter-class average feature vector for the inter-class sample sets of the source domain and the target domain respectively to obtain the inter-class average feature vector of the source domain and the inter-class average feature vector of the target domain. The intra-class average feature vector reflects the feature concentration trend inside the same class, while the inter-class average feature vector reveals the distribution difference between different classes. And calculating Euclidean distance between the average feature vector in the source domain class and the average feature vector in the target domain class to obtain inter-domain distance in the class, measuring the distribution difference of the source domain and the target domain in the same class, and reflecting the consistency of the source domain and the target domain on the similar sample characteristics. 
Similarly, the Euclidean distance between the source-domain and target-domain inter-class average feature vectors is computed to obtain the inter-class inter-domain distance, which measures the distribution difference of the two domains across different classes and reflects the consistency of inter-class relationships between domains. By measuring these two distances, the feature distribution difference between the source and target domains is captured at a fine-grained level. The intra-class inter-domain distance and the inter-class inter-domain distance are added to obtain the local maximum average difference value, a measure of the overall intra-class and inter-class feature distribution difference between the source and target domains. To effectively minimize this difference, a loss function is constructed based on the local maximum average difference value. The goal of the loss function is to achieve feature alignment by optimizing network parameters so that the intra-class and inter-class distribution differences between the source and target domains are minimized. Gradient descent optimization is performed on the initial multidimensional features based on the constructed loss function. The network parameters are continuously adjusted by back-propagation and gradient descent so that the loss value decreases gradually and the feature distribution gap between the source and target domains narrows. This optimization improves the model's adaptability to the target domain data while retaining the effective information contained in the source domain data. A nonlinear transformation is then applied to the optimized multidimensional features.
The nonlinear transformation maps the features into a new feature space through nonlinear activation functions such as ReLU, Sigmoid or Tanh, capturing more complex relationships and patterns in the original features to obtain the aligned multidimensional features.
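As a concrete illustration of the distance computations above, the following pure-Python sketch computes per-class mean vectors for a source and a target domain, sums the intra-class inter-domain distances, adds a simplified inter-class term (here the distance between the means of the class means), and returns the local maximum average difference value. The sample data, class labels and helper names are illustrative assumptions, not taken from the patent:

```python
# Sketch of the local maximum average difference described above:
# class-wise mean feature vectors for source and target domains,
# Euclidean distances between corresponding means, and their sum.
import math

def mean_vector(samples):
    """Average feature vector of a list of equally sized vectors."""
    n = len(samples)
    return [sum(v[d] for v in samples) / n for d in range(len(samples[0]))]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def local_mmd(src_by_class, tgt_by_class):
    """src_by_class / tgt_by_class: dict mapping class label -> list of vectors.
    Intra-class term: distance between per-class means of the two domains.
    Inter-class term (simplified here): distance between the means of the
    class-mean vectors, standing in for the inter-class sample sets."""
    intra = sum(
        euclidean(mean_vector(src_by_class[c]), mean_vector(tgt_by_class[c]))
        for c in src_by_class
    )
    src_center = mean_vector([mean_vector(v) for v in src_by_class.values()])
    tgt_center = mean_vector([mean_vector(v) for v in tgt_by_class.values()])
    inter = euclidean(src_center, tgt_center)
    return intra + inter  # local maximum average difference value

src = {"normal": [[0.0, 0.0], [0.2, 0.0]], "fault": [[1.0, 1.0], [1.2, 1.0]]}
tgt = {"normal": [[0.1, 0.1], [0.3, 0.1]], "fault": [[1.1, 1.1], [1.3, 1.1]]}
print(round(local_mmd(src, tgt), 4))
```

In a training loop this value would be the loss term minimized by gradient descent; here it only illustrates the distance bookkeeping.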
104. Inputting the aligned multidimensional features into a dual Q network, and learning an optimal incineration parameter combination to obtain a target operation parameter combination;
Specifically, the aligned multidimensional features are input as states into a dual Q network. The dual Q network includes two identically structured neural networks, an online network and a target network. The aligned multidimensional features are input into the input layers of the online network and the target network respectively, where a fully connected layer applies a linear transformation that maps the input features into a higher-dimensional feature space, so that subsequent network layers can better learn the complex relations in the data. The linear transformation is followed by batch normalization, which eliminates scale differences between input dimensions and improves training stability and convergence speed. A rectified linear unit (ReLU) activation function then applies a nonlinear transformation to the normalized data, giving the initial state representation. The initial state representation is input into the multi-layer perceptron hidden layers of the online network for nonlinear transformation. These hidden layers comprise several fully connected layers, each followed by a batch normalization layer and a ReLU activation function layer. Through the stacking of these layers and the effect of the activation functions, potentially complex nonlinear relations in the input features are effectively captured, yielding a richer intermediate feature representation. The intermediate feature representation of the online network is input into a state value function estimator and an advantage function estimator respectively.
The state value function estimator estimates the overall value of the current state; it consists of one fully connected layer and outputs a scalar state value estimate. The advantage function estimator estimates the advantage of taking different actions in the current state; it consists of another fully connected layer and outputs an advantage function estimate in vector form. The state value estimate and the advantage function estimate are added, and the mean of the advantage function estimate is subtracted from the result to remove unnecessary bias, yielding the Q value estimate of the online network. The Q value estimate represents the expected total return of each possible action taken in the current state. Based on the Q value estimate, the action with the largest Q value is selected as the candidate action. To improve the exploratory behavior of the policy, Gaussian noise with zero mean and a variance that decays exponentially over the course of training is added to the candidate action, giving the action actually executed. This strategy increases exploration in the early stage of training and prevents the model from falling into a local optimum. The executed action is applied to a simulation environment of the incinerator, the operation parameters of the incinerator, such as furnace temperature, fuel supply rate and air supply amount, are updated, and one time step is simulated to obtain the next state and an instant reward. The instant reward measures the influence of the action taken in the current state on the operating performance of the incinerator; the higher the reward value, the better the current action.
The new state is input into the target network, and the Q value estimate of the target network is calculated by neural network layers with the same structure as the online network. Based on the instant reward and the Q value estimate of the target network, a target Q value is calculated using the Bellman equation; the target Q value represents the expected cumulative return over all future time steps after taking a certain action in the current state. The target Q value is compared with the Q value estimate of the online network, and the mean square error between them is calculated to obtain the loss function value. The parameters of the online network are optimized by gradient descent to minimize the loss function value. To avoid the impact of drastic parameter updates on model stability, the learning rate decays exponentially over the course of training. Every fixed number of steps, the parameters of the online network are soft-updated into the target network; this soft update strategy gradually transfers the latest learned information to the target network while maintaining its stability. Through this series of training and update processes, the optimal incineration parameter combination, namely the target operation parameter combination, is finally obtained.
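The Q-value aggregation and soft-update steps described above can be sketched as follows; the state value, advantage values and the mixing rate tau are illustrative assumptions, and real values would come from the fully connected estimator heads of the online network:

```python
# Sketch of the dueling Q-value aggregation and the soft target-network
# update described above, using plain Python lists as stand-ins for
# network outputs and parameter vectors.

def dueling_q(state_value, advantages):
    """Q(s, a) = V(s) + A(s, a) - mean_a A(s, a): adding the state value
    to each advantage and subtracting the advantage mean removes the
    unnecessary bias mentioned in the text."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

def soft_update(online, target, tau=0.01):
    """Blend online parameters into the target network; tau is an
    assumed small mixing rate that keeps the target network stable."""
    return [tau * o + (1.0 - tau) * t for o, t in zip(online, target)]

q = dueling_q(state_value=2.0, advantages=[1.0, 3.0, -1.0])
best_action = max(range(len(q)), key=lambda a: q[a])
print(q, best_action)
```

The candidate action `best_action` would then be perturbed with decaying Gaussian noise before being applied to the simulation environment.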
105. Based on the target operation parameter combination, performing iterative prediction by adopting two-dimensional coupling complex chaotic mapping to obtain a dynamic behavior prediction result of the incinerator;
Specifically, the furnace temperature distribution and velocity field parameters in the target operation parameter combination are mapped onto the complex plane, converting these physical parameters into complex form for the subsequent chaotic mapping processing. The furnace temperature distribution and the velocity field parameters represent the thermodynamic and hydrodynamic states inside the incinerator respectively; they are converted by a mapping function into an initial state complex vector z whose real and imaginary parts represent the different physical quantities. Working with complex vectors allows more complex nonlinear transformations and mapping operations on the complex plane. The two-dimensional coupled complex chaotic mapping function is applied iteratively to the initial state complex vector z. The chaotic mapping function has the form f(z) = a·z + b·conj(z) + c·z² + d, where a, b, c and d are predetermined complex parameters and conj(z) denotes the complex conjugate of z. The setting of these complex parameters determines the evolution behavior of the chaotic map and can generate diverse and complex dynamic characteristics. Substituting the initial state complex vector z into the chaotic mapping function yields the state complex vector at the next moment, capturing the nonlinear evolution of the incinerator's internal state over time. The Euclidean distance between the next-moment state complex vector and the initial state complex vector is calculated to measure the magnitude of the state change in the current iteration step, giving the state change quantity.
The state change quantity is compared with a preset threshold: if it is smaller than the threshold, the system has stabilized and the convergence condition is met; if it is larger than the threshold, the system is still changing significantly and the iterative calculation continues. If the convergence condition is not satisfied, the state complex vector at the next moment is taken as the new initial state and the iterative computation of the chaotic mapping function is repeated. The iteration continues until the convergence condition is satisfied or the maximum number of iterations N is reached, giving the target state complex vector of the system. The target state complex vector is mapped back from the complex plane to physical space, converting the complex target state back into physical quantities, i.e. restoring the furnace temperature distribution and velocity field of the incinerator, which intuitively reflect the temperature trend and airflow characteristics of the incinerator in the coming time period. To assess the stability and safety of the system, the corresponding Lyapunov exponent is calculated from the predicted furnace temperature distribution and velocity field. The Lyapunov exponent is an index measuring the stability of a dynamic system and describes its sensitivity to initial conditions. If the Lyapunov exponent is positive, the system is highly sensitive to disturbances of the initial state, the predicted trajectories diverge exponentially and the system is in a chaotic state; if the Lyapunov exponent is negative, the trajectories of the system converge and exhibit stable behavior.
Based on the predicted furnace temperature distribution, velocity field, Lyapunov exponent and other information, the dynamic behavior prediction result of the incinerator is generated, including the temperature fluctuation trend, a combustion stability assessment and potential risk early warning. The temperature fluctuation trend reflects the amplitude and period of temperature variation in the furnace and helps identify possible over- or under-temperature conditions; the combustion stability assessment, based on the joint analysis of the velocity field and the temperature distribution, judges whether the combustion process is uniform and stable; and the potential risk early warning gives advance warning of possible future abnormal conditions according to the Lyapunov exponent and the characteristics of unstable states in the historical data.
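The iterative chaotic-map prediction can be sketched with Python's built-in complex type; the parameters a, b, c, d, the convergence threshold and the iteration cap are illustrative assumptions chosen so that the example converges:

```python
# Sketch of the two-dimensional coupled complex chaotic map
# f(z) = a*z + b*conj(z) + c*z**2 + d, iterated until the state
# change drops below a threshold or the iteration cap is reached.

def chaotic_map(z, a, b, c, d):
    return a * z + b * z.conjugate() + c * z * z + d

def iterate_state(z0, a, b, c, d, eps=1e-6, max_iter=200):
    """Real part stands for the furnace-temperature term, imaginary
    part for the velocity-field term, as in the mapping described above."""
    z = z0
    for step in range(max_iter):
        z_next = chaotic_map(z, a, b, c, d)
        if abs(z_next - z) < eps:      # state change below threshold: converged
            return z_next, step + 1
        z = z_next
    return z, max_iter

# Contractive illustrative parameters so the iteration settles to a fixed point.
z_final, steps = iterate_state(z0=0.8 + 0.6j, a=0.3, b=0.1, c=0.0, d=0.05)
print(z_final, steps)
```

With |a| + |b| small and c = 0 the map is contractive, so this illustrative setting converges; chaotic parameter choices would instead be diagnosed through the Lyapunov exponent discussed above.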
106. And according to the dynamic behavior prediction result, predicting the residual service life of a plurality of target parts and carrying out global dynamic self-adaptive maintenance analysis to obtain a target maintenance scheme.
Specifically, key characteristic information of each target part is extracted from the dynamic behavior prediction result, including the temperature fluctuation trend, combustion stability assessment and potential risk early warning. The temperature fluctuation trend reflects the variation pattern and fluctuation amplitude of the temperature in the incinerator, the combustion stability assessment judges the stability and consistency of the incineration process, and the potential risk early warning uses a prediction model to warn of possible future faults or abnormal conditions. This characteristic information is integrated to construct a feature matrix describing the current state of the target part, giving the state feature matrix of the target part. The state feature matrix of the target part is fused with the historical operation data of the incinerator. The historical operating data include the operating status, operating parameters and maintenance records of the equipment over a period of time. Time-series samples are constructed with a sliding window method, arranging the historical data and the currently predicted features into consecutive time segments in chronological order. The sliding window method preserves the change trend of the target part over different time periods, giving the fused time-series feature sequence. The fused time-series feature sequence is input into a long short-term memory (LSTM) network for processing. The LSTM is a deep learning model for time-series data that can effectively capture long-term dependencies and complex nonlinear features in a time series. By learning and predicting the time-series features of each target part, the LSTM outputs a remaining life prediction value for each target part.
The remaining life prediction value indicates how long the equipment can continue to operate normally in its current state. A multi-objective optimization problem is constructed based on the remaining service life prediction value of each target part. The objective function of the optimization problem covers three factors: equipment reliability, maintenance cost and production efficiency. Equipment reliability refers to the probability that the equipment does not fail within the predicted remaining service life, maintenance cost includes spare parts, replacement cost and labor cost, and production efficiency refers to the degree to which a maintenance plan affects normal production operation. An initial optimization model is constructed by comprehensively considering these objectives, and it is solved by a genetic algorithm. The genetic algorithm is a search algorithm based on natural selection and genetics and can effectively solve complex optimization problems. Through iterative evolution of the population it continuously generates new solution sets, finally obtaining the Pareto optimal solution set. Each solution in the Pareto optimal solution set has reached an equilibrium over all objectives, i.e. it cannot be improved on one objective without losing performance on another. A compromise solution is selected from the Pareto optimal solution set as the preliminary maintenance scheme. The selection of the compromise considers the overall balance of equipment reliability, maintenance cost and production efficiency, ensuring that the maintenance scheme effectively reduces the risk of failure without excessively increasing maintenance cost or causing unnecessary interruption to production. The preliminary maintenance scheme is then coordinated and optimized with the production plan of the incinerator.
The production plan includes the production task arrangement, production cycle and capacity requirements of the equipment. The maintenance time window of the preliminary maintenance scheme is adjusted to ensure that maintenance work can be performed without affecting normal production. The coordination and optimization process comprehensively considers factors such as equipment downtime, spare part availability and the scheduling of operators, and finally determines the optimal maintenance time window. The resulting target maintenance scheme includes a specific maintenance schedule and specifies the required spare parts and detailed operating steps.
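The Pareto-optimal selection described above can be sketched as a simple dominance filter over candidate maintenance plans; the candidate plans and their (reliability, cost, production impact) values are illustrative assumptions, and a genetic algorithm would generate such candidates by evolution rather than enumerating them by hand:

```python
# Sketch of extracting the Pareto optimal solution set described above.
# Each candidate plan is (reliability, cost, production_impact);
# reliability is maximized, the other two objectives are minimized.

def dominates(p, q):
    """p dominates q if it is no worse on every objective and strictly
    better on at least one (higher reliability, lower cost and impact)."""
    no_worse = p[0] >= q[0] and p[1] <= q[1] and p[2] <= q[2]
    better = p[0] > q[0] or p[1] < q[1] or p[2] < q[2]
    return no_worse and better

def pareto_front(candidates):
    """Keep the plans not dominated by any other candidate."""
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates if q != p)]

plans = [
    (0.95, 120.0, 3.0),   # reliable but expensive
    (0.90, 80.0, 2.0),    # balanced
    (0.85, 90.0, 4.0),    # dominated by the balanced plan
    (0.80, 50.0, 1.0),    # cheap but less reliable
]
front = pareto_front(plans)
print(front)
```

A compromise solution would then be picked from `front`, for example by weighting the three objectives according to operational priorities.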
In the embodiment of the invention, the raw data of a plurality of target parts of the incinerator are acquired through the multi-source heterogeneous sensor network and preprocessed, improving the comprehensiveness and quality of the data. The improved residual network is adopted for multi-scale feature extraction, effectively capturing feature information of the incinerator at different scales and improving the richness and accuracy of the feature representation. The introduction of the local maximum average difference calculation achieves feature distribution alignment between the source and target domains, solves the problem of data distribution shift in cross-working-condition diagnosis, and improves the generalization capability of the model. Learning the optimal incineration parameter combination with the dual Q network realizes intelligent optimization of the incinerator's operation parameters, improving incineration efficiency and environmental performance. Iterative prediction with the two-dimensional coupled complex chaotic map accurately simulates the complex dynamic behavior of the incinerator. Combining the long short-term memory network with a multi-objective optimization algorithm realizes remaining service life prediction for key parts of the incinerator and global dynamic adaptive maintenance analysis, and coordinating the maintenance scheme with the production plan enables flexible scheduling of maintenance activities, minimizing the impact of maintenance on production and improving the overall operating efficiency of the incinerator.
In a specific embodiment, the process of executing step 101 may specifically include the following steps:
Installing a temperature sensor, a pressure sensor, a gas component analyzer, a flowmeter and a vibration sensor on a plurality of target positions to obtain a multi-source heterogeneous sensor network;
Acquiring temperature distribution, smoke pressure, smoke components, fuel supply quantity, air supply quantity and equipment vibration data in the operation process of the incinerator based on a multi-source heterogeneous sensor network to obtain an original data set;
Carrying out wavelet transformation denoising treatment on the original data set to obtain denoised multidimensional time sequence data, and carrying out normalization treatment on the denoised multidimensional time sequence data to obtain normalized multidimensional time sequence data;
Performing outlier detection on the normalized multi-dimensional time sequence data to obtain outlier marking results, and performing outlier rejection on the normalized multi-dimensional time sequence data according to the outlier marking results to obtain cleaned multi-dimensional time sequence data;
And performing time window segmentation on the cleaned multidimensional time sequence data to obtain data fragments with fixed lengths, and storing the data fragments with the fixed lengths into a time sequence database to obtain a preprocessing data set.
Specifically, the sensor layout for the different key positions of the incinerator comprises temperature sensors, pressure sensors, gas component analyzers, flowmeters and vibration sensors. The temperature sensor mainly collects the temperature distribution of the interior of the hearth and related parts, reflecting temperature changes during incineration. The pressure sensor monitors parameters such as flue gas pressure and fuel conveying pressure, which directly reflect pressure fluctuations in the incineration process. The gas component analyzer detects the component concentrations in the flue gas, such as oxygen (O2), carbon dioxide (CO2), carbon monoxide (CO) and nitrogen oxides (NOx), which reflect the combustion efficiency and pollutant emission level of the incinerator. The flowmeter monitors the fuel and air supply, including the fuel delivery flow and air supply flow, important factors affecting the combustion process of the incinerator. The vibration sensor monitors the vibration of the incinerator, effectively capturing the working state and potential abnormal conditions of the mechanical structure. Based on the multi-source heterogeneous sensor network, the furnace temperature distribution, flue gas pressure, flue gas composition, fuel supply quantity, air supply quantity and equipment vibration data during operation of the incinerator are acquired. These data are aggregated by the data acquisition system into a complete original data set. The original data set is a multi-type, multi-dimensional time-series data matrix, recorded as X = {x(t, i)}, where x(t, i) denotes the value acquired by the i-th sensor at time t. These data contain the detailed operating conditions of the incinerator at each point in time. Wavelet transformation denoising is then applied to the original data set.
The wavelet transformation is a time-frequency analysis tool that can effectively separate noise from the useful signal in the data. The original data X are wavelet-transformed to obtain a wavelet coefficient matrix W = {w(j, k)}, where w(j, k) denotes the coefficient at wavelet scale j and position k. Soft thresholding is applied to the high-frequency wavelet coefficients to eliminate the influence of noise, and finally an inverse wavelet transformation reconstructs the denoised data X' = {x'(t, i)}, where x'(t, i) is the denoised time-series value. The denoised multidimensional time-series data reflect the true state of the incinerator more accurately. The denoised multidimensional time-series data are normalized to eliminate dimensional differences between the different sensor data, so that the data of every dimension lie on the same scale, giving the normalized multidimensional time-series data. Outlier detection is then performed on the normalized multidimensional time-series data. An outlier is an observed value that deviates significantly from the normal state, caused by a sensor malfunction, a change in the external environment or a system abnormality. Outlier detection uses statistical methods or machine learning algorithms, such as the 3-sigma rule based on mean and standard deviation:
|x(t, i) − μ(i)| > 3σ(i);
where μ(i) and σ(i) respectively denote the mean and standard deviation of the i-th sensor's data. If the deviation of a data point x(t, i) from the mean μ(i) exceeds three times the standard deviation σ(i), the point is determined to be an outlier and marked 1, otherwise 0. Outlier detection yields the outlier marking result matrix M. Outliers are then rejected from the normalized multidimensional time-series data according to the marking result: data points marked as abnormal are filled by interpolation from the preceding and following time points, or the abnormal points are deleted directly. After outlier removal, the cleaned multidimensional time-series data X'' = {x''(t, i)} are obtained, where x''(t, i) is the cleaned value. The cleaned multidimensional time-series data are segmented by time window, cutting the time series into segments of fixed duration. With the time window length set to L, each time segment is expressed as:
S(k) = [x''(t_k), x''(t_k + 1), ..., x''(t_k + L − 1)];
where S(k) denotes the data segment of length L starting at time t_k. Dividing the whole time series into multiple fixed-length data segments yields a set of data segments; these fixed-length segments are stored in a time-series database, which can effectively manage large volumes of time-series data and provides efficient data access and processing capability for subsequent feature extraction, pattern recognition and model training. All the preprocessed data segments constitute a preprocessing data set containing the status information of the incinerator in different time windows.
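The 3-sigma marking, interpolation-based rejection and fixed-length windowing described above can be sketched as follows; the sample series (a temperature channel with one injected fault reading) and the window length are illustrative assumptions:

```python
# Sketch of 3-sigma outlier marking, interpolation-based rejection,
# and fixed-length time-window segmentation as described above.
import math

def three_sigma_marks(series):
    """Mark a point 1 if it deviates from the mean by more than 3 sigma."""
    n = len(series)
    mu = sum(series) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in series) / n)
    return [1 if abs(x - mu) > 3 * sigma else 0 for x in series]

def interpolate_outliers(series, marks):
    """Replace marked points with the mean of the neighbouring points."""
    cleaned = list(series)
    for i, m in enumerate(marks):
        if m and 0 < i < len(series) - 1:
            cleaned[i] = (series[i - 1] + series[i + 1]) / 2
    return cleaned

def segment(series, window):
    """Cut the series into consecutive fixed-length windows."""
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, window)]

raw = [20.0, 20.5, 21.0, 20.8, 21.2, 500.0,
       21.0, 20.7, 21.3, 20.9, 21.1, 21.0]   # one sensor-fault spike
marks = three_sigma_marks(raw)
cleaned = interpolate_outliers(raw, marks)
windows = segment(cleaned, window=4)
print(marks, len(windows))
```

Note that with very short series a single spike may not exceed the 3-sigma band; the series above is long enough for the spike to be flagged.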
In a specific embodiment, the process of executing step 102 may specifically include the following steps:
Inputting the preprocessed data set into a first-layer multi-scale convolution structure of a preset residual network, wherein the multi-scale convolution structure comprises convolution kernels of three scales, 1x1, 3x3 and 5x5, and performing convolution operations on the preprocessed data set to obtain three groups of feature maps of different scales;
Respectively performing batch normalization and rectified linear unit (ReLU) activation on the three groups of feature maps of different scales to obtain three groups of normalized and activated feature maps;
Inputting the three groups of normalized and activated feature maps into a feature fusion module, calculating the weight coefficient of each normalized and activated feature map with a self-attention mechanism, and performing weighted summation according to the weight coefficients to obtain a multi-scale fused feature map;
Transmitting the multi-scale fused feature map to the deep layers of the network through a residual connection, and performing element-level addition with the deep features to obtain a fused deep feature map;
Processing the fused deep feature map with stacked residual blocks, wherein each residual block comprises two 3x3 convolutional layers, two batch normalization layers, two ReLU activation function layers and one shortcut connection, and performing residual learning several times to obtain a residual-processed feature map;
And inputting the residual-processed feature map into a global average pooling layer, averaging the feature map of each channel over the spatial dimensions to obtain a channel-level vector representation of fixed dimension, and inputting this fixed-dimension channel-level vector representation into a fully connected layer for linear transformation and nonlinear activation to obtain the initial multidimensional feature.
Specifically, the pre-processed data set is a high-dimensional time-series data matrix, which is recorded asWhereinThe number of time steps is indicated and,Representing a dimension of the feature, such as data collected by a plurality of sensors. The preprocessed data set is input into a first layer multi-scale convolution structure of a residual network, and multi-level features in the data are captured on different receptive fields. The first layer contains three convolution kernels of different sizes, one for each、And. These convolution kernels are able to extract feature information of different granularity from different scales. For a given input data matrixDuring the convolution operation, each convolution kernel traverses the entire input data and calculates a convolution sum at each location. Assume that the convolution kernel has a size ofFirst, theThe convolution kernel parameters of the individual channels are expressed asThe convolution operation is expressed as:
;
Wherein, Represent the passing of the firstOf individual channelsConvolution kernel locationIs used for outputting a result of the convolution of (1),Representing input data at offset positionIs used as a reference to the value of (a),Radius rounding, which represents the size of the convolution kernel. The operation matches the convolution kernel with the local area of the input data in a weighted summation mode to obtain a feature map. For the 1x1 convolution kernel, the receptive field is extremely small, linear combination can be carried out only at local positions, and the method is mainly used for dimension reduction or dimension increase operation, and the 3x3 convolution kernel and the 5x5 convolution kernel capture larger local spatial features, so that richer feature representations are extracted. After convolution operation of three convolution kernels with different sizes, three sets of feature graphs with different scales are obtained and respectively recorded asAnd. Each set of feature maps represents a characteristic representation of the input data at a different scale. And respectively carrying out batch normalization processing on the three groups of characteristic graphs with different scales. Batch normalization is a regularization technique that eliminates scale differences between different feature channels by normalizing the data in each small batch, expressed as:
;
Wherein, Represent the firstA characteristic diagram of the normalized individual channels,AndThe mean and variance of the small batch data,Is a small constant for preventing the denominator from being zero. The normalized feature map is subjected to a modified linear unit activation function (ReLU), namely:
where F'_c denotes the feature map of the c-th channel after ReLU activation. The ReLU activation function introduces nonlinearity and strengthens the expressive capability of the network. The three groups of feature maps at different scales are each batch-normalized and ReLU-activated, yielding three groups of normalized, activated feature maps F'_1, F'_2 and F'_3. These three groups are input into a feature fusion module, which integrates the features of different scales so that the subsequent network can comprehensively exploit multi-scale information. In the feature fusion module, a self-attention mechanism computes a weight coefficient for each normalized, activated feature map; the mechanism determines the importance of features by computing a similarity matrix between them. For each group of feature maps, the weight coefficient is calculated as:
w_i = exp(W_a · F'_i) / Σ_{j=1}^{3} exp(W_a · F'_j) ;
where w_i is the weight coefficient of the i-th feature map and W_a is a weight parameter of the self-attention mechanism, obtained through training. The three normalized, activated feature maps are summed with these weight coefficients to obtain the multi-scale fused feature map:
F_fused = Σ_{i=1}^{3} w_i · F'_i ;
F_fused is a comprehensive feature map that fuses information of different scales and carries richer hierarchical feature information. It is propagated to the deep layers of the network through a residual connection and added element-wise to the deep features. Denoting the deep feature map by F_deep, the fusion of the two is expressed as:
F_res = F_fused + F_deep ;
where F_res is the residual feature map fusing the multi-scale feature map and the deep feature map. The residual connection effectively alleviates the vanishing- and exploding-gradient problems in deep networks, so the network can learn complex feature representations more easily. The fused deep feature map is then processed by a stack of residual blocks. Each residual block contains two 3×3 convolutional layers, two batch normalization layers, two ReLU activation layers and one shortcut connection. Stacking multiple residual blocks progressively refines the feature representation while preserving the flow of the original feature information. The structure of a residual block is expressed as:
X_k = F(X_{k−1}; θ_k) + X_{k−1} ;
where X_k is the output feature map of the k-th residual block, F(X_{k−1}; θ_k) is the combined result of the convolution, normalization and activation operations of the k-th residual block, and θ_k are the parameters of the k-th residual block. The residual-processed feature map is input to the global average pooling layer, which averages the feature map of each channel over the spatial dimensions and compresses the spatial information of each feature map into a scalar. Denoting the feature map of channel c by F_c, the global average pooling operation is expressed as:
z_c = (1 / (H × W)) Σ_{h=1}^{H} Σ_{w=1}^{W} F_c(h, w) ;
where z_c is the globally average-pooled scalar value of the c-th channel, and H and W are the height and width of the feature map, respectively. Global average pooling yields a channel-level vector representation of fixed dimension, which is input into a fully connected layer and processed by a linear transformation and a nonlinear activation function to obtain the final initial multi-dimensional feature.
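The attention-weighted fusion and global average pooling described above can be sketched in numpy. This is an illustrative simplification, not the patented implementation: the learned parameter W_a is reduced to one scalar per scale applied to the mean activation, and the three scales are assumed to have already been resized to a common (C, H, W) resolution.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_and_pool(feature_maps, w_attn):
    """Self-attention-weighted multi-scale fusion followed by global
    average pooling (illustrative sketch).

    feature_maps: list of 3 arrays, each (C, H, W), assumed already
                  batch-normalized and ReLU-activated.
    w_attn:       per-scale attention parameters (stand-in for W_a).
    """
    # Scalar attention score per scale; softmax gives weights w_i summing to 1.
    scores = np.array([w * f.mean() for w, f in zip(w_attn, feature_maps)])
    weights = softmax(scores)
    # Weighted sum -> multi-scale fused feature map F_fused.
    fused = sum(w * f for w, f in zip(weights, feature_maps))
    # (A residual addition with a deep feature map of the same shape would
    # follow here; omitted.)
    # Global average pooling: compress each channel's spatial map to a scalar.
    pooled = fused.mean(axis=(1, 2))
    return weights, fused, pooled
```

The pooled vector is what the fully connected layer would consume to produce the initial multi-dimensional feature.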
In a specific embodiment, the process of executing step 103 may specifically include the following steps:
Dividing the initial multidimensional feature into a source domain feature and a target domain feature, and respectively dividing the source domain feature and the target domain feature into intra-class samples and inter-class samples to obtain intra-class sample sets and inter-class sample sets of the source domain and the target domain;
Respectively calculating intra-class average feature vectors for intra-class sample sets of a source domain and a target domain to obtain intra-class average feature vectors of the source domain and intra-class average feature vectors of the target domain, and respectively calculating inter-class average feature vectors for inter-class sample sets of the source domain and the target domain to obtain inter-class average feature vectors of the source domain and inter-class average feature vectors of the target domain;
Calculating the Euclidean distance between the source-domain intra-class average feature vector and the target-domain intra-class average feature vector to obtain an intra-class inter-domain distance, and calculating the Euclidean distance between the source-domain inter-class average feature vector and the target-domain inter-class average feature vector to obtain an inter-class inter-domain distance;
Adding the intra-class inter-domain distance and the inter-class inter-domain distance to obtain a local maximum average difference value, and constructing a loss function based on the local maximum average difference value;
And performing gradient descent optimization on the initial multi-dimensional characteristics based on the loss function to obtain optimized multi-dimensional characteristics, and performing nonlinear transformation on the optimized multi-dimensional characteristics to obtain aligned multi-dimensional characteristics.
Specifically, the source-domain features are the data features used by the model during training, drawn from historical data or annotated training data. The target-domain features are feature data acquired in real time in the actual application scenario of the model, coming from data sets that are collected online or distributed differently. The initial multi-dimensional feature X is divided into source-domain features X_s = {x_1, …, x_{n_s}} and target-domain features X_t = {x_{n_s+1}, …, x_n}, where x_i ∈ R^d is the multi-dimensional feature vector of the i-th sample, n_s is the number of source-domain samples, n is the total number of samples, and d is the feature dimension. The samples of the source and target domains are each divided into intra-class and inter-class sets: an intra-class sample set contains samples of the same category, whereas an inter-class sample set contains combinations of samples from different categories. Assuming the source-domain features have C categories, the samples are grouped by their labels into the intra-class sets X_s^c = {x_i | y_i = c}, c = 1, …, C. The inter-class sample sets X_s^{c,c'} (c ≠ c') are obtained by randomly sampling across different categories. Similarly, the target-domain features X_t are divided in the same way to obtain the target-domain intra-class sets X_t^c and inter-class sets X_t^{c,c'}. The intra-class and inter-class average feature vectors of the source and target domains are then calculated. The intra-class average feature vector represents the average feature within one category and is computed as:
μ_s^c = (1 / n_s^c) Σ_{x_i ∈ X_s^c} x_i ;
where μ_s^c is the intra-class average feature vector of category c in the source domain and n_s^c is the number of source-domain samples of category c. Likewise, the intra-class average feature vector of category c in the target domain is expressed as:
μ_t^c = (1 / n_t^c) Σ_{x_j ∈ X_t^c} x_j ;
where μ_t^c is the intra-class average feature vector of category c in the target domain and n_t^c is the number of target-domain samples of category c. The inter-class average feature vector represents the average feature across different categories and is obtained by averaging over the inter-class sample set. For the source domain it is expressed as:
μ_s^{c,c'} = (1 / |X_s^{c,c'}|) Σ_{x_i ∈ X_s^{c,c'}} x_i ;
where μ_s^{c,c'} is the inter-class average feature vector between categories c and c' in the source domain. Likewise, for the target domain:
μ_t^{c,c'} = (1 / |X_t^{c,c'}|) Σ_{x_j ∈ X_t^{c,c'}} x_j ;
where μ_t^{c,c'} is the inter-class average feature vector between categories c and c' in the target domain. The Euclidean distance between the intra-class average feature vectors of the source and target domains is then calculated. The intra-class inter-domain distance is expressed as:
d_intra^c = ‖μ_s^c − μ_t^c‖₂ ;
where d_intra^c is the intra-class inter-domain distance of category c between the source and target domains, and ‖·‖₂ denotes the Euclidean distance. The intra-class inter-domain distance reflects the difference between the feature distributions of the source and target domains within the same category. Similarly, the inter-class inter-domain distance is expressed as:
d_inter^{c,c'} = ‖μ_s^{c,c'} − μ_t^{c,c'}‖₂ ;
where d_inter^{c,c'} is the inter-class inter-domain distance between categories c and c' in the source and target domains; it reflects the difference between the feature distributions of the source and target domains across different categories. Accumulating the intra-class and inter-class inter-domain distances over all categories gives the global distances:
D_intra = Σ_{c=1}^{C} d_intra^c , D_inter = Σ_{c≠c'} d_inter^{c,c'} ;
Adding the two yields the local maximum average difference value (i.e. the local maximum mean discrepancy, LMMD):
LMMD = D_intra + D_inter ;
The local maximum average difference value measures the distribution difference between the source and target domains within the same category and across different categories. To minimize this difference, a loss function is constructed based on it:
L = L_task + λ · LMMD ;
where L_task is the task loss function, e.g. the cross-entropy loss in a classification task, and λ is a balance coefficient that adjusts the relative weights of the task loss and the local maximum average difference loss. Minimizing L reduces the feature distribution difference between the source and target domains while optimizing the task objective. Gradient descent optimization is then performed on the initial multi-dimensional features based on this loss function. The gradient ∇_θ L of the loss with respect to the network parameters θ is computed by the back-propagation algorithm, and the parameters are updated continuously by gradient descent so that L is minimized. The iterative update of the optimization process is expressed as:
θ_{t+1} = θ_t − η ∇_θ L ;
where θ_t are the network parameters at the t-th iteration and η is the learning rate. After repeated iterative optimization, the optimized multi-dimensional feature F_opt is obtained. To improve the nonlinear expressiveness of the features, a nonlinear transformation is applied to F_opt; common choices include activation functions such as ReLU, Sigmoid or Tanh. The transformation is expressed as:
F_aligned = σ(F_opt) ;
where σ(·) is a nonlinear activation function, e.g. the ReLU function σ(x) = max(0, x). After the nonlinear transformation, the aligned multi-dimensional feature F_aligned is obtained, which better captures the complex feature distribution relationship between the source and target domains and thereby improves the generalization performance and adaptability of the model in practical applications.
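The intra-class and inter-class inter-domain distances described above can be sketched directly from the class means. A minimal numpy illustration, assuming the inter-class set of a pair (c, c') is approximated by pooling the samples of both categories:

```python
import numpy as np

def local_mmd(Xs, ys, Xt, yt, classes):
    """Sum of intra-class and inter-class inter-domain Euclidean
    distances between class means (illustrative LMMD sketch)."""
    d_intra = 0.0
    for c in classes:
        mu_s = Xs[ys == c].mean(axis=0)          # source intra-class mean
        mu_t = Xt[yt == c].mean(axis=0)          # target intra-class mean
        d_intra += np.linalg.norm(mu_s - mu_t)   # ||mu_s^c - mu_t^c||_2
    d_inter = 0.0
    for c in classes:
        for c2 in classes:
            if c2 == c:
                continue
            # Inter-class mean approximated over the pooled pair of classes.
            mu_s = Xs[(ys == c) | (ys == c2)].mean(axis=0)
            mu_t = Xt[(yt == c) | (yt == c2)].mean(axis=0)
            d_inter += np.linalg.norm(mu_s - mu_t)
    return d_intra + d_inter
```

When the two domains coincide, the value is zero, which matches the intuition that no alignment penalty is needed.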
In a specific embodiment, the process of executing step 104 may specifically include the following steps:
The aligned multidimensional features are used as input layers of an online network and a target network in a state input dual Q network, linear transformation and batch normalization processing are carried out through a full-connection layer, and an initial state representation is obtained through correcting a linear unit activation function;
Carrying out nonlinear transformation on the initial state representation through a multi-layer perceptron hidden layer of the online network, wherein the multi-layer perceptron hidden layer comprises a full-connection layer, a batch normalization layer and a modified linear unit activation function, so as to obtain an intermediate characteristic representation of the online network;
respectively inputting the intermediate characteristic representation of the online network into a state value function estimator and a dominance function estimator, both composed of fully connected layers, so as to obtain a scalar state value estimate and a vector dominance function estimate;
performing addition operation on the scalar state value estimation and the vector dominance function estimation, subtracting the average value of the dominance function estimation to obtain the Q value estimation of the online network, and selecting the action corresponding to the maximum Q value based on the Q value estimation of the online network to obtain a candidate action;
adding the candidate actions and Gaussian noise with zero mean value and variance exponentially attenuated along with the training process to obtain actual execution actions, applying the actual execution actions to the simulation environment of the incinerator, updating parameters of the furnace temperature, the fuel supply rate and the air supply quantity, and simulating a time step to obtain the next state and instant rewards;
inputting the next state into a target network, obtaining Q value estimation of the target network through a neural network layer with the same structure as the online network, calculating a target Q value based on instant rewards and the Q value estimation of the target network, calculating a mean square error with the Q value estimation of the online network, and obtaining a loss function value;
And performing gradient descent optimization on the online network parameters, wherein the learning rate decays exponentially along with the training process, so that the loss function value is minimized, and soft updating the online network parameters to a target network every fixed step number to obtain a target operation parameter combination.
Specifically, the aligned multi-dimensional features are input into the input layers of the online network and the target network of the dual Q network. The dual Q network consists of two independent neural networks: the online network generates the action policy, and the target network helps to update and optimize the policy of the online network. The multi-dimensional feature input into the network is denoted s ∈ R^d, where d is the feature dimension. After entering the network, the input features are linearly transformed through the fully connected layer, which linearly combines them and maps them to a higher-dimensional feature space:
h_0 = W_0 s + b_0 ;
where h_0 is the initial state representation after the linear transformation, W_0 is the weight matrix of the fully connected layer, b_0 is the bias vector, and m is the hidden layer size. To prevent differences in feature scale from disturbing the training process, batch normalization is applied to the features:
ĥ_0 = (h_0 − μ_B) / sqrt(σ_B² + ε) ;
where ĥ_0 is the normalized initial state representation, μ_B and σ_B² are the mean and variance of the current batch features, and ε is a small constant that prevents the denominator from being zero. After normalization, a rectified linear unit (ReLU) activation introduces nonlinearity so that the network has stronger expressive power; the ReLU is given by:
ReLU(x) = max(0, x) ;
yielding the initial state representation h'_0 = ReLU(ĥ_0). The initial state representation then undergoes a nonlinear transformation through the multi-layer perceptron hidden layers of the online network, which comprise fully connected layers, batch normalization layers and ReLU activation functions. Assuming two hidden layers are used, the computation of the l-th hidden layer is expressed as:
z_l = W_l h_{l−1} + b_l ;
ẑ_l = BN(z_l) ;
h_l = ReLU(ẑ_l) ;
After the hidden-layer transformations of the multi-layer perceptron, the intermediate feature representation h_L of the online network is obtained, where L is the index of the last hidden layer. The intermediate feature representation is input separately to a state value function estimator and a dominance (advantage) function estimator: the former estimates the overall value of the current state, the latter the relative advantage of taking each action in that state. The state value function is estimated through a fully connected layer:
V(s) = W_V h_L + b_V ;
where V(s) is the state value function, W_V is the weight matrix of the state value function estimator and b_V its bias. The dominance function is likewise estimated by a fully connected layer:
A(s, a) = W_A h_L + b_A ;
where A(s, a) is the dominance function, and W_A and b_A are the weight and bias of the dominance function estimator. The state value function V(s) and the dominance function A(s, a) are added, and the mean of the dominance function is subtracted, to obtain the final Q-value estimate:
Q(s, a) = V(s) + A(s, a) − (1/|A|) Σ_{a'} A(s, a') ;
where Q(s, a) is the Q-value estimate, i.e. the expected return obtainable by selecting action a in state s. After the Q-value estimate of each action is calculated, the action with the largest Q-value is selected as the candidate action:
a* = argmax_a Q(s, a) ;
To maintain exploration, Gaussian noise ε_t with zero mean and a variance that decays gradually during training is added to the candidate action a*, giving the actually executed action:
a_exec = a* + ε_t , ε_t ~ N(0, σ_t²) ;
As training proceeds, the noise variance σ_t² tapers off, allowing the network to make more use of its learned policy. The executed action a_exec is applied in the simulation environment of the incinerator, the operating parameters of the incinerator (such as the furnace temperature, fuel supply rate and air supply amount) are updated, and one time step is simulated. The simulation environment yields the next state s' and the immediate reward r, which measures the influence of the current action on the operating efficiency and stability of the incinerator. The new state s' is input to the target network, which has the same structure as the online network and is used to compute the target Q-value based on the Bellman equation:
y = r + γ max_{a'} Q_target(s', a') ;
where γ is the discount factor representing the discount rate of future rewards. The target Q-value is compared with the Q-value of the online network to compute the loss function:
L(θ) = E[(y − Q(s, a; θ))²] ;
where θ are the parameters of the online network, which are updated by gradient descent:
θ ← θ − η ∇_θ L(θ) ;
where η is the learning rate, representing the step size of the parameter update. To maintain the consistency of the target network with the online network, the online parameters are soft-updated to the target network every fixed number of steps:
θ⁻ ← τ θ + (1 − τ) θ⁻ ;
where τ is the soft-update coefficient, typically a small value (e.g. 0.01), used to update the parameters of the target network smoothly. Through the above process, the dual Q network learns the optimal operating parameter combination of the incinerator, such as the furnace temperature, fuel supply rate and air supply amount. This strategy not only optimizes the operating efficiency of the incinerator but also ensures the stability and safety of the combustion process. For example, in practical applications the dual Q network continuously adjusts the ratio of fuel supply to air supply so that the incinerator maintains optimal combustion conditions under different loads.
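The dueling Q-value combination, the noisy action selection, and the soft target update above can each be written in a few lines of numpy. A minimal sketch, treating actions as a discretized set so that the dominance vector has one entry per action:

```python
import numpy as np

def dueling_q(v, adv):
    # Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')
    return v + adv - adv.mean()

def select_action(q_values, sigma, rng):
    # Greedy candidate plus zero-mean Gaussian exploration noise.
    a_star = int(np.argmax(q_values))
    return a_star + rng.normal(0.0, sigma)

def soft_update(theta_online, theta_target, tau=0.01):
    # theta_target <- tau * theta_online + (1 - tau) * theta_target
    return tau * theta_online + (1.0 - tau) * theta_target
```

Note that subtracting the mean makes the decomposition identifiable: shifting every dominance value by a constant leaves Q(s, a) unchanged.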
In a specific embodiment, the process of executing step 105 may specifically include the following steps:
S1, mapping furnace temperature distribution and speed field parameters in a target operation parameter combination to a complex plane to obtain an initial state complex vector z;
S2, performing iterative computation on the initial state complex vector z by applying a two-dimensional coupled complex chaotic mapping function f(z) = a·conj(z) + b·z + c·z² + d, wherein a, b, c, d are preset complex parameters and conj(z) is the complex conjugate of the initial state complex vector z, to obtain the state complex vector at the next moment;
S3, calculating Euclidean distance between the state complex vector and the initial state complex vector at the next moment to obtain a state change quantity, comparing the state change quantity with a preset threshold value, and judging whether a convergence condition is met;
S4, if the convergence condition is not met, taking the complex vector of the next time state as a new initial state, and repeatedly executing the steps S2 to S4 until the convergence condition is met or the maximum iteration number N is reached, so as to obtain the complex vector of the target state;
S5, reversely mapping the complex vector of the target state, and mapping the state on the complex plane back to a physical space to obtain predicted furnace temperature distribution and speed field;
S6, calculating Lyapunov indexes corresponding to the predicted furnace temperature distribution and the speed field to obtain a system stability index, and generating a dynamic behavior prediction result of the incinerator according to the predicted furnace temperature distribution, the speed field and the system stability index, wherein the dynamic behavior prediction result comprises a temperature fluctuation trend, combustion stability assessment and potential risk early warning.
Specifically, the physical parameters are first converted into complex-vector form. The target operating parameter combination of the incinerator comprises the furnace temperature distribution and the velocity field parameters, recorded as T = (T_1, …, T_n) and v = (v_1, …, v_n), where T_j is the temperature of the j-th measuring point and v_j is the velocity at the j-th measuring point. To map these parameters to the complex plane, temperature and velocity are combined into a complex vector z = (z_1, …, z_n), where each complex number z_j = T_j + i·v_j and i denotes the imaginary unit. This gives the initial state complex vector z^(0). The two-dimensional coupled complex chaotic mapping function is then applied iteratively to z^(0). The chaotic mapping function is expressed as:
f(z) = a·conj(z) + b·z + c·z² + d ;
where a, b, c, d are preset complex parameters that determine the dynamic characteristics of the chaotic map, and conj(z) denotes the complex conjugate of z, introducing the nonlinear coupling effect. Each complex component z_j^(t) of the current state complex vector passes through the mapping function f to obtain the state complex vector at the next moment, computed as:
z_j^(t+1) = f(z_j^(t)) = a·conj(z_j^(t)) + b·z_j^(t) + c·(z_j^(t))² + d ;
This iterative process reflects the dynamic evolution of the furnace temperature and velocity field on the complex plane. Owing to the parameters a, b, c, d, the map exhibits complex chaotic behavior that captures the nonlinear dynamic characteristics of temperature and airflow changes inside the furnace. The Euclidean distance between the next-moment state complex vector z^(t+1) and the current state complex vector z^(t) is computed to measure the state change:
Δ = ‖z^(t+1) − z^(t)‖ = sqrt( Σ_{j=1}^{n} |z_j^(t+1) − z_j^(t)|² ) ;
where |z_j^(t+1) − z_j^(t)| is the modulus of the difference of the j-th complex component. The computed state change Δ represents the magnitude of the overall offset of the complex vector in the complex plane. Δ is compared with a preset threshold δ to judge whether the convergence condition is satisfied: if Δ is greater than the threshold, the iteration must continue. In that case the next-moment state complex vector z^(t+1) is taken as the new initial state complex vector, and the iterative process of the chaotic mapping function is repeated, i.e. the procedure returns to step S2. Iteration continues until the state change satisfies the convergence condition or the preset maximum number of iterations N is reached; this process captures the long-term behavioral characteristics of the system and yields the target state complex vector z*. The target state complex vector z* is then reverse-mapped and converted back to physical space. Each complex component z_j* contains the predicted furnace temperature and velocity field information, namely:
T_j^pred = Re(z_j*), v_j^pred = Im(z_j*) ;
where the real part Re(z_j*) corresponds to the predicted furnace temperature and the imaginary part Im(z_j*) to the predicted velocity. Through this reverse-mapping process, the predicted values of the furnace temperature distribution and velocity field in physical space corresponding to the target state complex vector on the complex plane are obtained. Based on the predicted furnace temperature distribution and velocity field, the corresponding Lyapunov exponent is calculated. The Lyapunov exponent measures the sensitivity of a dynamic system to its initial conditions and reflects the chaos and stability of the system. For the predicted state, the Lyapunov exponent is calculated as:
λ = (1/N) Σ_{t=0}^{N−1} ln |f'(z^(t))| ;
where λ is the Lyapunov exponent, N is the number of iterations, and f'(z^(t)) is the derivative of the mapping function at each iteration point. A positive Lyapunov exponent indicates that the system is highly sensitive to initial conditions, the predicted trajectories diverge exponentially and the system is in a chaotic state; a negative Lyapunov exponent indicates that the system tends toward stability. Analyzing the Lyapunov exponent yields a stability index of the system, from which the operating state of the incinerator is judged. A dynamic behavior prediction result of the incinerator is then generated from the predicted furnace temperature distribution, the velocity field and the system stability index. The dynamic behavior prediction result comprises the temperature fluctuation trend, a combustion stability assessment and a potential risk early warning: the temperature fluctuation trend reflects how the temperature in the furnace changes over time and helps judge the stability of the combustion process; the combustion stability assessment, combined with the velocity field information, evaluates whether combustion in the furnace is uniform and whether a bias-flow phenomenon exists; and the potential risk early warning, based on the Lyapunov exponent and historical data, warns of conditions such as abnormal combustion or excessively high or low temperature that may occur in the future.
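The iterate-until-converged loop and the Lyapunov estimate above can be sketched with Python's native complex numbers. This is a sketch under two stated assumptions: the map takes the form f(z) = a·conj(z) + b·z + c·z² + d reconstructed from the description, and (since the conj term is not complex-differentiable) the derivative in the Lyapunov sum uses only the analytic part, f'(z) ≈ b + 2cz.

```python
import numpy as np

def chaotic_step(z, a, b, c, d):
    # f(z) = a*conj(z) + b*z + c*z**2 + d  (form assumed from the description)
    return a * np.conj(z) + b * z + c * z**2 + d

def iterate_until_converged(z0, params, delta=1e-6, n_max=500):
    """Steps S2-S4: iterate the map until the state change ||z' - z||
    falls below the threshold delta, or n_max iterations elapse."""
    z = np.asarray(z0, dtype=complex)
    for _ in range(n_max):
        z_next = chaotic_step(z, *params)
        if np.linalg.norm(z_next - z) <= delta:
            return z_next
        z = z_next
    return z

def lyapunov(z0, params, n=200):
    # lambda ~ (1/N) sum ln |f'(z_t)|, analytic-part derivative b + 2cz.
    a, b, c, d = params
    z = complex(z0)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(b + 2 * c * z) + 1e-12)
        z = chaotic_step(z, a, b, c, d)
    return total / n
```

With a contracting parameter choice (e.g. a = c = d = 0, b = 0.5) the orbit converges to zero and the exponent is negative, consistent with the stability interpretation in the text.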
In a specific embodiment, the process of executing step 106 may specifically include the following steps:
Extracting the temperature fluctuation trend, combustion stability assessment and potential risk early warning information of each target part from dynamic behavior prediction results to obtain a target part state feature matrix;
Fusing the state characteristic matrix of the target part with the historical operation data, and constructing a time sequence sample by adopting a sliding window method to obtain a fused time sequence characteristic sequence;
inputting the fused time sequence characteristic sequences into a long-short-period memory network, and predicting the residual service life of each target part to obtain the residual service life predicted value of each target part;
constructing a multi-objective optimization problem based on the residual service life predicted value of each objective part, wherein an objective function comprises equipment reliability, maintenance cost and production efficiency, and an initial optimization model is obtained;
solving the initial optimization model through a genetic algorithm to obtain a pareto optimal solution set, and selecting a compromise solution from the pareto optimal solution set to obtain a preliminary maintenance scheme;
And carrying out coordination optimization on the preliminary maintenance scheme and the production plan of the incinerator, and adjusting a maintenance time window to obtain a target maintenance scheme, wherein the target maintenance scheme comprises maintenance time, required spare parts and operation steps.
Specifically, the operating state of each target part of the incinerator is analyzed. The dynamic behavior prediction results are based on the simulation and prediction of the complex dynamic behavior inside the incinerator by the preceding model, and include the temperature fluctuation trend of different parts of the furnace, the stability of the combustion process and potential risk factors. For each target part, three key pieces of characteristic information are extracted: the temperature fluctuation trend T_i, the combustion stability assessment S_i and the potential risk early warning R_i. This information is represented as the state feature vector f_i = (T_i, S_i, R_i) of the target part, where T_i denotes the temperature fluctuation characteristic of the i-th target part, S_i its combustion stability assessment result, and R_i its potential risk early warning information. Combining the state feature vectors of all target parts yields the target part state feature matrix F, where m is the number of target parts. The state feature matrix is then fused with the historical operating data, which contain information such as operating conditions, operating parameters and maintenance records of the incinerator over different periods. The state feature matrix F is fused with the historical operating data H, and time series samples are constructed by the sliding window method, which slices the time series data into segments of fixed length to form multiple time window samples. With window length w and sliding step size s, the k-th time sample is expressed as:
S_k = [(F_{t_k}, H_{t_k}), (F_{t_k+1}, H_{t_k+1}), …, (F_{t_k+w−1}, H_{t_k+w−1})] ;
where S_k denotes the fused time series feature sequence starting at time t_k, and F_t and H_t denote the values of the target part state features and the historical operating data at time t, respectively. In this way a set of fused time series feature sequences is obtained that contains the current state feature information of the target parts and incorporates the dynamic change laws of the historical data. The fused time series feature sequences are input into a long short-term memory network (LSTM) to predict the remaining useful life (RUL) of each target part. The LSTM is a recurrent neural network capable of handling long-term dependencies and is therefore suitable for modeling and predicting time series data. With input x_t, the LSTM outputs the RUL prediction ŷ_i of each target part, where ŷ_i denotes the RUL prediction of the i-th target part. The computation of the LSTM is expressed as:
f_t = σ(W_f [h_{t−1}, x_t] + b_f) ;
i_t = σ(W_i [h_{t−1}, x_t] + b_i), o_t = σ(W_o [h_{t−1}, x_t] + b_o) ;
C_t = f_t ⊙ C_{t−1} + i_t ⊙ tanh(W_C [h_{t−1}, x_t] + b_C) ;
h_t = o_t ⊙ tanh(C_t) ;
where h_t denotes the hidden state at the current moment, C_t the memory cell state, f_t, i_t and o_t the activation values of the forget gate, input gate and output gate respectively, W_f, W_i, W_o, W_C the weight matrices of the LSTM, b_f, b_i, b_o, b_C the bias terms, σ the Sigmoid activation function, and ⊙ the element-wise product. Through the multi-layer recursive processing of the LSTM network, the complex dynamic relationships in the time series features are captured, so that the remaining useful life of each target part is predicted accurately. A multi-objective optimization problem is then constructed based on the RUL prediction of each target part. The objective functions of the optimization problem include equipment reliability, maintenance cost and production efficiency. Equipment reliability is expressed as the weighted sum of the remaining useful lives of all parts:
R = Σ_{i=1}^{m} w_i · ŷ_i ;
where R denotes the overall reliability of the equipment and w_i is the importance weight of the i-th target part. Maintenance cost is expressed as the total cost of the maintenance activities, including spare part replacement, labor costs, downtime losses and so on:
C = Σ_{i=1}^{m} c_i · cost_i ;
where C denotes the total maintenance cost, c_i the maintenance cost weight of the i-th part, and cost_i the actual cost of each maintenance action. Production efficiency is related to the downtime of the equipment:
E = (T_total − T_down) / T_total ;
where E denotes the production efficiency, T_total the total production time and T_down the downtime. The multi-objective optimization problem is expressed as:
max R, min C, max E ;
In the multi-objective optimization model, the objective is to minimize maintenance cost while maximizing equipment reliability and production efficiency. A genetic algorithm is adopted to solve this multi-objective problem: it searches for optimal solutions in the solution space by simulating natural selection and biological evolution. Each individual of the genetic algorithm encodes a maintenance strategy, and the fitness function is computed from a weighted sum of maintenance cost, equipment reliability and production efficiency. The basic steps of the genetic algorithm comprise selection, crossover and mutation, and the pareto optimal solution set is obtained through multi-generation evolution. A compromise solution is selected from the pareto optimal solution set as the preliminary maintenance scheme; the choice is based on the decision maker's preference weights for the different objectives, finding an optimal balance among maintenance cost, equipment reliability and production efficiency. The preliminary maintenance scheme is then coordinated and optimized with the production plan of the incinerator. The production plan typically includes constraints such as the production cycle, equipment utilization and order requirements. By considering the preliminary maintenance scheme and the production plan together and adjusting the maintenance time window, the maintenance activities can guarantee the normal progress of production tasks while reducing maintenance cost and improving the operating reliability of the equipment. The resulting target maintenance scheme includes the specific maintenance schedule, the required spare parts and detailed operation steps.
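The sliding-window construction and the scalarized fitness that a genetic algorithm would evaluate can be sketched briefly. This is illustrative only: the window slicer assumes an already-fused (T, d) feature sequence, and the fitness collapses the three objectives with hypothetical preference weights rather than performing true pareto ranking.

```python
import numpy as np

def sliding_windows(series, w, s):
    """Slice a (T, d) fused feature sequence into windows of length w
    with step s, as in the sliding window method described above."""
    return [series[k:k + w] for k in range(0, len(series) - w + 1, s)]

def evaluate_plan(rul, weights, costs, t_total, t_down, prefs=(1.0, 1.0, 1.0)):
    """Scalarized fitness: maximize reliability R and efficiency E,
    minimize cost C (preference weights `prefs` are assumptions)."""
    R = float(np.dot(weights, rul))       # R = sum_i w_i * RUL_i
    C = float(np.sum(costs))              # total maintenance cost
    E = (t_total - t_down) / t_total      # production efficiency
    a, b, c = prefs
    return a * R - b * C + c * E
```

A GA would evolve candidate maintenance schedules, score each with such a fitness, and keep the non-dominated ones as the pareto set from which a compromise solution is picked.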
The maintenance schedule specifies when to perform shutdown maintenance and equipment repair; the required spare parts are a list of parts that may need to be replaced at each target site; and the operation steps are the specific procedures for actually carrying out the maintenance work.
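The weighted-sum fitness and evolutionary loop described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the objective functions, weights, bounds and genetic operators are all hypothetical stand-ins for quantities that in practice come from plant data.

```python
import random

# Hypothetical objectives for a maintenance interval x (days); the real
# objectives in the patent depend on measured plant data.
def cost(x):        return 100.0 / x + 2.0 * x    # maintenance cost, minimized
def reliability(x): return 1.0 / (1.0 + 0.01 * x) # equipment reliability, maximized
def efficiency(x):  return 1.0 - 0.005 * x        # production efficiency, maximized

def fitness(x, w=(0.4, 0.3, 0.3)):
    # Weighted sum: negate cost so that larger fitness is always better.
    return -w[0] * cost(x) + w[1] * reliability(x) + w[2] * efficiency(x)

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(1.0, 60.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half of the population by fitness.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)         # crossover: arithmetic mean
            child += rng.gauss(0.0, 1.0)  # mutation: Gaussian perturbation
            children.append(min(max(child, 1.0), 60.0))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

A full implementation would retain the whole pareto front (e.g. via non-dominated sorting) rather than a single weighted-sum optimum, and pick the compromise solution from that front using the decision maker's preference weights.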
The intelligent diagnosis and maintenance method for an incinerator according to the embodiment of the present invention is described above, and the intelligent diagnosis and maintenance device for an incinerator according to the embodiment of the present invention is described below, referring to fig. 2, and one embodiment of the intelligent diagnosis and maintenance device for an incinerator according to the embodiment of the present invention includes:
The acquisition module 201 is used for acquiring original data sets of a plurality of target parts of the incinerator through sensors, and preprocessing the original data sets to obtain preprocessed data sets;
The extraction module 202 is configured to input the preprocessed data set into a preset residual network to perform multi-scale feature extraction, so as to obtain an initial multi-dimensional feature;
The computing module 203 is configured to perform local maximum average difference computation on the initial multi-dimensional feature, and implement feature distribution alignment of the source domain and the target domain, so as to obtain an aligned multi-dimensional feature;
The learning module 204 is configured to input the aligned multidimensional feature into a dual Q network, learn an optimal incineration parameter combination, and obtain a target operation parameter combination;
the prediction module 205 is configured to perform iterative prediction by adopting two-dimensional coupled complex chaotic mapping based on a target operation parameter combination, so as to obtain a dynamic behavior prediction result of the incinerator;
And the analysis module 206 is configured to predict the remaining service lives of the multiple target locations and perform global dynamic adaptive maintenance analysis according to the dynamic behavior prediction result, so as to obtain a target maintenance scheme.
Through the cooperation of the above components, the raw data of a plurality of target parts of the incinerator are collected through the multi-source heterogeneous sensor network and preprocessed, improving the comprehensiveness and quality of the data. The improved residual network is used for multi-scale feature extraction, so that feature information of the incinerator at different scales can be effectively captured, improving the richness and accuracy of the feature representation. Local maximum average difference calculation is introduced to align the feature distributions of the source domain and the target domain, which addresses the problem of data distribution shift in cross-working-condition diagnosis and improves the generalization capability of the model. The dual Q network learns the optimal incineration parameter combination, realizing intelligent optimization of the operating parameters of the incinerator and improving incineration efficiency and environmental performance. Iterative prediction with the two-dimensional coupled complex chaotic mapping accurately simulates the complex dynamic behavior of the incinerator; combined with the long short-term memory network and the multi-objective optimization algorithm, the remaining service life of key parts of the incinerator is predicted and global dynamic adaptive maintenance analysis is performed. By coordinating and optimizing the maintenance scheme with the production plan, maintenance activities are scheduled flexibly, the influence of maintenance on production is minimized, and the overall operating efficiency of the incinerator is improved.
The intelligent diagnosis and maintenance apparatus for an incinerator in the embodiment of the present invention is described in detail above in terms of modularized functional entities with reference to fig. 2; the intelligent diagnosis and maintenance device for an incinerator in the embodiment of the present invention is described in detail below in terms of hardware processing.
Fig. 3 is a schematic structural diagram of an incinerator intelligent diagnosis and maintenance apparatus 300 according to an embodiment of the present invention. The incinerator intelligent diagnosis and maintenance apparatus 300 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPU) 310 (e.g., one or more processors), a memory 320, and one or more storage media 330 (e.g., one or more mass storage devices) storing application programs 333 or data 332. The memory 320 and the storage medium 330 may be transitory or persistent storage. The program stored in the storage medium 330 may include one or more modules (not shown), each of which may include a series of instruction operations on the incinerator intelligent diagnosis and maintenance apparatus 300. Still further, the processor 310 may be configured to communicate with the storage medium 330 and execute the series of instruction operations in the storage medium 330 on the incinerator intelligent diagnosis and maintenance apparatus 300 to implement the steps of the incinerator intelligent diagnosis and maintenance method described above.
The incinerator intelligent diagnosis and maintenance apparatus 300 may also include one or more power supplies 340, one or more wired or wireless network interfaces 350, one or more input/output interfaces 360, and/or one or more operating systems 331, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the configuration shown in fig. 3 does not constitute a limitation of the intelligent diagnosis and maintenance apparatus for an incinerator provided by the present invention, which may include more or fewer components than those illustrated, may combine certain components, or may have a different arrangement of components.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, or may be a volatile computer readable storage medium, where instructions are stored in the computer readable storage medium, and when the instructions are run on a computer, the computer is caused to perform the steps of the incinerator intelligent diagnosis and maintenance method.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
While the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the invention.
Claims (8)
1. An intelligent diagnosis and maintenance method for an incinerator, which is characterized by comprising the following steps:
Collecting original data sets of a plurality of target parts of the incinerator through a sensor, and preprocessing the original data sets to obtain preprocessed data sets;
Inputting the preprocessing data set into a preset residual error network to perform multi-scale feature extraction to obtain initial multi-dimensional features;
Carrying out local maximum average difference calculation on the initial multidimensional feature to realize the feature distribution alignment of a source domain and a target domain, and obtaining an aligned multidimensional feature;
inputting the aligned multidimensional features into a dual Q network, and learning an optimal incineration parameter combination to obtain a target operation parameter combination;
Based on the target operation parameter combination, performing iterative prediction by adopting two-dimensional coupled complex chaotic mapping to obtain a dynamic behavior prediction result of the incinerator, specifically comprising: S1, mapping furnace temperature distribution and velocity field parameters in the target operation parameter combination to a complex plane to obtain an initial state complex vector z; S2, applying a two-dimensional coupled complex chaotic mapping function f(z) = a + b·conj(z) + c·z² + d to the initial state complex vector z for iterative calculation, wherein a, b, c, d are preset complex parameters and conj(z) is the complex conjugate of the initial state complex vector z, to obtain a next-moment state complex vector; S3, calculating the Euclidean distance between the next-moment state complex vector and the initial state complex vector to obtain a state variation, and comparing the state variation with a preset threshold to judge whether a convergence condition is met; S4, if the convergence condition is not met, taking the next-moment state complex vector as a new initial state complex vector and repeatedly executing steps S2 to S3 until the convergence condition is met or a maximum number of iterations N is reached, the state complex vector at that time being taken as a target state complex vector; S5, performing inverse mapping on the target state complex vector, mapping the state on the complex plane back to physical space, to obtain a predicted furnace temperature distribution and a predicted velocity field; S6, calculating a system stability index according to the predicted furnace temperature distribution and the predicted velocity field, and generating the dynamic behavior prediction result of the incinerator, wherein the dynamic behavior prediction result comprises a temperature fluctuation trend of each target part, combustion stability assessment and potential risk early warning;
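Steps S1 to S5 above can be sketched as a fixed-point iteration on the complex plane. The map form f(z) = a + b·conj(z) + c·z² + d follows the claim wording; the parameter values, the threshold and the temperature/velocity encoding below are illustrative assumptions chosen so this toy example converges.

```python
# Sketch of S1-S5: iterate the coupled complex map until the state change
# falls below a threshold or the iteration budget N is exhausted.
# The parameters a, b, c, d here are arbitrary illustrative choices.

def chaotic_map(z, a, b, c, d):
    # f(z) = a + b*conj(z) + c*z**2 + d, with conj(z) the complex conjugate
    return a + b * z.conjugate() + c * z * z + d

def iterate_to_convergence(z0, a, b, c, d, eps=1e-9, max_iter=200):
    z = z0
    for n in range(max_iter):
        z_next = chaotic_map(z, a, b, c, d)
        # S3: Euclidean distance on the complex plane is |z_next - z|
        if abs(z_next - z) < eps:
            return z_next, n + 1   # converged: target state complex vector
        z = z_next                 # S4: next state becomes the new initial state
    return z, max_iter

# S1 (stand-in): encode a temperature/velocity pair as one complex state.
z0 = complex(0.1, 0.05)
# Contractive parameters so the example actually converges.
z_star, steps = iterate_to_convergence(z0, a=0.05 + 0j, b=0.3 + 0j, c=0.1 + 0j, d=0j)
```

With other parameter choices the same map exhibits chaotic rather than convergent behavior, which is why the claim bounds the loop by a maximum iteration count N.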
The method further comprises the steps of: extracting the temperature fluctuation trend, combustion stability assessment and potential risk early warning information of each target part from the dynamic behavior prediction result to obtain a target part state feature matrix; fusing the target part state feature matrix with historical operation data, and constructing time-series samples by a sliding window method to obtain a fused time-series feature sequence; inputting the fused time-series feature sequence into a long short-term memory network, and predicting the remaining service life of each target part to obtain a remaining-service-life prediction value of each target part; constructing a multi-objective optimization problem based on the remaining-service-life prediction values of each target part to obtain an initial optimization model, and solving the initial optimization model through a genetic algorithm to obtain a pareto optimal solution set; selecting a compromise solution from the pareto optimal solution set to obtain a preliminary maintenance scheme; and coordinating and optimizing the preliminary maintenance scheme with a production plan of the incinerator, and adjusting a maintenance time window to obtain a target maintenance scheme, wherein the target maintenance scheme comprises a specific maintenance schedule, required spare parts and detailed operation steps.
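The sliding-window construction of time-series samples mentioned above can be sketched as follows; the window length and the scalar health indicator are illustrative assumptions, and the long short-term memory network that consumes the samples is omitted.

```python
def sliding_windows(series, window, step=1):
    """Cut a feature sequence into fixed-length training samples.

    Each sample pairs `window` consecutive feature values with the value
    that follows them - the usual supervised setup for an LSTM-based
    remaining-service-life model.
    """
    samples = []
    for start in range(0, len(series) - window, step):
        x = series[start:start + window]  # input sub-sequence
        y = series[start + window]        # next-step target
        samples.append((x, y))
    return samples

# Toy fused feature sequence (e.g. one scalar health indicator per hour).
seq = [0.9, 0.85, 0.8, 0.74, 0.7, 0.66, 0.6, 0.55]
pairs = sliding_windows(seq, window=3)
```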
2. The intelligent diagnosis and maintenance method of an incinerator according to claim 1, wherein the steps of collecting raw data sets of a plurality of target parts of the incinerator by a sensor, and preprocessing the raw data sets to obtain preprocessed data sets, include:
installing a temperature sensor, a pressure sensor, a gas component analyzer, a flowmeter and a vibration sensor on the plurality of target positions to obtain a multi-source heterogeneous sensor network;
Acquiring furnace temperature distribution, smoke pressure, smoke components, fuel supply quantity, air supply quantity and equipment vibration data in the operation process of the incinerator based on the multi-source heterogeneous sensor network to obtain an original data set;
Performing wavelet transformation denoising processing on the original data set to obtain denoised multi-dimensional time sequence data, and performing normalization processing on the denoised multi-dimensional time sequence data to obtain normalized multi-dimensional time sequence data;
Performing outlier detection on the normalized multi-dimensional time sequence data to obtain outlier marking results, and performing outlier rejection on the normalized multi-dimensional time sequence data according to the outlier marking results to obtain cleaned multi-dimensional time sequence data;
and performing time window segmentation on the cleaned multi-dimensional time sequence data to obtain data fragments with fixed lengths, and storing the data fragments with the fixed lengths into a time sequence database to obtain a preprocessing data set.
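The normalization, outlier rejection and time-window segmentation steps of claim 2 can be sketched as below. Wavelet denoising is omitted (in practice it would be applied first, e.g. with a wavelet library); the spike value, the 2-sigma cut-off and the window length are illustrative assumptions.

```python
from statistics import mean, pstdev

def reject_outliers(xs, k=2.0):
    # Drop values lying more than k standard deviations from the mean.
    m, s = mean(xs), pstdev(xs)
    return [x for x in xs if abs(x - m) <= k * s]

def normalize(xs):
    # Min-max normalization to [0, 1].
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def segment(xs, length):
    # Fixed-length, non-overlapping time windows; an incomplete tail is dropped.
    return [xs[i:i + length] for i in range(0, len(xs) - length + 1, length)]

raw = [20.1, 20.3, 19.9, 20.0, 95.0, 20.2, 20.1, 19.8, 20.0]  # 95.0 is a sensor spike
clean = reject_outliers(raw)
windows = segment(normalize(clean), length=4)
```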
3. The intelligent diagnosis and maintenance method of incinerator according to claim 1, wherein said inputting the preprocessed data set into a preset residual network for multi-scale feature extraction, obtaining initial multi-dimensional features, comprises:
Inputting the preprocessed data set into a first-layer multi-scale convolution structure of a preset residual network, wherein the multi-scale convolution structure comprises convolution kernels of three scales, 1x1, 3x3 and 5x5, and performing convolution operations on the preprocessed data set to obtain three groups of feature maps of different scales;
respectively performing batch normalization and rectified linear unit activation on the three groups of feature maps of different scales to obtain three groups of normalized and activated feature maps;
inputting the three groups of normalized and activated feature maps into a feature fusion module, calculating a weight coefficient for each normalized and activated feature map by a self-attention mechanism, and performing weighted summation according to the weight coefficients to obtain a multi-scale fused feature map;
transmitting the multi-scale fused feature map to deep layers of the network through residual connections, and performing element-wise addition of the multi-scale fused feature map and the deep features to obtain a fused deep feature map;
processing the fused deep feature map with stacked residual blocks, wherein each residual block comprises two 3x3 convolution layers, two batch normalization layers, two rectified linear unit activation function layers and one shortcut connection, and performing repeated residual learning to obtain a residual-processed feature map;
and inputting the residual-processed feature map into a global average pooling layer, averaging the feature map of each channel over the spatial dimensions to obtain a fixed-dimension channel-level vector representation, inputting the fixed-dimension channel-level vector representation into a fully-connected layer, and performing linear transformation and nonlinear activation to obtain the initial multi-dimensional feature.
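A deliberately scaled-down, one-dimensional sketch of the multi-scale extraction in claim 3: three convolution branches with kernel sizes 1/3/5, an attention-style weighted fusion, a residual addition, and global average pooling. The kernel weights and the attention score (mean branch activation) are illustrative stand-ins for the learned parameters of the real network.

```python
import math

def conv1d(xs, kernel):
    # 'Same'-padded 1-D convolution (correlation), standing in for the
    # 1x1/3x3/5x5 2-D convolutions of the claim.
    pad = len(kernel) // 2
    padded = [0.0] * pad + xs + [0.0] * pad
    return [sum(padded[i + j] * k for j, k in enumerate(kernel))
            for i in range(len(xs))]

def softmax(vals):
    exps = [math.exp(v) for v in vals]
    s = sum(exps)
    return [e / s for e in exps]

def multiscale_block(xs):
    # Three branches with kernel sizes 1, 3, 5 (weights are placeholders).
    branches = [
        conv1d(xs, [1.0]),
        conv1d(xs, [0.25, 0.5, 0.25]),
        conv1d(xs, [0.1, 0.2, 0.4, 0.2, 0.1]),
    ]
    # Attention stand-in: weight each branch by its mean activation.
    weights = softmax([sum(b) / len(b) for b in branches])
    fused = [sum(w * b[i] for w, b in zip(weights, branches))
             for i in range(len(xs))]
    # Residual connection: element-wise addition with the input.
    out = [f + x for f, x in zip(fused, xs)]
    # Global average pooling over the spatial dimension.
    return sum(out) / len(out)

feature = multiscale_block([0.2, 0.4, 0.6, 0.8, 1.0])
```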
4. The intelligent diagnosis and maintenance method of an incinerator according to claim 1, wherein the calculating the local maximum average difference of the initial multidimensional feature to align the feature distribution of the source domain and the target domain, and obtaining the aligned multidimensional feature comprises:
Dividing the initial multidimensional feature into a source domain feature and a target domain feature, and respectively carrying out intra-class and inter-class sample division on the source domain feature and the target domain feature to obtain an intra-class sample set and an inter-class sample set of the source domain and the target domain;
respectively calculating intra-class average feature vectors for the intra-class sample sets of the source domain and the target domain to obtain intra-class average feature vectors of the source domain and intra-class average feature vectors of the target domain, and respectively calculating inter-class average feature vectors for the inter-class sample sets of the source domain and the target domain to obtain inter-class average feature vectors of the source domain and inter-class average feature vectors of the target domain;
calculating the Euclidean distance between the source-domain intra-class average feature vector and the target-domain intra-class average feature vector to obtain an intra-class inter-domain distance, and calculating the Euclidean distance between the source-domain inter-class average feature vector and the target-domain inter-class average feature vector to obtain an inter-class inter-domain distance;
adding the intra-class inter-domain distance and the inter-class inter-domain distance to obtain a local maximum average difference value, and constructing a loss function based on the local maximum average difference value;
And performing gradient descent optimization on the initial multi-dimensional feature based on the loss function to obtain an optimized multi-dimensional feature, and performing nonlinear transformation on the optimized multi-dimensional feature to obtain an aligned multi-dimensional feature.
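A simplified sketch of the per-class distribution distance in claim 4: for each shared class, the Euclidean distance between the source and target class-mean feature vectors is computed and summed. A full local maximum average difference would also include the inter-class terms (and typically kernel embeddings); the toy feature vectors below are illustrative.

```python
def mean_vec(vectors):
    # Component-wise mean of a list of equal-length feature vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def local_mmd(src_by_class, tgt_by_class):
    """Sum over classes of the distance between source and target
    class-mean feature vectors - the intra-class inter-domain part
    of the alignment loss in claim 4."""
    return sum(euclidean(mean_vec(src_by_class[c]), mean_vec(tgt_by_class[c]))
               for c in src_by_class)

# Toy 2-D features for two classes in each domain.
src = {0: [[0.0, 0.0], [0.2, 0.0]], 1: [[1.0, 1.0], [1.2, 1.0]]}
tgt = {0: [[0.1, 0.1], [0.3, 0.1]], 1: [[1.1, 1.1], [1.3, 1.1]]}
loss = local_mmd(src, tgt)
```

Minimizing this quantity by gradient descent pulls the per-class feature distributions of the two domains together, which is the alignment the claim describes.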
5. The intelligent diagnosis and maintenance method of incinerator according to claim 1, wherein said inputting the aligned multidimensional feature into a dual Q network, learning an optimal incineration parameter combination, obtaining a target operation parameter combination, comprises:
inputting the aligned multi-dimensional feature as a state into the input layers of an online network and a target network of the dual Q network, performing linear transformation and batch normalization through a fully-connected layer, and obtaining an initial state representation through a rectified linear unit activation function;
performing nonlinear transformation on the initial state representation through a multi-layer perceptron hidden layer of the online network, wherein the multi-layer perceptron hidden layer comprises a fully-connected layer, a batch normalization layer and a rectified linear unit activation function, to obtain an intermediate feature representation of the online network;
respectively inputting the intermediate feature representation of the online network into a state value function estimator and an advantage function estimator, both estimators being composed of fully-connected layers, to obtain a scalar state value estimate and a vector advantage function estimate;
adding the scalar state value estimate and the vector advantage function estimate and subtracting the mean of the advantage function estimate to obtain a Q-value estimate of the online network, and selecting the action corresponding to the maximum Q value based on the Q-value estimate of the online network to obtain a candidate action;
adding, to the candidate action, Gaussian noise with zero mean and a variance that decays exponentially over the course of training, to obtain an actually executed action; applying the actually executed action to the incinerator simulation environment, updating the furnace temperature, fuel supply rate and air supply quantity parameters, and simulating one time step to obtain the next state and an instant reward;
inputting the next state into the target network, obtaining a Q-value estimate of the target network through neural network layers with the same structure as the online network, calculating a target Q value from the instant reward and the Q-value estimate of the target network using the Bellman equation, and calculating the mean squared error with the Q-value estimate of the online network to obtain a loss function value;
and performing gradient descent optimization on the online network parameters with a learning rate that decays exponentially over the course of training so as to minimize the loss function value, and softly updating the online network parameters to the target network every fixed number of steps to obtain the target operation parameter combination.
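The dueling aggregation and the double-Q target of claim 5 can be sketched numerically; the value/advantage numbers, reward and discount factor below are arbitrary illustrations, and the neural networks that would produce them are omitted.

```python
def q_values(value, advantages):
    # Dueling head: Q(s, a) = V(s) + A(s, a) - mean(A)
    mean_a = sum(advantages) / len(advantages)
    return [value + a - mean_a for a in advantages]

def double_q_target(reward, gamma, q_online_next, q_target_next):
    # Double Q-learning Bellman backup: the online network selects the
    # action, the target network evaluates it.
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

q = q_values(value=1.0, advantages=[0.5, -0.5, 0.0])
y = double_q_target(reward=2.0, gamma=0.9,
                    q_online_next=[1.0, 3.0, 2.0],
                    q_target_next=[0.8, 2.5, 1.9])
```

Decoupling selection from evaluation in this way is what curbs the Q-value over-estimation that a single network suffers from, which is the point of using a dual (online/target) network pair.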
6. An incinerator intelligent diagnosis and maintenance apparatus for performing the incinerator intelligent diagnosis and maintenance method according to any one of claims 1 to 5, said apparatus comprising:
the acquisition module is used for acquiring original data sets of a plurality of target parts of the incinerator through the sensor, and preprocessing the original data sets to obtain preprocessed data sets;
the extraction module is used for inputting the preprocessing data set into a preset residual error network to extract multi-scale characteristics so as to obtain initial multi-dimensional characteristics;
The computing module is used for carrying out local maximum average difference computation on the initial multidimensional feature to realize the feature distribution alignment of the source domain and the target domain and obtain the aligned multidimensional feature;
The learning module is used for inputting the aligned multidimensional features into a dual Q network, and learning an optimal incineration parameter combination to obtain a target operation parameter combination;
The prediction module is used for performing iterative prediction by adopting two-dimensional coupled complex chaotic mapping based on the target operation parameter combination to obtain a dynamic behavior prediction result of the incinerator, specifically comprising: S1, mapping furnace temperature distribution and velocity field parameters in the target operation parameter combination to a complex plane to obtain an initial state complex vector z; S2, applying a two-dimensional coupled complex chaotic mapping function f(z) = a + b·conj(z) + c·z² + d to the initial state complex vector z for iterative calculation, wherein a, b, c, d are preset complex parameters and conj(z) is the complex conjugate of the initial state complex vector z, to obtain a next-moment state complex vector; S3, calculating the Euclidean distance between the next-moment state complex vector and the initial state complex vector to obtain a state variation, and comparing the state variation with a preset threshold to judge whether a convergence condition is met; S4, if the convergence condition is not met, taking the next-moment state complex vector as a new initial state complex vector and repeatedly executing steps S2 to S3 until the convergence condition is met or a maximum number of iterations N is reached, the state complex vector at that time being taken as a target state complex vector; S5, performing inverse mapping on the target state complex vector, mapping the state on the complex plane back to physical space, to obtain a predicted furnace temperature distribution and a predicted velocity field; S6, calculating a system stability index according to the predicted furnace temperature distribution and the predicted velocity field, and generating the dynamic behavior prediction result of the incinerator, wherein the dynamic behavior prediction result comprises a temperature fluctuation trend of each target part, combustion stability assessment and potential risk early warning;
The analysis module is used for performing remaining-service-life prediction and global dynamic adaptive maintenance analysis on the plurality of target parts according to the dynamic behavior prediction result to obtain a target maintenance scheme, specifically comprising: extracting the temperature fluctuation trend, combustion stability assessment and potential risk early warning information of each target part from the dynamic behavior prediction result to obtain a target part state feature matrix; fusing the target part state feature matrix with historical operation data, and constructing time-series samples by a sliding window method to obtain a fused time-series feature sequence; inputting the fused time-series feature sequence into a long short-term memory network, and performing remaining-service-life prediction for each target part to obtain a remaining-service-life prediction value of each target part; constructing a multi-objective optimization problem based on the remaining-service-life prediction values of each target part to obtain an initial optimization model, and solving the initial optimization model through a genetic algorithm to obtain a pareto optimal solution set; selecting a compromise solution from the pareto optimal solution set to obtain a preliminary maintenance scheme; and coordinating and optimizing the preliminary maintenance scheme with a production plan of the incinerator, and adjusting a maintenance time window to obtain the target maintenance scheme, wherein the target maintenance scheme comprises a specific maintenance schedule, required spare parts and detailed operation steps.
7. The intelligent diagnosis and maintenance equipment for the incinerator is characterized by comprising a memory and at least one processor, wherein the memory stores instructions;
The at least one processor invokes the instructions in the memory to cause the incinerator intelligent diagnosis and maintenance apparatus to perform the incinerator intelligent diagnosis and maintenance method according to any one of claims 1 to 5.
8. A computer readable storage medium having instructions stored thereon, wherein the instructions when executed by a processor implement the intelligent diagnostic and maintenance method of an incinerator according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411527357.5A CN119027107B (en) | 2024-10-30 | 2024-10-30 | Intelligent diagnosis and maintenance method, device and equipment for incinerator and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119027107A CN119027107A (en) | 2024-11-26 |
CN119027107B true CN119027107B (en) | 2025-03-07 |
Family
ID=93534267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411527357.5A Active CN119027107B (en) | 2024-10-30 | 2024-10-30 | Intelligent diagnosis and maintenance method, device and equipment for incinerator and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119027107B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN120105267A (en) * | 2025-05-08 | 2025-06-06 | 福建陆源智能科技有限公司 | A smart management system for special equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118395161A (en) * | 2024-04-10 | 2024-07-26 | 江西欧易科技有限公司 | A remote maintenance device and method for rotational molding machine with automatic fault diagnosis and repair functions |
CN118853987A (en) * | 2024-06-14 | 2024-10-29 | 青岛特殊钢铁有限公司 | A method for controlling temperature in converter smelting |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112633339A (en) * | 2020-12-14 | 2021-04-09 | 华中科技大学 | Bearing fault intelligent diagnosis method, bearing fault intelligent diagnosis system, computer equipment and medium |
CN115017980A (en) * | 2022-05-23 | 2022-09-06 | 同济大学 | Prediction method of heavy metal migration in waste incineration process based on random forest algorithm |
CN117807872A (en) * | 2023-12-21 | 2024-04-02 | 大连海洋大学 | Solid waste incineration multi-temperature synchronous prediction method capable of improving prediction precision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116757534B (en) | A reliability analysis method for smart refrigerators based on neural training network | |
CN107092582B (en) | A method for online detection and confidence evaluation of outliers based on residual posterior | |
CN118739948A (en) | Control method and system for motor controller | |
CN119027107B (en) | Intelligent diagnosis and maintenance method, device and equipment for incinerator and storage medium | |
CN114547974A (en) | Dynamic soft measurement modeling method based on input variable selection and LSTM neural network | |
CN118624851A (en) | Water quality monitoring method, device, equipment and storage medium based on multi-dimensional parameters | |
CN116843080B (en) | Machine learning-based carbon element footprint prediction method and system for urea production | |
CN116703455A (en) | Medicine data sales prediction method and system based on time series hybrid model | |
Zhang et al. | Surrogate-assisted evolutionary Q-learning for black-box dynamic time-linkage optimization problems | |
Ge et al. | An improved PF remaining useful life prediction method based on quantum genetics and LSTM | |
CN119511905A (en) | An intelligent signal I/O control system and method for industrial automation | |
CN118228130A (en) | Monitoring method, system and storage medium based on equipment health state | |
CN115389743B (en) | A method, medium and system for predicting interval of dissolved gas content in transformer oil | |
CN119313150A (en) | A 360° self-inspection and self-correction system for corporate environmental compliance based on big data | |
CN119740183A (en) | A health assessment method for primary fans in thermal power plants based on data fusion | |
CN117613890B (en) | Wind power prediction method, device, computer equipment and storage medium | |
CN119005942A (en) | Equipment predictive maintenance method and system based on multidimensional time sequence data | |
CN118520396A (en) | Abnormality detection method based on multi-scale time convolution network and seasonal decomposition | |
Salvador | Automatic and adaptive preprocessing for the development of predictive models. | |
CN119740197B (en) | Fault diagnosis and maintenance method and system for intelligent street lamp | |
Dash et al. | A Robust Framework for Online Fault Detection in Environmental Monitoring Using Pretrained Probabilistic Models | |
CN119494016A (en) | Sensor fault identification method, system and medium based on SAC reinforcement learning | |
CN119474678A (en) | A method and device for predicting salinity time series | |
CN119125878A (en) | Assembly test method, device and equipment for brushless motor | |
Ye et al. | LICORICE: Label-Efficient Concept-Based Interpretable Reinforcement Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||