
CN111401126A - A method and device for monitoring combustion conditions based on flame images and deep learning - Google Patents

A method and device for monitoring combustion conditions based on flame images and deep learning

Info

Publication number
CN111401126A
CN111401126A (application CN202010039651.7A)
Authority
CN
China
Prior art keywords
image
images
training
cooling
flame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010039651.7A
Other languages
Chinese (zh)
Inventor
田宏伟
赵恒斌
杨向东
柳倩
韩哲哲
许传龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHN Energy Jianbi Power Plant
Original Assignee
CHN Energy Jianbi Power Plant
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHN Energy Jianbi Power Plant
Priority to CN202010039651.7A
Publication of CN111401126A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/088: Non-supervised learning, e.g. competitive learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a combustion condition monitoring method and device based on flame images and deep learning. The method comprises the following steps: first, flame images under different combustion conditions are collected with an image monitoring device; the flame images are preprocessed and divided into a training set, a validation set and a test set; a convolutional sparse autoencoder is established and trained in an unsupervised manner on the training set; the trained autoencoder is used to extract deep features from the validation-set images; a Soft-max classifier is established and trained in a supervised manner on the deep features and labels of the validation-set images; the trained Soft-max classifier can accurately classify the deep image features, thereby realizing combustion condition recognition; finally, the performance of the monitoring method is checked on the labeled test set. The monitoring device provided by the invention can operate stably on a high-temperature boiler to acquire flame images, and the monitoring method can accurately identify the combustion condition from a single flame image, giving it good application prospects in the field of combustion condition monitoring.

Description

A Method and Device for Monitoring Combustion Conditions Based on Flame Images and Deep Learning

Technical Field

The invention relates to a method and device for monitoring combustion conditions based on flame images and deep learning, and belongs to the field of combustion detection.

Background Art

New energy power generation is limited by the randomness and intermittency of renewable resources, and large-scale grid connection can seriously affect the stable operation of the power grid. To provide sufficient room for new energy sources to be connected to the grid, thermal power generating units need to be retrofitted for flexibility so as to improve their deep peak-shaving capability. As a result, utility boilers may run for long periods under frequent, rapid load changes or sustained low-load conditions, which can cause difficult ignition of pulverized coal, unstable combustion, and even flame-out of the boiler, making combustion adjustment difficult and reducing combustion efficiency, and thus challenging the safe and stable operation of thermal power generating units. Establishing an accurate and reliable combustion state monitoring system is therefore of great significance for preventing potential hazards and improving the overall performance of boiler operation.

Flame imaging monitoring, unlike traditional flame detection devices, offers interference resistance and low maintenance and is a more effective monitoring approach. It usually involves two key steps: feature extraction and state recognition. Feature extraction uses image processing techniques to obtain key features; state recognition further analyzes the obtained image features to identify the combustion state. Current image processing techniques fall mainly into two categories: 1) frequency-domain and time-domain methods, which infer the trend of the combustion state by analyzing the variance and power spectral density of the changes in image gray values; these suffer from long response times and weak generalization ability; 2) machine learning methods, which extract image features in an unsupervised manner using principal component analysis or partial least squares; however, the resulting features capture only shallow image information and lack robustness, leading to low recognition accuracy.

Deep learning is considered a major breakthrough in artificial intelligence: its deep network structures can mine the essential characteristics of data, making tractable problems that traditional data-driven methods cannot solve. Deep networks not only overcome the poor expressive power of shallow methods but also avoid the tedious process of manual feature selection, and can handle high-dimensional, large-scale image data. The autoencoder is a typical deep learning network that extracts nonlinear image features in an unsupervised manner. During training, however, an autoencoder is prone to vanishing or exploding gradients and may simply copy the input-layer information, failing to capture the truly key information in the image. More intelligent flame image monitoring and recognition methods are therefore still needed to guide combustion adjustment and optimization and to increase industrial utility.

Summary of the Invention

To solve the above technical problems, the purpose of the present invention is to provide a method and device for monitoring combustion conditions based on flame images and deep learning.

The combustion condition monitoring method based on flame images and deep learning of the present invention comprises the following steps:

Step 1. Collect furnace flame images with an image monitoring device and record the combustion conditions. The collected images are preprocessed and divided into a training set, a validation set and a test set, where the training-set images are unlabeled while the validation-set and test-set images are labeled;

Step 2. Train a convolutional sparse autoencoder (CSAE) on the training-set images in an unsupervised manner, combining a sparse penalty term, the mean squared error and the structural similarity into the loss function:

$$L = L_{SPT} + L_{MSE} + L_{SSIM}$$

where $L_{SPT}$ denotes the sparse penalty term, $L_{MSE}$ the mean squared error, and $L_{SSIM}$ the structural similarity;

Step 3. Use the trained CSAE network to extract the deep features of the validation-set images, and train a Soft-max classifier on these features together with the image labels;

Step 4. The trained Soft-max classifier can classify the features of new images, realizing combustion condition recognition from a single flame image; its performance is checked on the test set.

An image monitoring device for the above combustion condition monitoring method based on flame images and deep learning: the device consists of a cooling jacket, an optical sight glass and an industrial camera; the optical sight glass is connected to the industrial camera, and the cooling jacket fits over the outside of the industrial camera and the optical sight glass;

The tip of the cooling jacket adopts a 45° corner design. The jacket is divided into a water layer and an air layer: the water layer is a sealed structure and the air layer an open structure, corresponding to water cooling and air cooling respectively, with the air layer on the inside and the water layer on the outside. A cooling water inlet and a cooling air inlet are provided at the rear end of the jacket, and a cooling air outlet at its tip; the cooling water inlet communicates with the water layer, while the cooling air inlet and the cooling air outlet communicate with the air layer.

The tip of the optical sight glass is fitted with a high-temperature-resistant lens with a 90° viewing angle;

The rear end of the industrial camera is provided with a power interface and a video signal interface.

Further, the feature extraction of the convolutional sparse autoencoder network in Step 2 proceeds as follows:

Step 2.1. The training set X is fed to the convolutional encoder: features are first extracted by k1 convolution filters of window size a1×a1 and stride s1 (C1(k1@a1×a1+s1)), then the neurons are activated with the ReLU function (y(x) = max(0, x)), and finally the feature dimension is reduced by a max-pooling layer of window size b1×b1 and stride f1 (P1(b1×b1+f1)), yielding the feature vector h1;

Step 2.2. The feature vector h1 is sent to encoder No. 2 and encoder No. 3 for further processing, analogous to encoder No. 1, finally yielding the feature vector h3;

Step 2.3. The feature vector h3 is sent to decoder No. 1: the feature dimension is first raised by an upsampling layer of window size g1×g1 (U1(g1×g1)), then processed by a convolution filter (C4(k4@a4×a4+s4)) and the ReLU activation function, yielding the feature vector h4;

Step 2.4. The feature vector h4 is sent to decoder No. 2 and decoder No. 3 for further processing, analogous to decoder No. 1, finally yielding the reconstructed image Z.
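The encoder stage just described (convolution, ReLU activation, max pooling) can be sketched in pure Python. The 4×4 input and the 2×2 averaging filter below are illustrative assumptions; they are not the patent's actual layer sizes or trained weights.

```python
# A minimal sketch of one encoder stage: convolution -> ReLU -> max pooling.
def conv2d_valid(img, kern, stride=1):
    """2-D 'valid' convolution (no padding) of a single channel."""
    H, W, k = len(img), len(img[0]), len(kern)
    out = []
    for i in range(0, H - k + 1, stride):
        row = []
        for j in range(0, W - k + 1, stride):
            row.append(sum(img[i + u][j + v] * kern[u][v]
                           for u in range(k) for v in range(k)))
        out.append(row)
    return out

def relu(fm):
    # Neuron activation y(x) = max(0, x), applied element-wise.
    return [[max(0.0, x) for x in row] for row in fm]

def maxpool(fm, size=2, stride=2):
    # Keep the strongest response in each window, reducing the feature size.
    H, W = len(fm), len(fm[0])
    return [[max(fm[i + u][j + v] for u in range(size) for v in range(size))
             for j in range(0, W - size + 1, stride)]
            for i in range(0, H - size + 1, stride)]

img = [[0.1, 0.2, 0.3, 0.4],
       [0.5, 0.6, 0.7, 0.8],
       [0.9, 1.0, 0.9, 0.8],
       [0.7, 0.6, 0.5, 0.4]]
kern = [[0.5, 0.5], [0.5, 0.5]]   # a 2x2 averaging filter (assumed)
h1 = maxpool(relu(conv2d_valid(img, kern)))
print(h1)
```

With 'valid' convolution the 4×4 input shrinks to 3×3, and 2×2 max pooling with stride 2 then keeps only the strongest response in each window, mirroring the feature-dimension reduction described in step 2.1.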

Further, the sparse penalty term $L_{SPT}$ in Step 2 is expressed as:

$$L_{SPT} = \beta \sum_{j=1}^{F} \mathrm{KL}(p \parallel p_j)$$

where β denotes the sparsity weight, F the number of hidden-layer neurons, and p the sparsity target constant; the average activation $p_j$ of a neuron over the sample axis i is expressed as:

$$p_j = \frac{1}{E} \sum_{i=1}^{E} s_{ij}$$

where E denotes the number of training-set images and $s_{ij}$ ($i \in (1, E)$, $j \in (1, F)$) the activation of the neuron at position j for sample i; the KL divergence is expressed as:

$$\mathrm{KL}(p \parallel p_j) = p \log\frac{p}{p_j} + (1 - p) \log\frac{1 - p}{1 - p_j}$$
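A minimal pure-Python sketch of the sparse penalty, assuming the standard KL form above; the activation matrix S, the target p = 0.05 and the weight β = 3.0 are made-up example values, not parameters given in the text.

```python
import math

def kl(p, pj):
    # KL divergence between target sparsity p and measured activation pj.
    return p * math.log(p / pj) + (1 - p) * math.log((1 - p) / (1 - pj))

def sparse_penalty(S, p=0.05, beta=3.0):
    """S[i][j] = activation s_ij of hidden neuron j on training sample i."""
    E, F = len(S), len(S[0])
    # Mean activation p_j of each neuron over the E samples.
    p_bar = [sum(S[i][j] for i in range(E)) / E for j in range(F)]
    return beta * sum(kl(p, pj) for pj in p_bar)

S = [[0.10, 0.80],
     [0.20, 0.60],
     [0.00, 0.70]]   # E = 3 samples, F = 2 hidden neurons (assumed values)
print(round(sparse_penalty(S), 4))
```

Neuron 2, whose mean activation (0.7) is far from the target 0.05, dominates the penalty, which is exactly the pressure toward sparse activations that the term is meant to apply.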

Further, the loss function $L_{MSE}$ in Step 2 is expressed as:

$$L_{MSE} = \frac{1}{A \times T} \sum_{i=1}^{A} \sum_{j=1}^{T} (X_{ij} - Z_{ij})^2$$

where $X_{ij}$ and $Z_{ij}$ denote the gray values at position (i, j) of the input image and the reconstructed image of size A×T, respectively.

Further, the loss function $L_{SSIM}$ in Step 2 is expressed as:

$$L_{SSIM} = 1 - \frac{(2\mu_X \mu_Z + c_1)(2\sigma_{XZ} + c_2)}{(\mu_X^2 + \mu_Z^2 + c_1)(\sigma_X^2 + \sigma_Z^2 + c_2)}$$

where $c_1 = (k_1 r)^2$ and $c_2 = (k_2 r)^2$, with $k_1$ and $k_2$ constants less than 1 and r the dynamic range of the image gray values; $\mu_X$ and $\mu_Z$ denote the means of the input and reconstructed images; $\sigma_X$ and $\sigma_Z$ their variances; and $\sigma_{XZ}$ their covariance.
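The two reconstruction terms can be illustrated with a toy input X and reconstruction Z (values assumed). The SSIM loss is taken as 1 minus the structural similarity index, and the constants k1 = 0.01 and k2 = 0.03 follow common SSIM practice rather than values specified here.

```python
def mse(X, Z):
    # Mean squared error between two A x T gray-value images.
    A, T = len(X), len(X[0])
    return sum((X[i][j] - Z[i][j]) ** 2
               for i in range(A) for j in range(T)) / (A * T)

def ssim_loss(X, Z, r=1.0, k1=0.01, k2=0.03):
    # 1 - SSIM, so that minimizing the loss maximizes structural similarity.
    n = len(X) * len(X[0])
    xs = [v for row in X for v in row]
    zs = [v for row in Z for v in row]
    mx, mz = sum(xs) / n, sum(zs) / n
    vx = sum((v - mx) ** 2 for v in xs) / n          # variance of X
    vz = sum((v - mz) ** 2 for v in zs) / n          # variance of Z
    cov = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / n
    c1, c2 = (k1 * r) ** 2, (k2 * r) ** 2
    ssim = ((2 * mx * mz + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + mz ** 2 + c1) * (vx + vz + c2))
    return 1.0 - ssim

X = [[0.2, 0.4], [0.6, 0.8]]        # toy normalized input (assumed)
Z = [[0.25, 0.35], [0.65, 0.75]]    # toy reconstruction (assumed)
print(mse(X, Z), ssim_loss(X, Z))
```

A perfect reconstruction (Z identical to X) drives both terms to zero, so the combined loss rewards both pixel-wise and structural fidelity.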

Further, the Soft-max classifier in Step 3 is defined as:

$$y_i = \frac{e^{x_i}}{\sum_{j=1}^{k} e^{x_j}}$$

where $y_i$ denotes the output for the i-th sample $x_i$, and k the number of training-sample classes.
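A minimal Soft-max sketch: class scores are mapped to a probability distribution, and the most probable class gives the recognized condition. The three scores below are hypothetical and merely stand in for deep-feature outputs over the three combustion conditions.

```python
import math

def softmax(x):
    m = max(x)                      # subtract the max for numerical stability
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

scores = [2.0, 0.5, 0.1]            # hypothetical scores for k = 3 classes
probs = softmax(scores)
print(probs, probs.index(max(probs)))
```

The outputs sum to 1 by construction, so the classifier's decision is simply the index of the largest probability.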

By virtue of the above scheme, the present invention has at least the following advantages:

Compared with existing combustion condition monitoring technology, the combustion condition monitoring method and device based on flame images and deep learning of the present invention have the following advantages:

1. The image monitoring device provided by the invention has a highly efficient cooling effect, guaranteeing the stable operation of the image acquisition system;

2. The monitoring method provided by the invention can accurately extract the essential features of flame images without expert prior knowledge; the new autoencoder loss function effectively overcomes the drawbacks of difficult training and poor generalization; and the adopted Soft-max classifier is suited to image feature classification, enabling accurate identification of combustion conditions.

The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented according to the content of the specification, preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.

Brief Description of the Drawings

In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of the present invention and should therefore not be regarded as limiting its scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.

Figure 1 is a schematic flow chart of the principle of the combustion condition monitoring method and device based on flame images and deep learning of the present invention.

Figure 2 is a schematic diagram of the image monitoring device in the method and device for monitoring combustion conditions based on flame images and deep learning.

Figure 3 is a schematic diagram of the convolutional sparse autoencoder (CSAE) network structure in the method and device for monitoring combustion conditions based on flame images and deep learning of the present invention.

Figure 4 is a schematic diagram of flame images under three combustion conditions according to an embodiment of the present invention.

Figure 5 shows the data set structure of an embodiment of the present invention.

Figure 6 is a schematic diagram of the training loss of the convolutional sparse autoencoder (CSAE) network according to an embodiment of the present invention.

In Figure 2: 1, cooling jacket; 2, optical sight glass; 3, industrial camera; 4, water layer; 5, air layer; 6, cooling water inlet; 7, cooling air inlet; 8, cooling air outlet; 9, power interface; 10, video signal interface.

Detailed Description of the Embodiments

Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are intended to illustrate the present invention but not to limit its scope.

Referring to Figures 1 to 6, a preferred embodiment of the present invention is described in detail below with reference to the accompanying drawings and tables:

A combustion condition monitoring device and method based on flame image recognition, whose principle flow chart is shown in Figure 1, specifically comprises the following steps:

Step 1. Collect furnace flame images with the image monitoring device (shown in Figure 2) and record the combustion conditions. The collected images are preprocessed and divided into a training set, a validation set and a test set, where the training-set images are unlabeled while the validation-set and test-set images are labeled;

Step 2. Train the convolutional sparse autoencoder (CSAE), whose network structure is shown in Figure 3, on the training-set images in an unsupervised manner, combining the sparse penalty term (SPT), the mean squared error (MSE) and the structural similarity (SSIM) into the loss function:

$$L = L_{SPT} + L_{MSE} + L_{SSIM} \qquad (1)$$

where $L_{SPT}$ denotes the sparse penalty term, $L_{MSE}$ the mean squared error, and $L_{SSIM}$ the structural similarity;

Step 3. Use the trained CSAE network to extract the deep features of the validation-set images, and train a Soft-max classifier on these features together with the image labels;

Step 4. The trained Soft-max classifier can classify the features of new images, realizing combustion condition recognition from a single flame image; its performance is checked on the test set.

The image monitoring device in Step 1:

It consists of a cooling jacket, an optical sight glass and an industrial camera. Unlike traditional structures, the cooling jacket adopts a unique 45° corner design, which effectively overcomes the limitations of the installation position, and offers both air cooling and water cooling to ensure stable operation of the image acquisition system. The optical sight glass is equipped with a high-temperature-resistant lens with a 90° viewing angle, fully covering the main combustion reaction zone.

The feature extraction of the convolutional sparse autoencoder (CSAE) network in Step 2 proceeds as follows:

Step 2.1. The training set X is fed to the convolutional encoder: features are first extracted by k1 convolution filters of window size a1×a1 and stride s1 (C1(k1@a1×a1+s1)), then the neurons are activated with the ReLU function (y(x) = max(0, x)), and finally the feature dimension is reduced by a max-pooling layer of window size b1×b1 and stride f1 (P1(b1×b1+f1)), yielding the feature vector h1;

Step 2.2. The feature vector h1 is sent to encoder No. 2 and encoder No. 3 for further processing, analogous to encoder No. 1, finally yielding the feature vector h3;

Step 2.3. The feature vector h3 is sent to decoder No. 1: the feature dimension is first raised by an upsampling layer of window size g1×g1 (U1(g1×g1)), then processed by a convolution filter (C4(k4@a4×a4+s4)) and the ReLU activation function, yielding the feature vector h4;

Step 2.4. The feature vector h4 is sent to decoder No. 2 and decoder No. 3 for further processing, analogous to decoder No. 1, finally yielding the reconstructed image Z. Notably, the activation function of decoder No. 3 is the Sigmoid function (y(x) = 1/(1+e^(-x))) to ensure that the value range of the output image matches that of the input image.
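A quick check, with assumed inputs, of why the Sigmoid is used at the last decoder: it maps any real activation into (0, 1), the same range as the normalized input images.

```python
import math

def sigmoid(x):
    # Logistic function y(x) = 1 / (1 + e^(-x)); output always lies in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

outputs = [sigmoid(x) for x in (-4.0, 0.0, 4.0)]
print(outputs)
```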

The sparse penalty term $L_{SPT}$ in Step 2 is expressed as:

$$L_{SPT} = \beta \sum_{j=1}^{F} \mathrm{KL}(p \parallel p_j)$$

where β denotes the sparsity weight, F the number of hidden-layer neurons, and p the sparsity target constant; the average activation $p_j$ of a neuron over the sample axis i is expressed as:

$$p_j = \frac{1}{E} \sum_{i=1}^{E} s_{ij}$$

where E denotes the number of training-set images and $s_{ij}$ ($i \in (1, E)$, $j \in (1, F)$) the activation of the neuron at position j for sample i; the KL divergence is expressed as:

$$\mathrm{KL}(p \parallel p_j) = p \log\frac{p}{p_j} + (1 - p) \log\frac{1 - p}{1 - p_j}$$

The loss function $L_{MSE}$ in Step 2 is expressed as:

$$L_{MSE} = \frac{1}{A \times T} \sum_{i=1}^{A} \sum_{j=1}^{T} (X_{ij} - Z_{ij})^2$$

where $X_{ij}$ and $Z_{ij}$ denote the gray values at position (i, j) of the input image and the reconstructed image of size A×T, respectively.

The loss function $L_{SSIM}$ in Step 2 is expressed as:

$$L_{SSIM} = 1 - \frac{(2\mu_X \mu_Z + c_1)(2\sigma_{XZ} + c_2)}{(\mu_X^2 + \mu_Z^2 + c_1)(\sigma_X^2 + \sigma_Z^2 + c_2)}$$

where $c_1 = (k_1 r)^2$ and $c_2 = (k_2 r)^2$, with $k_1$ and $k_2$ constants less than 1 and r the dynamic range of the image gray values; $\mu_X$ and $\mu_Z$ denote the means of the input and reconstructed images; $\sigma_X$ and $\sigma_Z$ their variances; and $\sigma_{XZ}$ their covariance.

The Soft-max classifier in Step 3 is defined as:

$$y_i = \frac{e^{x_i}}{\sum_{j=1}^{k} e^{x_j}}$$

where $y_i$ denotes the output for the i-th sample $x_i$, and k the number of training-sample classes.

Example 1

The combustion condition monitoring device and method based on flame image recognition of the present invention comprise the following steps:

Step 1) Collect flame images under three combustion conditions with the image monitoring device, as shown in Figure 4; the image resolution is 384×260×3. Figure 5 shows the data set structure of this embodiment. The data set contains 1500 images (500 per condition); after image preprocessing (the image size is compressed to 256×256×3 and the value range normalized to 0-1), it is divided into a training set, a validation set and a test set. The training set contains 1080 (360×3) unlabeled images for unsupervised training of the CSAE network; the validation set contains 120 (40×3) labeled images for supervised training of the Soft-max classifier; the test set contains 300 (100×3) labeled images for performance testing of the CSAE-Soft-max model.
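The preprocessing of Step 1 (resize, then scale pixel values into 0-1) can be sketched as follows. The tiny 2×4 single-channel array stands in for a real 384×260×3 frame, and nearest-neighbour resizing is an assumption, since the interpolation method is not named.

```python
def resize_nearest(img, out_h, out_w):
    # Nearest-neighbour resize of a single-channel image (an assumed method).
    in_h, in_w = len(img), len(img[0])
    return [[img[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)] for i in range(out_h)]

def normalize(img):
    # Scale 8-bit gray values from 0-255 into the 0-1 range.
    return [[v / 255.0 for v in row] for row in img]

raw = [[0, 64, 128, 255],
       [32, 96, 160, 224]]          # stand-in for a 384x260 flame frame
pre = normalize(resize_nearest(raw, 2, 2))
print(pre)
```

In the embodiment the same two operations take each 384×260×3 frame to a 256×256×3 array with values in 0-1, the form expected by the CSAE input layer.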

Step 2) The structure of the convolutional sparse autoencoder network is shown in Figure 3, and the network parameters are summarized in Table 1. The sparse penalty term, mean squared error and structural similarity are combined as the loss function of the CSAE network; the training loss over the training epochs is shown in Figure 6. The results show that the loss value had converged by the 80th epoch, so the number of training epochs was set to 80;

| Encoder 1 | Encoder 2 | Encoder 3 | Decoder 1 | Decoder 2 | Decoder 3 |
|---|---|---|---|---|---|
| C1(8@3×3+1) | C2(4@3×3+1) | C3(1@3×3+1) | U1(32×32) | U2(64×64) | U3(128×128) |
| ReLU | ReLU | ReLU | C4(4@3×3+1) | C5(4@3×3+1) | C6(3@3×3+1) |
| P1(2×2+2) | P2(2×2+2) | P3(2×2+2) | ReLU | ReLU | Sigmoid |

Table 1

Step 3) Use the trained CSAE network to extract the deep features h3 of the validation-set images (feature dimension 64) and, together with the corresponding image labels, train the Soft-max classifier in a supervised manner;

Step 4) The trained Soft-max classifier can classify the features of new images, realizing combustion condition recognition from a single flame image. Its performance was checked on the test set, reaching a recognition accuracy of 99.3%.
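The reported 99.3 % is plain classification accuracy on the 300-image test set. The prediction list below is fabricated purely to illustrate the metric: 298 correct out of 300 yields approximately the reported figure.

```python
def accuracy(pred, true):
    # Fraction of predictions that match the ground-truth labels.
    assert len(pred) == len(true)
    return sum(p == t for p, t in zip(pred, true)) / len(true)

true = [0] * 100 + [1] * 100 + [2] * 100   # three conditions, 100 test images each
pred = list(true)
pred[10], pred[150] = 1, 2                 # two hypothetical misclassifications
print(round(100 * accuracy(pred, true), 1))
```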

The experimental results show that the intelligent monitoring model based on the convolutional sparse autoencoder network established by the present invention can accurately identify the combustion condition of a single flame image.

The above are only preferred embodiments of the present invention and are not intended to limit it. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications without departing from the technical principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (7)

1.一种基于火焰图像和深度学习的燃烧工况监测方法,其特征在于:包括以下步骤:1. a combustion condition monitoring method based on flame image and deep learning, is characterized in that: comprise the following steps: 步骤1、利用图像监测装置采集炉膛火焰图像,并记录燃烧工况,所采集的图像经预处理后分为训练集、验证集和测试集,其中,训练集中的图像不带标签,而验证集和测试集中的图像带标签;Step 1. Use an image monitoring device to collect images of furnace flames and record the combustion conditions. The collected images are preprocessed and divided into training sets, validation sets and test sets. The images in the training set do not have labels, while the validation set and the images in the test set are labeled; 步骤2、利用训练集图像无监督训练卷积稀疏自编码;将稀疏惩罚项、均方误差与结构相似度相结合作为损失函数,表示为:Step 2. Unsupervised training of convolutional sparse auto-encoding with training set images; the combination of sparse penalty term, mean square error and structural similarity as a loss function, expressed as: L=LSPT+LMSE+LSSIM L=L SPT +L MSE +L SSIM 式中,LSPT表示稀疏惩罚项,LMSE表示均方误差,LSSIM表示结构相似度;In the formula, L SPT represents the sparse penalty term, L MSE represents the mean square error, and L SSIM represents the structural similarity; 步骤3、利用完成训练的CSAE网络提取验证集图像的深层特征,并结合图像标签训练Soft-max分类器;Step 3. Use the CSAE network that has completed the training to extract the deep features of the validation set images, and train the Soft-max classifier in combination with the image labels; 步骤4、已训练的Soft-max分类器能够对新图像特征进行分类,实现单张火焰图像的燃烧工况识别,并利用测试集对其性能进行检验。Step 4. The trained Soft-max classifier can classify the new image features, realize the combustion condition recognition of a single flame image, and use the test set to test its performance. 2.一种用于权利要求1所述的基于火焰图像和深度学习的燃烧工况监测方法的图像监测装置,其特征在于:所述图像监测装置由冷却套管(1)、光学视镜(2)和工业相机(3)组成,所述光学视镜(2)和工业相机(3)相连,冷却套管(1)套接在工业相机(3)和光学视镜(2)外侧;2. 
An image monitoring device for the combustion condition monitoring method based on flame images and deep learning according to claim 1, characterized in that the image monitoring device is composed of a cooling jacket (1), an optical sight glass (2) and an industrial camera (3); the optical sight glass (2) is connected to the industrial camera (3), and the cooling jacket (1) is sleeved over the outside of the industrial camera (3) and the optical sight glass (2).

The top of the cooling jacket (1) adopts a 45° corner design. The cooling jacket (1) is divided into a water interlayer (4) and an air interlayer (5); the water interlayer (4) is a sealed structure and the air interlayer (5) an open structure, providing water cooling and air cooling respectively, with the air-cooled interlayer (5) on the inside and the water-cooled interlayer (4) on the outside. A cooling water inlet (6) and a cooling air inlet (7) are provided at the rear end of the cooling jacket (1), and a cooling air outlet (8) at its top; the cooling water inlet (6) communicates with the water interlayer (4), while the cooling air inlet (7) and the cooling air outlet (8) communicate with the air interlayer (5).

The top of the optical sight glass (2) is equipped with a high-temperature-resistant lens with a 90° viewing angle.

The rear end of the industrial camera (3) is provided with a power interface (9) and a video signal interface (10).

3.
The combustion condition monitoring method based on flame images and deep learning according to claim 1, characterized in that the feature extraction steps of the convolutional sparse autoencoder network in step 2 are:

Step 2.1. The training set X is fed to the convolutional encoder: features are first extracted by k1 convolution filters with window size a1×a1 and stride s1 (C1(k1@a1×a1+s1)), the neurons are then activated by the ReLU function (y(x)=max(0,x)), and the feature dimension is finally reduced by a max-pooling layer with window size b1×b1 and stride f1 (P1(b1×b1+f1)), yielding the feature vector h_1.

Step 2.2. The feature vector h_1 is sent to encoder No. 2 and encoder No. 3 for further processing, analogous to encoder No. 1, finally yielding the feature vector h_3.

Step 2.3. The feature vector h_3 is sent to decoder No. 1: the feature dimension is first raised by an upsampling layer with window size g1×g1 (U1(g1×g1)), after which a convolution filter (C4(k4@a4×a4+s4)) and the ReLU activation function are applied, yielding the feature vector h_4.

Step 2.4. The feature vector h_4 is sent to decoder No. 2 and decoder No. 3 for further processing, analogous to decoder No. 1, finally yielding the reconstructed image z.

4.
The combustion condition monitoring method based on flame images and deep learning according to claim 1, characterized in that the sparse penalty term L_SPT in step 2 is expressed as:
L_SPT = β · Σ_{j=1}^{F} KL(p ‖ p_j)
where β denotes the sparsity rate, F the number of hidden-layer neurons, and p the sparse target constant; the average neuron activation p_j, taken along the i-axis, is expressed as:
p_j = (1/E) · Σ_{i=1}^{E} s_ij
where E denotes the number of training-set images and s_ij (i ∈ (1, E), j ∈ (1, F)) the activation of the neuron at position j; the KL divergence is expressed as:
KL(p ‖ p_j) = p · log(p/p_j) + (1 − p) · log((1 − p)/(1 − p_j))
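As an illustration, the sparse penalty of claim 4 can be sketched in NumPy as follows; the default values for the target constant p, the sparsity rate β and the activation matrix used below are hypothetical, not taken from the patent:

```python
import numpy as np

def sparse_penalty(s, p=0.05, beta=3.0):
    """Sparse penalty term L_SPT of claim 4 (illustrative sketch).

    s    : (E, F) matrix of hidden-layer activations s_ij
           (one row per training image, one column per neuron).
    p    : sparse target constant (hypothetical default).
    beta : sparsity rate (hypothetical default).
    """
    s = np.asarray(s, dtype=float)
    p_j = s.mean(axis=0)                   # average activation along the i-axis
    p_j = np.clip(p_j, 1e-7, 1.0 - 1e-7)   # keep the logarithms finite
    kl = p * np.log(p / p_j) + (1.0 - p) * np.log((1.0 - p) / (1.0 - p_j))
    return beta * kl.sum()                 # L_SPT = beta * sum_j KL(p || p_j)
```

When every neuron's mean activation equals the target p the penalty vanishes, and it grows as the average activations drift away from the target, which is what pushes the hidden code toward sparsity.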
5. The combustion condition monitoring method based on flame images and deep learning according to claim 1, characterized in that the loss function L_MSE in step 2 is expressed as:
L_MSE = (1/(A·T)) · Σ_{i=1}^{A} Σ_{j=1}^{T} (X_ij − z_ij)²
where X_ij and z_ij denote the gray levels at position (i, j) of the input image and of the reconstructed image, respectively, both of size A×T.
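A minimal NumPy sketch of the L_MSE term of claim 5 (the image contents in the usage below are illustrative):

```python
import numpy as np

def mse_loss(x, z):
    """L_MSE of claim 5: mean squared gray-level difference between the
    A x T input image x and its reconstruction z."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    return np.mean((x - z) ** 2)   # (1/(A*T)) * sum_ij (X_ij - z_ij)^2
```

For example, `mse_loss([[0, 2], [2, 0]], [[0, 0], [0, 0]])` averages the squared differences 0, 4, 4, 0 over the four pixels.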
6. The combustion condition monitoring method based on flame images and deep learning according to claim 1, characterized in that the loss function L_SSIM in step 2 is expressed as:
L_SSIM = 1 − [(2·μ_x·μ_z + c_1)(2·σ_xz + c_2)] / [(μ_x² + μ_z² + c_1)(σ_x² + σ_z² + c_2)]
where c_1 = (k_1·r)² and c_2 = (k_2·r)², with k_1 and k_2 constants smaller than 1 and r the dynamic range of the image gray levels; μ_x and μ_z denote the means of the input image and of the reconstructed image, σ_x² and σ_z² their variances, and σ_xz their covariance.
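Assuming the common convention that the structural-similarity loss is taken as 1 − SSIM(X, z) (the claim itself gives only the SSIM components), the L_SSIM term of claim 6 and the combined loss L = L_SPT + L_MSE + L_SSIM of claim 1 can be sketched as follows; the defaults for k1, k2 and the dynamic range r are assumptions, not values from the patent:

```python
import numpy as np

def ssim_loss(x, z, k1=0.01, k2=0.03, r=255.0):
    """L_SSIM of claim 6, taken here as 1 - SSIM(X, z) (assumed convention)."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    c1, c2 = (k1 * r) ** 2, (k2 * r) ** 2
    mu_x, mu_z = x.mean(), z.mean()
    var_x, var_z = x.var(), z.var()
    cov_xz = ((x - mu_x) * (z - mu_z)).mean()
    ssim = ((2 * mu_x * mu_z + c1) * (2 * cov_xz + c2)) / \
           ((mu_x ** 2 + mu_z ** 2 + c1) * (var_x + var_z + c2))
    return 1.0 - ssim

def total_loss(x, z, l_spt):
    """Combined loss of claims 1-2: L = L_SPT + L_MSE + L_SSIM.
    The sparse term l_spt is computed separately from the hidden activations."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    return l_spt + np.mean((x - z) ** 2) + ssim_loss(x, z)
```

A perfect reconstruction (z = x) makes both the MSE and the SSIM terms vanish, so with a zero sparse term the combined loss is zero; any structural or pixel-wise deviation raises it.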
7. The combustion condition monitoring method based on flame images and deep learning according to claim 1, characterized in that the Soft-max classifier in step 3 is defined as:
y_i = exp(x_i) / Σ_{j=1}^{k} exp(x_j)
where y_i denotes the output for the i-th sample x_i, and k the number of training-sample categories.
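A sketch of the Soft-max classifier of claim 7; the weight matrix W, bias b and condition labels in the helper below are hypothetical stand-ins for the classifier parameters trained in step 3:

```python
import numpy as np

def softmax(x):
    """Soft-max of claim 7: y_i = exp(x_i) / sum_{j=1..k} exp(x_j)."""
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())   # shift by the max for numerical stability
    return e / e.sum()

def predict_condition(features, W, b, labels):
    """Classify one CSAE feature vector into a combustion condition.
    W (k x d), b (k,) and the label names are hypothetical placeholders."""
    features = np.asarray(features, dtype=float)
    W = np.asarray(W, dtype=float)
    b = np.asarray(b, dtype=float)
    probs = softmax(W @ features + b)
    return labels[int(np.argmax(probs))]
```

The outputs form a probability distribution over the k condition classes, and the predicted combustion condition is the class with the highest probability.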
CN202010039651.7A 2020-01-15 2020-01-15 A method and device for monitoring combustion conditions based on flame images and deep learning Pending CN111401126A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010039651.7A CN111401126A (en) 2020-01-15 2020-01-15 A method and device for monitoring combustion conditions based on flame images and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010039651.7A CN111401126A (en) 2020-01-15 2020-01-15 A method and device for monitoring combustion conditions based on flame images and deep learning

Publications (1)

Publication Number Publication Date
CN111401126A true CN111401126A (en) 2020-07-10

Family

ID=71430281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010039651.7A Pending CN111401126A (en) 2020-01-15 2020-01-15 A method and device for monitoring combustion conditions based on flame images and deep learning

Country Status (1)

Country Link
CN (1) CN111401126A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163278A (en) * 2019-05-16 2019-08-23 东南大学 A kind of flame holding monitoring method based on image recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOJING BAI et al.: "Multi-mode combustion process monitoring on a pulverised fuel combustion test facility based on flame imaging and random weight network techniques", FUEL, vol. 202, 15 August 2017 (2017-08-15), pages 656 - 664, XP085028879, DOI: 10.1016/j.fuel.2017.03.091 *
FENG Jialiang; ZHU Dingju; LIAO Lihua: "Forest smoke and fire monitoring based on a multi-scale dilated convolutional autoencoder neural network", Computer & Digital Engineering, no. 12, 20 December 2019 (2019-12-20), pages 3142 - 3148 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434740A (en) * 2020-11-26 2021-03-02 西北大学 Depth learning-based Qin tomb warriors fragment classification method
CN113669164A (en) * 2021-09-06 2021-11-19 南京林业大学 Pulse engine combustion stability control system based on artificial intelligence
CN113963291A (en) * 2021-09-29 2022-01-21 湖南大学 Coal-fired working condition identification method based on video image quality and model modeling method thereof
CN113963291B (en) * 2021-09-29 2024-07-02 湖南大学 Coal-fired working condition identification method based on video image quality and model modeling method thereof
CN114052762A (en) * 2021-11-30 2022-02-18 燕山大学 Method for predicting size of narrow blood vessel and size of instrument based on Swin-T
CN114372941A (en) * 2021-12-16 2022-04-19 佳源科技股份有限公司 Low-illumination image enhancement method, device, equipment and medium
CN114372941B (en) * 2021-12-16 2024-04-26 佳源科技股份有限公司 Low-light image enhancement method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN111401126A (en) A method and device for monitoring combustion conditions based on flame images and deep learning
WO2022037233A1 (en) Small sample visual target identification method based on self-supervised knowledge transfer
Huang et al. A projective and discriminative dictionary learning for high-dimensional process monitoring with industrial applications
Jin et al. A multi-scale convolutional neural network for bearing compound fault diagnosis under various noise conditions
CN110163278B (en) Flame stability monitoring method based on image recognition
CN100573100C (en) Method for Discriminating Gas-liquid Two Phase Flow based on digital image processing techniques
CN110826642B (en) Unsupervised anomaly detection method for sensor data
CN110765587A (en) Complex petrochemical process fault diagnosis method based on dynamic regularization judgment local retention projection
CN114266297A (en) Semantic knowledge base, construction method and zero-sample fault diagnosis method for thermal power equipment
Lei et al. Deeplab-YOLO: a method for detecting hot-spot defects in infrared image PV panels by combining segmentation and detection
Wang et al. Support-sample-assisted domain generalization via attacks and defenses: Concepts, algorithms, and applications to pipeline fault diagnosis
CN115165366A (en) A method and system for diagnosing faults in variable working conditions of rotating machinery
CN117095191A (en) Image clustering method based on projection distance regularized low-rank representation
CN114298219A (en) A fault diagnosis method for wind turbines based on deep spatiotemporal feature extraction
Han et al. A hybrid deep neural network based prediction of 300 MW coal-fired boiler combustion operation condition
CN112364195B (en) A Zero-Shot Image Retrieval Method Based on Attribute-Guided Adversarial Hash Networks
CN117994655A (en) Bridge disease detection system and method based on improved Yolov s model
Sun et al. Semi-supervised facial expression recognition by exploring false pseudo-labels
CN111860591A (en) Cervical cell image classification method based on interval adaptive feature selection fusion
CN211479145U (en) Combustion condition image monitoring device based on flame image and deep learning
Zhang et al. A new network intrusion detection based on semi-supervised dimensionality reduction and tri-LightGBM
CN115314287A (en) Confrontation anomaly detection system based on deep clustering
CN118260700B (en) Method and terminal for early warning of abnormality of DC/DC equipment
CN117668753A (en) System and method based on industrial park equipment performance analysis
CN118626898A (en) A cloud platform fault root cause analysis method based on abnormal correlation graph and graph neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination