CN113707176B - A Transformer Fault Detection Method Based on Acoustic Signal and Deep Learning Technology - Google Patents
- Publication number
- CN113707176B (Application CN202111026413.3A)
- Authority
- CN
- China
- Prior art keywords
- transformer
- fault detection
- data
- sound
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
- G10L2015/0631—Creating reference templates; Clustering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
- Y04S10/52—Outage or fault management, e.g. fault detection or location
Abstract
Description
Technical Field
The invention relates to the technical field of transformer fault detection, and in particular to a transformer fault detection method based on acoustic signals and deep learning technology.
Background Art
Transformers in the power grid are used in large numbers, in many types and specifications, and over long service periods, which leads to a high frequency of transformer failures in the grid system. According to statistics, roughly 5 out of every 200 transformers that have been in operation for more than 4 years fail. Transformer troubleshooting and maintenance have therefore become an important part of keeping the grid stable.
For transformer troubleshooting, the traditional approach requires personnel to travel to the site and, relying on experience, diagnose whether the transformer is faulty from the sound it emits. This method not only consumes a great deal of time and effort, but is also subject to human error, which can lead to misdiagnosis.
In recent years, deep learning has developed rapidly, and it offers a highly efficient way of processing signals. If deep learning were used to process transformer sounds, it could automatically analyze and classify the acoustic features the transformer emits, allowing faulty transformers to be diagnosed quickly, greatly reducing labor costs, and helping to maintain grid stability.
Although some voiceprint detection methods exist in the prior art, most use Gaussian mixture models and hidden Markov models as acoustic models. Since the rise of deep learning, more and more deep learning networks, including convolutional neural networks, autoencoders, and recurrent neural networks (RNNs), have been applied to sound detection. In terms of accuracy and precision, deep-learning-based voiceprint detection far outperforms traditional acoustic models; however, existing deep-learning-based methods mostly use vibration data, and audio data has not yet been used for fault detection.
Therefore, how to better apply deep learning technology to transformer voiceprint fault detection has become an urgent technical problem.
Summary of the Invention
The purpose of the present invention is to overcome the defect in the prior art that voiceprint signals are difficult to use for accurate transformer fault detection, and to provide a transformer fault detection method based on acoustic signals and deep learning technology that solves the above problem.
In order to achieve the above object, the technical scheme of the present invention is as follows:
A transformer fault detection method based on acoustic signals and deep learning technology, comprising the following steps:
11) Acquisition of power transformer sound data: transformer sound data are collected in the field by voiceprint acquisition sensors, labeled into two categories, "normal" and "fault", and defined as the training sample set;
12) Preprocessing of the acoustic signals in the training sample set: the collected power transformer sound data are denoised using the preprocessing methods of segmentation, framing, windowing, and adaptive filtering; the acoustic signals are then augmented by cutting, noise addition, and pitch adjustment;
13) Sound feature extraction from the acoustic signal data: Mel cepstral analysis is applied to the preprocessed power transformer sound data to extract the sound features, yielding the MFCC coefficients;
14) Construction of the transformer fault detection model: a transformer fault detection model is built on the basis of the double-gated convolutional network model and the characteristics of transformer acoustic signals;
15) Training of the transformer fault detection model: the extracted MFCC coefficients are input into the transformer fault detection model for training;
16) Acquisition and preprocessing of the acoustic signal data of the transformer under test: the acoustic signal data of the transformer under test are acquired and denoised, and MFCC coefficients are extracted from the denoised acoustic signal data;
17) Obtaining the fault detection result for the transformer under test: the MFCC coefficients are input into the trained transformer fault detection model to obtain the transformer fault detection result.
The preprocessing of the acoustic signals in the training sample set comprises the following steps:
21) Segmentation of the collected power transformer sound data s(t):
The acquired power transformer sound data are divided into segments
s(t) = {s_1(t), s_2(t), ..., s_q(t), ..., s_r(t)},
and the total length L of the voiceprint data is calculated as
L = time × f_sample = r × r_L,
where f_sample is the sampling frequency of the sound, time is the sampling duration, r is the number of segments, and r_L is the segment length;
22) Framing of the segmented transformer sound data s_q(t):
With the transformer voiceprint frame length set to 500 ms, each segment is divided into frames
s_q(t) = {s_q1(t), s_q2(t), ..., s_qp(t), ..., s_qLength(t)},
where each frame is 500 ms long and each segment is divided into Length frames;
23) Windowing of the framed transformer sound:
The framed data are windowed for endpoint smoothing using a Hamming window, whose function w(t) is
w(t) = 0.54 − 0.46·cos(2πt/(M−1)), 0 ≤ t ≤ M−1,
where M is the frame length and t is the time;
the time-domain signal f_qp(t) of each frame is then obtained as
f_qp(t) = s_qp(t) × w(n),
where f_qp(t) is the time-domain signal of the p-th frame of the q-th segment, w(n) is the window function, and s_qp(t) is the signal value of the p-th frame of the q-th segment;
24) Denoising of the windowed sound with an adaptive filter:
f_qp(t) is sampled to obtain the digital signal sequence X_i(n), and the filter initial values are set, where W(k) is the optimal weight coefficient, μ is the convergence factor (chosen relative to λ to ensure convergence), and λ is the largest eigenvalue of the correlation matrix;
the actual output estimate of the filter is computed as
y(k) = W^T(k)·X_i(n),
where y(k) is the estimated output, W^T(k) is the transpose of the optimal weight coefficient, and X_i(n) is the input signal sequence;
the error signal e(k) is computed as
e(k) = d(k) − y(k),
where d(k) is the desired output; the filter coefficients at time k+1 are then updated:
W(k+1) = W(k) + μ·e(k)·X(k),
where W(k+1) is the optimal weight coefficient at time k+1, W(k) is the optimal weight coefficient at time k, e(k) is the error at time k, and X(k) is the input sequence at time k;
the steepest descent method is iterated to minimize the error signal, yielding the adaptively filtered, denoised output y(k);
25) The adaptively filtered, denoised output y(k) is augmented by cutting, noise addition, and pitch adjustment of the acoustic signal.
The sound feature extraction from the acoustic signal data comprises the following steps:
31) The augmented y(k) is inverse-transformed to regenerate s(t), and s(t) is pre-emphasized according to
y(z) = s(z)·H(z),
where y(z) is the pre-emphasized output, s(z) is the Z-domain representation of the signal s(t), and H(z) is the transfer function of a high-pass filter:
H(z) = 1 − μz^(−1),
the z-domain form of the high-pass filter, where μ takes a value between 0.9 and 1.0;
32) Framing of the pre-emphasized transformer sound data s_q(t):
With the transformer voiceprint frame length set to 30 ms, the transformer sound data are divided into frames of 30 ms each;
33) Windowing of the framed sound data: the frames are windowed with a Hamming window. Let the output of the first two preprocessing steps be S(n) and the window function be W(n); then
S'(n) = S(n) × W(n),
where the window function W(n) is
W(n) = (1 − a) − a·cos(2πn/(N−1)), 0 ≤ n ≤ N−1,
in which n denotes the sample index over the range 0 ≤ n ≤ N−1, N is the number of sampling points, and different values of the set constant a yield different Hamming windows;
34) Fast Fourier transform of the windowed data:
An FFT is performed on each frame to obtain the frequency-domain signal X(k):
X(k) = Σ_{n=0}^{N−1} x(n)·e^(−j2πnk/N), 0 ≤ k ≤ N−1,
where X(k) is the frequency-domain output, x(n) is the time-domain input, and N is the number of sampling points;
35) Mel filtering of the fast-Fourier-transformed data, using the Mel conversion formula
Mel(f) = 2595·lg(1 + f/700),
where f is the physical frequency and Mel(f) is the Mel frequency; the resulting Mel frequencies are filtered by a bank of M triangular Mel-scale filters H_m(k):
H_m(k) = 0 for k < f(m−1);
H_m(k) = (k − f(m−1)) / (f(m) − f(m−1)) for f(m−1) ≤ k ≤ f(m);
H_m(k) = (f(m+1) − k) / (f(m+1) − f(m)) for f(m) ≤ k ≤ f(m+1);
H_m(k) = 0 for k > f(m+1),
where f(m) denotes the center frequency of the m-th filter;
the logarithmic energy of each filter output is then computed:
E(m) = ln( Σ_{k=0}^{N−1} |X(k)|²·H_m(k) ), 0 ≤ m < M,
where E(m) is the logarithmic energy and H_m(k) is the filter bank;
36) Cepstral analysis of the Mel-filtered data to extract the MFCC coefficients C(n), obtained via the discrete cosine transform (DCT):
C(n) = Σ_{m=0}^{M−1} E(m)·cos( πn(m + 0.5)/M ),
where M is the number of filters.
The construction of the transformer fault detection model comprises the following steps:
41) A transformer fault detection model is defined on the basis of a double-gated convolutional neural network, comprising two gated convolution layers, two pooling layers, one fully connected layer, and one output layer;
42) Definition of the gated convolution layer: the gated convolution layer extracts features by convolving the input data with convolution kernels to obtain feature maps, whose depth depends on the number of kernels set;
assuming the input is X ∈ R^(A×B), where A and B are the length and width of the input data, the convolution operation is defined as
x_j^l = f( Σ_{i=1}^{M} x_i^(l−1) * k_ij^l + b_j^l ),
where x_j^l is the j-th feature map of the l-th convolution layer, x_i^(l−1) is the i-th feature map of convolution layer l−1, M is the number of feature maps, k_ij^l is the convolution kernel, and b_j^l is the bias; the specific parameters of k_ij^l and b_j^l are determined through training optimization, and f is the activation function, commonly ReLU or sigmoid:
ReLU(x) = max(0, x), sigmoid(x) = 1/(1 + e^(−x));
The gated output is expressed as
h(X) = (X*W + b) ⊗ σ(X*V + c),
where h(X) is the output of the gated CNN, W and V are different convolution kernels, b and c are different biases, ⊗ denotes element-wise multiplication of the matrices, σ is the sigmoid gating function, and X is the output of the previous layer;
43) Design of the pooling layers: max pooling or average pooling is used to define the pooling layers;
44) The fully connected layer is built, and a softmax classifier is used as the output layer for the final classification output.
The training of the transformer fault detection model comprises the following steps:
51) The MFCC coefficients extracted from the training sample set are input into the transformer fault detection model;
52) The MFCC coefficients are iterated through forward and backward propagation within the transformer fault detection model, and the parameters are optimized to obtain the trained transformer fault detection model.
Beneficial Effects
Compared with the prior art, the transformer fault detection method based on acoustic signals and deep learning technology of the present invention can detect transformer faults from voiceprint signals (rather than vibration signals). Transformer sounds are collected and labeled into two categories, "normal" and "fault", to build a dataset of transformer operating-condition sounds. Against the interference of environmental noise, the dataset is denoised in advance; and, considering the practicality of the trained model, a series of data augmentation operations is applied to strengthen the robustness of the subsequently built neural network. The transformer sound dataset then undergoes MFCC feature extraction and is fed into the network for training. When a basic CNN is used for fault detection in more complex situations, the small amount of audio data or the long audio duration leads to low training efficiency, overfitting, and low detection accuracy.
Aiming at the transformer fault detection effect, the present invention designs a double-gated convolutional neural network model for transformer sound detection, providing a favorable methodological basis and a concrete detection model for online transformer fault monitoring.
Description of the Drawings
Figure 1 is a sequence diagram of the method of the present invention;
Figure 2 is a logic flow chart of the implementation of the method of the present invention;
Figure 3 is a structural diagram of the double-gated convolutional neural network involved in the present invention.
Detailed Description
To provide a further understanding of the structural features and effects of the present invention, preferred embodiments are described in detail below in conjunction with the accompanying drawings:
As shown in Figures 1 and 2, the transformer fault detection method based on acoustic signals and deep learning technology of the present invention comprises the following steps:
Step 1: acquisition of power transformer sound data. Transformer sound data are collected in the field by voiceprint acquisition sensors, labeled into two categories, "normal" and "fault", and defined as the training sample set. In a laboratory setting, the acquired transformer sound data can be divided into two classes, normal operation and faulty operation, to build an experimental database, which is then split into training and test sets in a fixed proportion.
Step 2: preprocessing of the acoustic signals in the training sample set. The collected power transformer sound data are denoised using the preprocessing methods of segmentation, framing, windowing, and adaptive filtering; the acoustic signals are then augmented by cutting, noise addition, and pitch adjustment. The specific steps are as follows:
(1) Segmentation of the collected power transformer sound data s(t):
The acquired power transformer sound data are divided into segments
s(t) = {s_1(t), s_2(t), ..., s_q(t), ..., s_r(t)},
and the total length L of the voiceprint data is calculated as
L = time × f_sample = r × r_L,
where f_sample is the sampling frequency of the sound, time is the sampling duration, r is the number of segments, and r_L is the segment length.
(2) Framing of the segmented transformer sound data s_q(t):
With the transformer voiceprint frame length set to 500 ms, each segment is divided into frames
s_q(t) = {s_q1(t), s_q2(t), ..., s_qp(t), ..., s_qLength(t)},
where each frame is 500 ms long and each segment is divided into Length frames.
(3) Windowing of the framed transformer sound:
The framed data are windowed for endpoint smoothing using a Hamming window, whose function w(t) is
w(t) = 0.54 − 0.46·cos(2πt/(M−1)), 0 ≤ t ≤ M−1,
where M is the frame length and t is the time;
the time-domain signal f_qp(t) of each frame is then obtained as
f_qp(t) = s_qp(t) × w(n),
where f_qp(t) is the time-domain signal of the p-th frame of the q-th segment, w(n) is the window function, and s_qp(t) is the signal value of the p-th frame of the q-th segment.
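As an illustration of steps (1)–(3), the following is a minimal NumPy sketch of segmenting, framing, and Hamming-windowing a recorded signal. The 500 ms frame length follows the text; the sampling rate, the number of segments, and the stand-in signal are assumptions made for the example:

```python
import numpy as np

def frame_and_window(seg, fs, frame_ms=500):
    """Split one segment into frames of frame_ms milliseconds and apply a
    Hamming window to each frame for endpoint smoothing."""
    frame_len = int(fs * frame_ms / 1000)              # samples per frame
    n_frames = len(seg) // frame_len                   # drop the ragged tail
    frames = seg[:n_frames * frame_len].reshape(n_frames, frame_len)
    w = np.hamming(frame_len)                          # 0.54 - 0.46*cos(2*pi*t/(M-1))
    return frames * w                                  # f_qp(t) = s_qp(t) * w(t)

fs = 16_000                                            # assumed sampling frequency
s = np.random.randn(10 * fs)                           # stand-in for the recording s(t)
segments = np.array_split(s, 5)                        # r = 5 segments, each of length r_L
windowed = [frame_and_window(seg, fs) for seg in segments]
print(windowed[0].shape)                               # (Length, frame_len)
```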
(4) Denoising of the windowed sound with an adaptive filter:
f_qp(t) is sampled to obtain the digital signal sequence X_i(n), and the filter initial values are set, where W(k) is the optimal weight coefficient, μ is the convergence factor (chosen relative to λ to ensure convergence), and λ is the largest eigenvalue of the correlation matrix;
the actual output estimate of the filter is computed as
y(k) = W^T(k)·X_i(n),
where y(k) is the estimated output, W^T(k) is the transpose of the optimal weight coefficient, and X_i(n) is the input signal sequence;
the error signal e(k) is computed as
e(k) = d(k) − y(k),
where d(k) is the desired output; the filter coefficients at time k+1 are then updated:
W(k+1) = W(k) + μ·e(k)·X(k),
where W(k+1) is the optimal weight coefficient at time k+1, W(k) is the optimal weight coefficient at time k, e(k) is the error at time k, and X(k) is the input sequence at time k;
the steepest descent method is iterated to minimize the error signal, yielding the adaptively filtered, denoised output y(k).
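The LMS-style update in step (4) can be sketched as below, in the classic noise-cancellation arrangement: the filter input is a noise reference, the desired signal d(k) is the noisy observation, and the error e(k) converges toward the clean signal. The filter order and the fixed convergence factor μ are assumptions; μ must be small enough (bounded in terms of the largest eigenvalue λ of the input correlation matrix) for the iteration to converge:

```python
import numpy as np

def lms_denoise(x, d, order=32, mu=0.005):
    """LMS adaptive filter: y(k) = W^T(k) X(k), e(k) = d(k) - y(k),
    W(k+1) = W(k) + mu * e(k) * X(k)."""
    W = np.zeros(order)                       # initial weight vector
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for k in range(order, len(x)):
        X = x[k - order:k][::-1]              # the most recent 'order' input samples
        y[k] = W @ X                          # actual output estimate
        e[k] = d[k] - y[k]                    # error against the desired signal
        W = W + mu * e[k] * X                 # steepest-descent weight update
    return y, e

fs = 16_000
t = np.arange(fs) / fs
hum = np.sin(2 * np.pi * 100 * t)             # stand-in for the transformer's own sound
noise = 0.5 * np.random.randn(fs)             # noise reference
d = hum + noise                               # observed (noisy) signal
y, e = lms_denoise(noise, d)                  # y tracks the noise, so e approximates the hum
```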
(5) The adaptively filtered, denoised output y(k) is augmented by cutting, noise addition, and pitch adjustment of the acoustic signal.
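The augmentation in step (5) might look like the following sketch; the crop proportion, noise level, and the naive resampling used for the pitch/speed change are illustrative assumptions rather than parameters fixed by the patent:

```python
import numpy as np

def augment(y, rng=np.random.default_rng(0)):
    """Produce extra training variants of a denoised clip y:
    random crop (cutting), additive noise, and a resampling-based pitch change."""
    crop_len = int(0.8 * len(y))                        # cutting: keep a random 80% slice
    start = rng.integers(0, len(y) - crop_len)
    cropped = y[start:start + crop_len]

    noisy = y + 0.02 * rng.standard_normal(len(y))      # add low-level Gaussian noise

    rate = rng.uniform(0.9, 1.1)                        # naive pitch/speed change
    idx = np.clip((np.arange(int(len(y) / rate)) * rate).astype(int), 0, len(y) - 1)
    pitched = y[idx]
    return cropped, noisy, pitched

cropped, noisy, pitched = augment(np.random.randn(16_000))
```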
Step 3: sound feature extraction from the acoustic signal data. As in speech recognition, audio data can be applied to fault detection in two ways: the first is to feed the one-dimensional signal in directly for processing; the second is to convert the audio into an image-like representation from which audio features are extracted for subsequent processing. One of the most commonly used features is the Mel-scale Frequency Cepstral Coefficients, MFCC for short. Mel cepstral analysis is applied to the preprocessed power transformer sound data to extract the MFCC coefficients, which are later fed into the network to generate feature maps. The specific steps are as follows:
(1) The augmented y(k) is inverse-transformed to regenerate s(t), and s(t) is pre-emphasized according to
y(z) = s(z)·H(z),
where y(z) is the pre-emphasized output, s(z) is the Z-domain representation of the signal s(t), and H(z) is the transfer function of a high-pass filter:
H(z) = 1 − μz^(−1),
the z-domain form of the high-pass filter, where μ takes a value between 0.9 and 1.0. Passing the sound data through this high-pass filter boosts the energy of the high-frequency band so that it is as consistent as possible with the low-frequency band, allowing the spectrum to be obtained with the same signal-to-noise ratio over the entire frequency band.
(2) Framing of the pre-emphasized transformer sound data s_q(t):
With the transformer voiceprint frame length set to 30 ms, the transformer sound data are divided into frames of 30 ms each.
(3) Windowing of the framed sound data. The purpose of the window function is to improve the continuity between the left and right ends of each frame of sound, which is achieved by multiplying each frame by the window function.
The frames are windowed with a Hamming window. Let the output of the first two preprocessing steps be S(n) and the window function be W(n); then
S'(n) = S(n) × W(n),
where the window function W(n) is
W(n) = (1 − a) − a·cos(2πn/(N−1)), 0 ≤ n ≤ N−1,
in which n denotes the sample index over the range 0 ≤ n ≤ N−1, N is the number of sampling points, and different values of the set constant a yield different Hamming windows.
(4) Fast Fourier transform of the windowed data. The fast Fourier transform (FFT) is a commonly used way to convert a time-domain signal into the frequency domain.
An FFT is performed on each frame to obtain the frequency-domain signal X(k):
X(k) = Σ_{n=0}^{N−1} x(n)·e^(−j2πnk/N), 0 ≤ k ≤ N−1,
where X(k) is the frequency-domain output, x(n) is the time-domain input, and N is the number of sampling points.
(5) Mel filtering of the fast-Fourier-transformed data, using the Mel conversion formula
Mel(f) = 2595·lg(1 + f/700),
where f is the physical frequency and Mel(f) is the Mel frequency; the resulting Mel frequencies are filtered by a bank of M triangular Mel-scale filters H_m(k):
H_m(k) = 0 for k < f(m−1);
H_m(k) = (k − f(m−1)) / (f(m) − f(m−1)) for f(m−1) ≤ k ≤ f(m);
H_m(k) = (f(m+1) − k) / (f(m+1) − f(m)) for f(m) ≤ k ≤ f(m+1);
H_m(k) = 0 for k > f(m+1),
where f(m) denotes the center frequency of the m-th filter;
the logarithmic energy of each filter output is then computed:
E(m) = ln( Σ_{k=0}^{N−1} |X(k)|²·H_m(k) ), 0 ≤ m < M,
where E(m) is the logarithmic energy and H_m(k) is the filter bank.
(6) Cepstral analysis of the Mel-filtered data to extract the MFCC coefficients C(n), obtained via the discrete cosine transform (DCT):
C(n) = Σ_{m=0}^{M−1} E(m)·cos( πn(m + 0.5)/M ),
where M is the number of filters. The larger M is, the more feature values are extracted per frame and the more information is captured, so the signal can be described more accurately.
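Putting steps (1)–(6) together, a compact MFCC-extraction sketch in NumPy/SciPy is shown below. The pre-emphasis coefficient and 30 ms frame length follow the text; the FFT size, the number of Mel filters M, and the number of retained coefficients are assumptions for the example:

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(s, fs, frame_ms=30, n_mels=26, n_ceps=13, mu=0.97):
    s = np.append(s[0], s[1:] - mu * s[:-1])             # pre-emphasis: H(z) = 1 - mu*z^-1
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(s) // frame_len
    frames = s[:n_frames * frame_len].reshape(n_frames, frame_len)
    frames = frames * np.hamming(frame_len)              # windowing
    nfft = 512
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2       # |X(k)|^2 per frame

    # Triangular Mel-scale filter bank H_m(k)
    mel_max = 2595 * np.log10(1 + (fs / 2) / 700)        # Mel(f) = 2595*lg(1 + f/700)
    hz = 700 * (10 ** (np.linspace(0, mel_max, n_mels + 2) / 2595) - 1)
    bins = np.floor((nfft + 1) * hz / fs).astype(int)
    H = np.zeros((n_mels, nfft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        H[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        H[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge

    E = np.log(power @ H.T + 1e-10)                      # log filter-bank energies E(m)
    return dct(E, type=2, axis=1, norm='ortho')[:, :n_ceps]     # DCT -> C(n)

coeffs = mfcc(np.random.randn(16_000), 16_000)           # 1 s of stand-in audio
print(coeffs.shape)                                      # (n_frames, 13)
```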
Step 4: construction of the transformer fault detection model. The transformer fault detection model is built on the basis of the double-gated convolutional network model and the characteristics of transformer acoustic signals.
The acoustic signal of a transformer reflects its operating condition. Traditional manual troubleshooting requires experienced personnel to perform on-site listening checks. This way of detecting transformer faults has a certain lag: typically, after a transformer fails at some location, experienced staff must intervene and check the transformers one by one. Building such rich troubleshooting experience takes years of hands-on training and considerable time and cost. An online transformer monitoring system using deep learning can monitor faults quickly and conveniently; however, current deep learning approaches, like traditional fault detection methods, still mostly use vibration data, and research on fault detection using audio data alone remains scarce. Compared with vibration data, audio data are easier and cheaper to acquire and better suited to large-scale practical application. The present invention therefore proposes a fault detection method based on a double-gated convolutional neural network that uses transformer audio data, and designs a model for transformer fault detection and identification based on acoustic signals.
Because transformers are usually installed outdoors, their acoustic signals often carry strong noise; when the noise energy is too high, the true acoustic signal emitted by the transformer itself is drowned out, and the trained detection model performs poorly. The preceding steps of the present invention denoise the collected acoustic signals using adaptive filtering, reducing the noise energy while preserving the transformer's own signal. On top of denoising, a series of data augmentation operations expands the sample set, so that the network is trained on more varied and challenging samples, strengthening the robustness of the transformer detection model.
Regarding the input processing of transformer audio data: if the audio were fed into the network directly in one-dimensional form, the memory required for training would be prohibitively large. Moreover, if one-dimensional sound data were input directly as a time-domain signal, the network could extract only a small fraction of the feature information, which is unfavorable for training the network model. Therefore, before the data enter the network, MFCC feature extraction converts the time-domain signal into a time-frequency representation that is input as a two-dimensional signal. MFCC coefficients are time-frequency features containing richer information, so the network extracts more effective features and the trained model performs better.
Ordinary convolutional neural networks, affected by the audio format, often suffer from low training efficiency and overfitting, which lowers the fault detection accuracy. Unlike an ordinary CNN, the present invention adopts the gated convolution structure, adding a gating switch to the output of the convolutional network. The gate acts as a buffer on information flowing through training, mitigating overfitting and improving training efficiency. Tailored to the characteristics of transformer sound, the present invention adds a second gated convolution layer on top of the single-gated convolutional network, producing a double-gated convolutional neural network. With two gated convolution layers, the network can effectively extract deep features of the transformer's own sound and improve the training effect, yielding a well-performing trained model.
The specific steps are as follows:
(1) As shown in Figure 3, a transformer fault detection model is defined on the basis of a double-gated convolutional neural network, comprising two gated convolution layers, two pooling layers, one fully connected layer, and one output layer.
(2) Definition of the gated convolution layer: the gated convolution layer extracts features by convolving the input data with convolution kernels to obtain feature maps, whose depth depends on the number of kernels set;
assuming the input is X ∈ R^(A×B), where A and B are the length and width of the input data, the convolution operation is defined as
x_j^l = f( Σ_{i=1}^{M} x_i^(l−1) * k_ij^l + b_j^l ),
where x_j^l is the j-th feature map of the l-th convolution layer, x_i^(l−1) is the i-th feature map of convolution layer l−1, M is the number of feature maps, k_ij^l is the convolution kernel, and b_j^l is the bias; the specific parameters of k_ij^l and b_j^l are determined through training optimization, and f is the activation function, commonly ReLU or sigmoid:
ReLU(x) = max(0, x), sigmoid(x) = 1/(1 + e^(−x)).
The gated output is expressed as
h(X) = (X*W + b) ⊗ σ(X*V + c),
where h(X) is the output of the gated CNN, W and V are different convolution kernels, b and c are different biases, ⊗ denotes element-wise multiplication of the matrices, σ is the sigmoid gating function, and X is the output of the previous layer. The gated structure reduces gradient dispersion in the model, facilitates training, and simplifies the model structure while retaining its nonlinear representation capability.
(3) Design of the pooling layers: when the convolution kernels are small, the feature maps after the convolution layer are still large; a pooling operation is then needed to reduce dimensionality, cut the number of features, and strengthen the network's robustness to the input features. Max pooling or average pooling is used here to define the pooling layers.
(4) The fully connected layer is built, and a softmax classifier is used as the output layer for the final classification output.
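A minimal PyTorch sketch of the double-gated convolutional network described in steps (1)–(4) is given below: two gated convolution layers, each computing h(X) = (X*W + b) ⊗ σ(X*V + c) and followed by max pooling, then a fully connected layer with a softmax output over the "normal"/"fault" classes. The channel counts, kernel sizes, adaptive pooling, and input dimensions are illustrative assumptions, not parameters fixed by the patent:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution layer: h(X) = (X*W + b) ⊗ sigmoid(X*V + c)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # X*W + b
        self.gate = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)  # X*V + c
    def forward(self, x):
        return self.conv(x) * torch.sigmoid(self.gate(x))

class DoubleGatedCNN(nn.Module):
    """Two gated conv layers, two pooling layers, one FC layer, softmax output."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            GatedConv2d(1, 16), nn.MaxPool2d(2),
            GatedConv2d(16, 32), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),      # assumed: decouples FC size from input size
        )
        self.fc = nn.Linear(32 * 4 * 4, n_classes)
    def forward(self, x):                       # x: (batch, 1, n_frames, n_ceps)
        z = self.features(x).flatten(1)
        return torch.softmax(self.fc(z), dim=1)

model = DoubleGatedCNN()
mfcc_batch = torch.randn(8, 1, 98, 13)          # a batch of 8 MFCC feature maps
print(model(mfcc_batch).shape)                  # torch.Size([8, 2])
```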
Step 5: training of the transformer fault detection model. The extracted MFCC coefficients are input into the transformer fault detection model for training. The specific steps are as follows:
(1) The MFCC coefficients extracted from the training sample set are input into the transformer fault detection model;
(2) The MFCC coefficients are iterated through forward and backward propagation within the transformer fault detection model, and the parameters are optimized to obtain the trained transformer fault detection model.
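The forward/backward iteration in steps (1)–(2) can be sketched as follows, reusing the DoubleGatedCNN above. Since the model already emits softmax probabilities, the negative log-likelihood loss is applied to their logarithm; the Adam optimizer, learning rate, epoch count, and stand-in data are assumptions:

```python
import torch
import torch.nn as nn

model = DoubleGatedCNN()                             # sketched above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.NLLLoss()

mfcc_batch = torch.randn(8, 1, 98, 13)               # stand-in MFCC feature maps
labels = torch.randint(0, 2, (8,))                   # 0 = "normal", 1 = "fault" (assumed)
for epoch in range(10):
    optimizer.zero_grad()
    probs = model(mfcc_batch)                        # forward propagation
    loss = loss_fn(torch.log(probs + 1e-9), labels)  # NLL on log-probabilities
    loss.backward()                                  # backward propagation
    optimizer.step()                                 # parameter optimization
```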
Step 6: acquisition and preprocessing of the acoustic signal data of the transformer under test. The acoustic signal data of the transformer under test are acquired and denoised, and MFCC coefficients are extracted from the denoised acoustic signal data.
Step 7: obtaining the fault detection result for the transformer under test. The MFCC coefficients are input into the trained transformer fault detection model to obtain the transformer fault detection result.
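For steps 6 and 7, inference on a transformer under test reuses the mfcc() and model sketches above; the recording, the denoising assumed to have already been applied to it, and the mapping of class index 1 to "fault" are assumptions:

```python
import numpy as np
import torch

model.eval()
with torch.no_grad():
    audio = np.random.randn(16_000)                  # stand-in for a denoised field recording
    feats = torch.tensor(mfcc(audio, 16_000), dtype=torch.float32)[None, None]
    probs = model(feats)                             # (1, 2) class probabilities
    print("fault" if probs.argmax(1).item() == 1 else "normal")
```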
The foregoing has shown and described the basic principles, main features, and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited by the above embodiments, which, together with the description, merely illustrate the principles of the invention; various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The scope of protection claimed by the present invention is defined by the appended claims and their equivalents.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111026413.3A CN113707176B (en) | 2021-09-02 | 2021-09-02 | A Transformer Fault Detection Method Based on Acoustic Signal and Deep Learning Technology |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111026413.3A CN113707176B (en) | 2021-09-02 | 2021-09-02 | A Transformer Fault Detection Method Based on Acoustic Signal and Deep Learning Technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113707176A CN113707176A (en) | 2021-11-26 |
CN113707176B true CN113707176B (en) | 2022-09-09 |
Family
ID=78657439
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111026413.3A Active CN113707176B (en) | 2021-09-02 | 2021-09-02 | A Transformer Fault Detection Method Based on Acoustic Signal and Deep Learning Technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113707176B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113466616A (en) * | 2021-06-22 | 2021-10-01 | 海南电网有限责任公司乐东供电局 | Method and device for quickly positioning cable fault point |
CN114155878B (en) * | 2021-12-03 | 2022-06-10 | 北京中科智易科技有限公司 | Artificial intelligence detection system, method and computer program |
CN114638256B (en) * | 2022-02-22 | 2024-05-31 | 合肥华威自动化有限公司 | Transformer fault detection method and system based on acoustic wave signal and attention network |
CN114842870A (en) * | 2022-03-15 | 2022-08-02 | 国网安徽省电力有限公司 | Voiceprint anomaly detection method based on multi-band self-supervision |
CN114543983A (en) * | 2022-03-29 | 2022-05-27 | 阿里云计算有限公司 | Vibration signal identification method and device |
CN115358110A (en) * | 2022-07-25 | 2022-11-18 | 国网江苏省电力有限公司淮安供电分公司 | Transformer fault detection system based on acoustic sensor array |
CN115392293B (en) * | 2022-08-01 | 2024-08-13 | 中国南方电网有限责任公司超高压输电公司昆明局 | Transformer fault monitoring method, device, computer equipment and storage medium |
CN115424635B (en) * | 2022-11-03 | 2023-02-10 | 南京凯盛国际工程有限公司 | Cement plant equipment fault diagnosis method based on sound characteristics |
CN116189711B (en) * | 2023-04-26 | 2023-06-30 | 四川省机场集团有限公司 | Transformer fault identification method and device based on acoustic wave signal monitoring |
CN116645978B (en) * | 2023-06-20 | 2024-02-02 | 方心科技股份有限公司 | Electric power fault sound class increment learning system and method based on super-computing parallel environment |
CN117909806A (en) * | 2023-12-07 | 2024-04-19 | 国网青海省电力公司海北供电公司 | Acoustic-vibration combined transformer detection method based on bone conduction and optical fiber sensing |
CN117894317B (en) * | 2024-03-14 | 2024-05-24 | 沈阳智帮电气设备有限公司 | Box-type transformer on-line monitoring method and system based on voiceprint analysis |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109961034B (en) * | 2019-03-18 | 2022-12-06 | 西安电子科技大学 | Video target detection method based on convolution gating cyclic neural unit |
CN110335617A (en) * | 2019-05-24 | 2019-10-15 | 国网新疆电力有限公司乌鲁木齐供电公司 | A kind of noise analysis method in substation |
CN110534118B (en) * | 2019-07-29 | 2021-10-08 | 安徽继远软件有限公司 | Transformer/reactor fault diagnosis method based on voiceprint recognition and neural network |
CN110514924B (en) * | 2019-08-12 | 2021-04-27 | 武汉大学 | Power transformer winding fault location method based on deep convolutional neural network fusion visual identification |
CN112910812B (en) * | 2021-02-25 | 2021-10-22 | 电子科技大学 | Modulation mode identification method for deep learning based on space-time feature extraction |
CN113192532A (en) * | 2021-03-29 | 2021-07-30 | 安徽理工大学 | Mine hoist fault acoustic analysis method based on MFCC-CNN |
- 2021-09-02: CN application CN202111026413.3A granted as patent CN113707176B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN113707176A (en) | 2021-11-26 |
Similar Documents
Publication | Title
---|---
CN113707176B (en) | A Transformer Fault Detection Method Based on Acoustic Signal and Deep Learning Technology
CN110245608B (en) | Underwater target identification method based on half tensor product neural network
CN108680245A (en) | Method and device for classifying whale and porpoise click calls and traditional sonar signals
CN111723701B (en) | Underwater target identification method
CN108630209B (en) | Marine organism identification method based on feature fusion and deep confidence network
CN107421741A (en) | A rolling bearing fault diagnosis method based on convolutional neural networks
WO2019232846A1 (en) | Speech differentiation method and apparatus, and computer device and storage medium
CN113192532A (en) | Mine hoist fault acoustic analysis method based on MFCC-CNN
CN110987434A (en) | Rolling bearing early fault diagnosis method based on denoising technology
CN110795843A (en) | Method and device for identifying faults of rolling bearing
CN112674780A (en) | Automatic atrial fibrillation signal detection method in electrocardiogram abnormal signals
WO2018166316A1 (en) | Speaker's flu symptoms recognition method fused with multiple end-to-end neural network structures
CN114863937B (en) | Hybrid bird song recognition method based on deep transfer learning and XGBoost
CN115602152B (en) | Voice enhancement method based on multi-stage attention network
CN108694953A (en) | An automatic bird-call recognition method based on Mel sub-band parameter features
CN111986699A (en) | Sound event detection method based on fully convolutional network
CN116741148A (en) | Voice recognition system based on digital twinning
CN114487129A (en) | Damage identification method for flexible materials based on acoustic emission technology
CN112735468A (en) | MFCC-based automobile seat motor abnormal noise detection method
CN111365624A (en) | An intelligent terminal and method for brine pipeline leakage detection
CN115457980A (en) | Automatic voice quality evaluation method and system without reference voice
CN116935892A (en) | Industrial valve anomaly detection method based on audio key feature dynamic aggregation
CN113111786A (en) | Underwater target identification method based on small-sample-trained image convolutional network
CN109741733B (en) | Speech phoneme recognition method based on consistent routing network
CN115376526A (en) | A power equipment fault detection method and system based on voiceprint recognition
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |