
CN111222285A - Transformer high active value prediction method based on voiceprint and neural network - Google Patents


Info

Publication number
CN111222285A
CN111222285A (application CN201911402536.5A)
Authority
CN
China
Prior art keywords
neural network
high active
audio data
pooling
transformer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911402536.5A
Other languages
Chinese (zh)
Inventor
季坤
张晨晨
丁国成
朱太云
李坚林
陈庆涛
吴兴旺
杨海涛
尹睿涵
秦少瑞
付成成
王维佳
胡心颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
State Grid Anhui Electric Power Co Ltd
Anhui Jiyuan Software Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
State Grid Anhui Electric Power Co Ltd
Anhui Jiyuan Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd, State Grid Anhui Electric Power Co Ltd, Anhui Jiyuan Software Co Ltd filed Critical Electric Power Research Institute of State Grid Anhui Electric Power Co Ltd
Priority to CN201911402536.5A priority Critical patent/CN111222285A/en
Publication of CN111222285A publication Critical patent/CN111222285A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract



The invention discloses a method for predicting the high active value of a transformer based on sound information and a neural network, comprising the following steps: (1) collecting the audio data corresponding to the duration of the transformer's high active value; (2) uniformly cutting the time into multiple segments and dividing them into a training set, a test set and a validation set by segment; (3) performing feature extraction on the audio data to obtain its Filterbank features; (4) constructing a convolutional neural network composed of an input layer, four convolution-pooling units, a global average pooling layer, a fully connected layer and an output layer; (5) inputting the training-data spectrograms and the real high active values into the convolutional neural network and training it to obtain a transformer high-active-value prediction model; (6) inputting the test set into the prediction model for verification. The invention avoids the influence of a complex power environment and offers high prediction accuracy.


Description

Transformer high active value prediction method based on voiceprint and neural network
Technical Field
The invention relates to the field of power equipment parameter monitoring methods, in particular to a transformer high-active-value prediction method based on voiceprints and a neural network.
Background
When a power transformer operates, the high active value represents the load and reflects electricity consumption: the larger the high active value, the larger the transformer's load factor and transmitted power. Traditionally, the high active value is calculated from voltage and current measurements, which requires installing voltage and current monitoring devices at the transformer site. However, electromagnetic radiation from the power equipment and lines introduces errors into the data collected on site, and because the monitoring devices are electrically connected to the equipment and lines, they are easily affected by the instantaneous large voltages and currents that accompany high active values, which also makes the monitoring data inaccurate. There is therefore a need for a method that enables non-contact measurement of the high active value of a power transformer.
Disclosure of Invention
The invention aims to provide a transformer high-active-value prediction method based on voiceprints and a neural network, so as to solve the problem of inaccurate high-active-value monitoring data for power transformers in the prior art.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
The transformer high active value prediction method based on the voiceprint and the neural network comprises the following steps:
(1) acquiring the real high active values of the transformer and the audio data corresponding to each high active value over its duration;
(2) uniformly dividing the duration of the high active value into a plurality of time segments, and dividing the audio data and its corresponding high active values into a training set, a test set and a validation set by time segment;
(3) extracting features from the audio data to obtain its Filterbank features, which are multidimensional tensors and can be regarded as spectrograms;
(4) constructing a convolutional neural network composed of an input layer, four convolution-pooling units, a global average pooling layer, a fully connected layer and an output layer, where the input layer takes the Filterbank feature spectrogram of the audio data, each convolution-pooling unit consists of a convolutional layer and an AvgPooling pooling layer, and the output layer is a 1-dimensional high active value;
(5) taking the Filterbank feature spectrograms obtained in step (3) as training data, and training the convolutional neural network constructed in step (4) on these spectrograms and the corresponding real high active values to obtain the transformer high-active-value prediction model;
(6) inputting the test set into the prediction model and comparing the output with the validation set to verify the model.
Further, in step (1), the high active values are acquired simultaneously and quantized with a set step size, so that the continuous high active values are mapped to discrete values and the audio data corresponding to each discrete high active value are obtained.
Further, in step (3), the Filterbank features are extracted by first framing the audio data and computing the short-time Fourier transform, then computing the multidimensional mel log energy, and finally expressing the audio data of each time segment as a multidimensional tensor, i.e. a spectrogram.
Further, in the Filterbank extraction of step (3), the filters are triangular, and the start frequencies of the filters are spaced at equal intervals on the mel scale.
Further, in the convolutional neural network constructed in step (4), the convolutional layer in each convolution-pooling unit extracts different features of the corresponding sound segment from the spectrogram, the pooling layers perform average pooling on the spectrogram, and the high active prediction value is finally output through the global average pooling layer and the fully connected layer.
Compared with the prior art, the invention collects the transformer's audio data over the duration of the high active value and builds a neural-network prediction model, so the high active value can be predicted by the model without installing on-site monitoring devices connected to the power equipment and lines. The method thus avoids the influence of a complex power environment and achieves high prediction accuracy.
Drawings
Fig. 1 is a histogram of the distribution of high active values in step (1) according to an embodiment of the present invention.
FIG. 2 is a data set distribution diagram in step (2) according to an embodiment of the present invention.
Fig. 3 is a diagram of the Filterbank feature extraction process in step (3) according to an embodiment of the present invention.
Fig. 4 is a diagram of the Filterbank characteristic spectrum in step (3) according to the embodiment of the present invention.
FIG. 5 is a diagram of the convolutional neural network in step (4) according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating the high active value prediction in an embodiment of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The transformer high active value prediction method based on the voiceprint and the neural network comprises the following steps:
(1) Acquire the audio data corresponding to the duration of the transformer's high active value, acquire the high active values at the same time, and quantize the acquired values with a set step size, so that the continuous high active values are mapped to discrete values and the audio data corresponding to each discrete value are obtained.
Since the high active value varies continuously, it must be quantized for classification. When quantized with a step size of 1, the distribution histogram is as shown in fig. 1: over a day, the high active value is concentrated mainly in the range 22-28, while values greater than 40 have few samples, so the sample counts across classes can differ by tens of times. Quantizing with a step size of 1 therefore produces a data-imbalance problem that makes classification difficult for the neural network.
To make better use of the data, the high active value is instead quantized with a step size of 3, which alleviates the imbalance to some extent while still reflecting the value's trend over time; the quantization rules are given in Table 1 below.
TABLE 1 quantization rules Table
Interval        Quantized value
(20.5, 23.5)    22
(23.5, 26.5)    25
(26.5, 29.5)    28
(29.5, 32.5)    31
(32.5, 35.5)    34
(35.5, 38.5)    37
(38.5, 41.5)    40
(41.5, 44.5)    43
(44.5, 47.5)    46
Quantization maps continuous values to discrete values and is a basic concept in digital signal processing; quantizing the data makes different data formats more uniform and easier for a computer to process. In this embodiment the high active value is quantized with a step size of 3, so the quantized values fall into 9 categories.
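The quantization rules of Table 1 can be sketched as a small function. This is an illustrative reading of the table, not code from the patent; the function name, the clipping of out-of-range values to the nearest class, and the treatment of exact interval boundaries are assumptions.

```python
def quantize_high_active(value, step=3.0, base=22.0, n_classes=9):
    """Map a continuous high active value onto the discrete grid of Table 1.

    The class centres are base, base + step, ..., i.e. 22, 25, ..., 46; each
    interval (centre - step/2, centre + step/2) maps to its centre.  Values
    outside the table's range are clipped to the nearest class (assumption).
    """
    index = round((value - base) / step)        # nearest class centre
    index = max(0, min(n_classes - 1, index))   # clip to the 9 classes
    return base + index * step

# Example mappings, consistent with Table 1:
q1 = quantize_high_active(24.0)   # 24.0 lies in (23.5, 26.5) -> 25.0
q2 = quantize_high_active(39.0)   # 39.0 lies in (38.5, 41.5) -> 40.0
```

Rounding to the nearest multiple of the step size reproduces the open intervals of Table 1 because each interval is centred on its quantized value with half-width step/2.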
(2) Uniformly divide the duration of the high active value into a plurality of time segments, and divide the audio data and corresponding high active values into a training set, a test set and a validation set by time segment.
The data set used in this embodiment is audio data collected in March 2019 from a transformer in Fuyang, sampled at 48 kHz. Although the high active value changes over time, it changes slowly, so it can be regarded as constant within 10 s. The 15-minute recordings are therefore segmented into 10 s clips, and all audio files are divided 4:1:1 into a training set, a test set and a validation set; when splitting, all clips from the same original 15-minute recording are assigned to a single set. The data distribution of the three sets is shown in fig. 2.
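The 4:1:1 split with the constraint that clips from one 15-minute recording never straddle two sets can be sketched as follows. The function name, the shuffling, and the use of integer recording identifiers are illustrative assumptions; the patent only states the ratio and the constraint.

```python
import random

def split_by_recording(recording_ids, ratios=(4, 1, 1), seed=0):
    """Split source recordings (not individual 10 s clips) into
    train/test/validation sets, so that all clips cut from the same
    15-minute recording end up in exactly one set."""
    ids = sorted(set(recording_ids))
    random.Random(seed).shuffle(ids)        # deterministic shuffle
    total = sum(ratios)
    n_train = len(ids) * ratios[0] // total
    n_test = len(ids) * ratios[1] // total
    train = set(ids[:n_train])
    test = set(ids[n_train:n_train + n_test])
    val = set(ids[n_train + n_test:])
    return train, test, val

train, test, val = split_by_recording(range(60))
# 60 recordings at 4:1:1 -> 40 / 10 / 10 recordings per set
```

Splitting at the recording level rather than the clip level prevents near-duplicate 10 s clips from leaking between the training and evaluation sets.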
(3) Filterbank feature extraction
1) Pre-emphasis: compensates the high-frequency components of the sound signal suppressed by the sound-producing system and highlights the high-frequency formants.
2) Framing: the input audio is divided into frames with a frame length of 200 ms and a frame shift of 25 ms.
3) Windowing: a Hamming window is applied to each frame so that both ends of the frame decay to near 0.
4) Short-time Fourier transform: each frame is transformed from the time domain to the frequency domain.
5) Mel filtering: the signal is filtered by a set of triangular window filters distributed linearly on the mel scale, and the mel log energy of each frame is computed, with a dimensionality of 256.
Each 10 s audio sample is thus divided into 400 frames and expressed as a 1x400x256 tensor. The Filterbank feature-extraction flow and a resulting spectrogram are shown in figs. 3 and 4, respectively.
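The mel-scale spacing of the triangular filters in step 5) can be illustrated with the standard Hz-to-mel conversion. This is a sketch, not the patent's code: the conversion formula (2595 log10(1 + f/700)), the band limits of 0 Hz to the 24 kHz Nyquist frequency of the 48 kHz recordings, and the function names are all assumptions.

```python
import math

def hz_to_mel(f):
    """Convert a frequency in Hz to the mel scale (common O'Shaughnessy form)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_edges(n_filters=256, f_low=0.0, f_high=24000.0):
    """Edge frequencies (Hz) of n_filters triangular filters whose edges
    are spaced at equal intervals on the mel scale; filter i spans
    edges[i]..edges[i+2] and peaks at edges[i+1]."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    mels = [m_low + i * (m_high - m_low) / (n_filters + 1)
            for i in range(n_filters + 2)]
    return [mel_to_hz(m) for m in mels]

edges = mel_filter_edges()
# equal spacing on the mel axis means the spacing in Hz grows with frequency
```

Because the spacing is uniform in mel, the filters are narrow at low frequencies and progressively wider at high frequencies, mimicking the ear's frequency resolution.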
(4) Construct a convolutional neural network composed of an input layer, four convolution-pooling units, a global average pooling layer, a fully connected layer and an output layer, where each convolution-pooling unit consists of a convolutional layer and an AvgPooling pooling layer.
Fig. 5 shows the convolutional neural network used for high-active-value prediction in this embodiment. It contains 4 convolutional layers and 4 pooling layers: the convolution kernels have size 5, the channel counts are 32, 64, 128 and 256 in sequence, and each pooling layer is an AvgPooling layer with stride 2. After all convolution and pooling layers, a global average pooling layer reduces both the time and frequency dimensions to 1, and a final fully connected layer maps a 512-dimensional feature vector to the output high active value.
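The shape bookkeeping through the four convolution-pooling units can be sketched as follows. This assumes 'same'-padded convolutions (so only the stride-2 pooling changes the spatial size), which the patent does not state; note also that this bookkeeping ends at 256 channels, whereas the text mentions a 512-dimensional feature vector before the fully connected layer, a discrepancy the source leaves unexplained.

```python
def cnn_output_shapes(time=400, freq=256, channels=(32, 64, 128, 256)):
    """Track (channels, time, frequency) shapes through four
    convolution-pooling units (size-5 'same' convolutions followed by
    stride-2 AvgPooling) and a final global average pooling layer."""
    shapes = [(1, time, freq)]            # input spectrogram: 1x400x256
    t, f = time, freq
    for c in channels:
        t, f = t // 2, f // 2             # stride-2 AvgPooling halves both axes
        shapes.append((c, t, f))
    shapes.append((channels[-1], 1, 1))   # global average pooling -> 1x1
    return shapes

shapes = cnn_output_shapes()
# (1,400,256) -> (32,200,128) -> (64,100,64) -> (128,50,32)
# -> (256,25,16) -> (256,1,1); a fully connected layer then maps the
# pooled feature vector to the single output high active value
```

Global average pooling makes the network's output independent of the exact input length, which suits variable-duration audio clips.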
(5) Extract the Filterbank features from the training set obtained in step (2) to obtain the spectrograms of the audio data, and input the spectrograms together with the corresponding real high active values into the convolutional neural network to obtain the prediction model.
(6) Input the test set into the prediction model and compare the output with the validation set to verify the model.
To demonstrate the practical effect of the invention, a continuous audio segment was reserved when dividing the data set; it appears in none of the training, test or validation sets described above. Unlike those three sets, this portion of the data is continuous in time, and its high active value varies sinusoidally over time.
As shown in fig. 6, the tested audio covers 62 hours, from 22:00 on 19 March 2019 to 12:00 on 22 March 2019. The solid line shows the real variation of the high active value over time (normalized to the [0,1] interval), and the dotted line shows the high active value predicted from the audio files. It can be seen from fig. 6 that the prediction is essentially correct and reflects the trend of the high active value.
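The normalization of the curves in fig. 6 to the [0,1] interval can be sketched as min-max scaling. The patent only says the values are normalized; that it is min-max normalization, and the function name, are assumptions.

```python
def min_max_normalize(values):
    """Scale a sequence of high active values into the [0, 1] interval:
    the smallest value maps to 0.0 and the largest to 1.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # degenerate constant sequence
    return [(v - lo) / (hi - lo) for v in values]

normalized = min_max_normalize([22.0, 25.0, 46.0, 34.0])
# -> [0.0, 0.125, 1.0, 0.5]
```

Plotting both the real and predicted series on the same normalized scale makes the trend comparison in fig. 6 independent of the absolute magnitude of the high active value.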
The embodiments described above are only preferred embodiments of the invention and do not limit its concept or scope. Modifications and improvements made to the technical solution by those skilled in the art without departing from the design concept of the invention fall within its protection scope; the claimed technical content of the invention is fully set forth in the claims.

Claims (5)

1. A transformer high active value prediction method based on voiceprint and a neural network, characterized in that it comprises the following steps:
(1) acquiring the real high active values of the transformer and the audio data corresponding to each high active value over its duration;
(2) uniformly dividing the duration of the high active value into a plurality of time segments, and dividing the audio data and its corresponding high active values into a training set, a test set and a validation set by time segment;
(3) extracting features from the audio data to obtain its Filterbank features, which are multidimensional tensors and can be regarded as spectrograms;
(4) constructing a convolutional neural network composed of an input layer, four convolution-pooling units, a global average pooling layer, a fully connected layer and an output layer, where the input layer takes the Filterbank feature spectrogram of the audio data, each convolution-pooling unit consists of a convolutional layer and an AvgPooling pooling layer, and the output layer is a 1-dimensional high active value;
(5) taking the Filterbank feature spectrograms obtained in step (3) as training data, and training the convolutional neural network constructed in step (4) on these spectrograms and the corresponding real high active values to obtain the transformer high-active-value prediction model;
(6) inputting the test set into the prediction model and comparing the output with the validation set to verify the model.
2. The transformer high-active-value prediction method based on voiceprint and a neural network as claimed in claim 1, characterized in that: in step (1), the high active values are acquired simultaneously and quantized with a set step size, so that the continuous high active values are mapped to discrete values and the audio data corresponding to each discrete high active value are obtained.
3. The transformer high-active-value prediction method based on voiceprint and a neural network as claimed in claim 1, characterized in that: in step (3), the Filterbank features are extracted by first framing the audio data and computing the short-time Fourier transform, then computing the multidimensional mel log energy, and finally expressing the audio data of each time segment as a multidimensional tensor, i.e. a spectrogram.
4. The transformer high-active-value prediction method based on voiceprint and a neural network as claimed in claim 1, characterized in that: in the Filterbank extraction of step (3), the filters are triangular, and the start frequencies of the filters are spaced at equal intervals on the mel scale.
5. The transformer high-active-value prediction method based on voiceprint and a neural network as claimed in claim 1, characterized in that: in the convolutional neural network constructed in step (4), the convolutional layer in each convolution-pooling unit extracts different features of the corresponding sound segment from the spectrogram, the pooling layers perform average pooling on the spectrogram, and the high active prediction value is finally output through the global average pooling layer and the fully connected layer.
CN201911402536.5A 2019-12-31 2019-12-31 Transformer high active value prediction method based on voiceprint and neural network Pending CN111222285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911402536.5A CN111222285A (en) 2019-12-31 2019-12-31 Transformer high active value prediction method based on voiceprint and neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911402536.5A CN111222285A (en) 2019-12-31 2019-12-31 Transformer high active value prediction method based on voiceprint and neural network

Publications (1)

Publication Number Publication Date
CN111222285A true CN111222285A (en) 2020-06-02

Family

ID=70827929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911402536.5A Pending CN111222285A (en) 2019-12-31 2019-12-31 Transformer high active value prediction method based on voiceprint and neural network

Country Status (1)

Country Link
CN (1) CN111222285A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116110432A (en) * 2022-12-16 2023-05-12 上汽大众汽车有限公司 A neural network-based method and system for processing vehicle noise data

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016163511A (en) * 2015-03-05 2016-09-05 中国電力株式会社 Power demand amount prediction system, power demand amount prediction method, and program
CN107578124A (en) * 2017-08-28 2018-01-12 国网山东省电力公司电力科学研究院 Short-term power load forecasting method based on multi-layer improved GRU neural network
US20180330234A1 (en) * 2017-05-11 2018-11-15 Hussein Al-barazanchi Partial weights sharing convolutional neural networks
CN109612708A (en) * 2018-12-28 2019-04-12 东北大学 On-line detection system and method of power transformer based on improved convolutional neural network
CN109785181A (en) * 2017-11-03 2019-05-21 罗斯蒙特公司 For predicting the trend analysis function of the health status of electric power asset
CN109800929A (en) * 2019-03-25 2019-05-24 国网河北省电力有限公司经济技术研究院 A kind of Load Forecasting, device and calculate equipment
CN109840691A (en) * 2018-12-31 2019-06-04 天津求实智源科技有限公司 Non-intrusion type subitem electricity estimation method based on deep neural network
CN110376457A (en) * 2019-06-28 2019-10-25 同济大学 Non-intrusion type load monitoring method and device based on semi-supervised learning algorithm
CN110415709A (en) * 2019-06-26 2019-11-05 深圳供电局有限公司 Transformer working state identification method based on voiceprint identification model
CN110534118A (en) * 2019-07-29 2019-12-03 安徽继远软件有限公司 Transformer/reactor method for diagnosing faults based on Application on Voiceprint Recognition and neural network

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016163511A (en) * 2015-03-05 2016-09-05 中国電力株式会社 Power demand amount prediction system, power demand amount prediction method, and program
US20180330234A1 (en) * 2017-05-11 2018-11-15 Hussein Al-barazanchi Partial weights sharing convolutional neural networks
CN107578124A (en) * 2017-08-28 2018-01-12 国网山东省电力公司电力科学研究院 Short-term power load forecasting method based on multi-layer improved GRU neural network
CN109785181A (en) * 2017-11-03 2019-05-21 罗斯蒙特公司 For predicting the trend analysis function of the health status of electric power asset
CN109612708A (en) * 2018-12-28 2019-04-12 东北大学 On-line detection system and method of power transformer based on improved convolutional neural network
CN109840691A (en) * 2018-12-31 2019-06-04 天津求实智源科技有限公司 Non-intrusion type subitem electricity estimation method based on deep neural network
CN109800929A (en) * 2019-03-25 2019-05-24 国网河北省电力有限公司经济技术研究院 A kind of Load Forecasting, device and calculate equipment
CN110415709A (en) * 2019-06-26 2019-11-05 深圳供电局有限公司 Transformer working state identification method based on voiceprint identification model
CN110376457A (en) * 2019-06-28 2019-10-25 同济大学 Non-intrusion type load monitoring method and device based on semi-supervised learning algorithm
CN110534118A (en) * 2019-07-29 2019-12-03 安徽继远软件有限公司 Transformer/reactor method for diagnosing faults based on Application on Voiceprint Recognition and neural network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
宋知用 (Song Zhiyong): "MATLAB Speech Information Analysis and Synthesis" (《MATLAB语音信息分析与合成》), Beihang University Press, pages 38-43 *
江涛 (Jiang Tao) et al.: "Power system harmonics and their detection technology", Journal of Jiangxi Vocational and Technical College of Electricity (《江西电力职业技术学院学报》), no. 03, 28 September 2006 *
贾京龙 (Jia Jinglong) et al.: "Transformer fault diagnosis method based on convolutional neural networks", Electrical Measurement & Instrumentation (《电测与仪表》) *
贾京龙 (Jia Jinglong) et al.: "Transformer fault diagnosis method based on convolutional neural networks", Electrical Measurement & Instrumentation (《电测与仪表》), no. 13, 10 July 2017 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116110432A (en) * 2022-12-16 2023-05-12 上汽大众汽车有限公司 A neural network-based method and system for processing vehicle noise data

Similar Documents

Publication Publication Date Title
CN109256138B (en) Identity verification method, terminal device and computer readable storage medium
CN107610715B (en) Similarity calculation method based on multiple sound characteristics
CN113314144B (en) Voice recognition and power equipment fault early warning method, system, terminal and medium
CN108761287B (en) Transformer partial discharge type identification method
CN111325095A (en) Method and system for intelligent detection of equipment health status based on acoustic signal
CN110490071A (en) A kind of substation's Abstraction of Sound Signal Characteristics based on MFCC
CN113327626A (en) Voice noise reduction method, device, equipment and storage medium
CN111912519B (en) Transformer fault diagnosis method and device based on voiceprint frequency spectrum separation
Zhang et al. Fault identification based on PD ultrasonic signal using RNN, DNN and CNN
CN112052712B (en) Power equipment state monitoring and fault identification method and system
CN115840120B (en) A high-voltage cable partial discharge abnormal monitoring and early warning method
CN112147474A (en) XLPE power cable typical defect partial discharge type identification system and method
CN116230013A (en) Transformer fault voiceprint detection method based on x-vector
CN112908344A (en) Intelligent recognition method, device, equipment and medium for bird song
CN114352486B (en) A classification-based method for detecting wind turbine blade audio faults
CN111696580A (en) Voice detection method and device, electronic equipment and storage medium
CN116340812A (en) Transformer partial discharge fault mode identification method and system
CN108764184A (en) A kind of separation method of heart and lung sounds signal, device, equipment and storage medium
CN112486137A (en) Method and system for constructing fault feature library of active power distribution network and fault diagnosis method
CN115954017A (en) HHT-based engine small sample sound abnormal fault identification method and system
Chen et al. An audio scene classification framework with embedded filters and a DCT-based temporal module
Zhang et al. Temporal Transformer Networks for Acoustic Scene Classification.
CN117316178A (en) Voiceprint recognition method, device, equipment and medium for power equipment
CN117074870A (en) Cable diagnosis method and system
CN111222285A (en) Transformer high active value prediction method based on voiceprint and neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200602