CN117373484B - A switch cabinet voiceprint fault detection method based on feature transformation - Google Patents
- Publication number
- CN117373484B CN117373484B CN202311292303.0A CN202311292303A CN117373484B CN 117373484 B CN117373484 B CN 117373484B CN 202311292303 A CN202311292303 A CN 202311292303A CN 117373484 B CN117373484 B CN 117373484B
- Authority
- CN
- China
- Prior art keywords
- frequency
- frame
- time
- voiceprint
- gaussian window
- Prior art date
- Legal status (assumed; not a legal conclusion)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
- Y04S10/52—Outage or fault management, e.g. fault detection or location
Abstract
The invention discloses a switch cabinet voiceprint fault detection method based on feature transformation, relating to the technical field of power equipment fault analysis. The method comprises: collecting an observed voiceprint signal of a switch cabinet, initializing a Gaussian window, and dividing the observed voiceprint signal into a plurality of frames with the Gaussian window; obtaining the time-frequency matrix of every frame by the generalized S transform; obtaining a final time-frequency matrix from the correlation coefficient of the time-frequency matrices of adjacent frames and the change in instantaneous frequency; and performing dimension reduction on the final time-frequency matrix to obtain a feature vector, which is input into a residual neural network for fault detection of the switch cabinet. The invention adjusts the size of the Gaussian window according to the change in instantaneous frequency and improves the generalized S transform with this adaptive window size, so that the time-frequency matrix obtained from the transform accurately reflects the spectral characteristics of the voiceprint signal and the features derived from it are more representative.
Description
Technical Field
The invention relates to the technical field of power equipment fault analysis, in particular to a switch cabinet voiceprint fault detection method based on feature transformation.
Background
Switch cabinets are important equipment in a power system, and their faults affect both the normal operation of the power system and the reliability of the equipment. To improve the efficiency and accuracy of fault detection, researchers have therefore begun to focus on applying voiceprint technology to switch cabinet fault detection.
Voiceprint technology performs fault recognition and classification based on characteristics of sound signals; faults in a switch cabinet can be detected by analyzing and comparing parameters of the sound signal such as frequency, amplitude, and harmonics. It is a non-invasive detection method: it requires no destructive or contact testing of the equipment and needs only the sound signals the equipment produces, which means detection can proceed while the equipment operates normally, without shutting it down or disassembling it. Moreover, voiceprint technology can collect and analyze the equipment's sound signals in real time, detecting and diagnosing switch cabinet faults before partial discharge or overheating occurs and giving early warning of latent faults, thereby avoiding the production stoppages or accidents that equipment failure would cause.
In existing voiceprint technology, voiceprint recognition generally includes the following steps: extract features from the collected sound signals, then use the extracted features for voiceprint recognition. When the generalized S transform is used, the sound signal is first divided into consecutive frames, the size of each frame being determined by a window function; the signal in each frame is then subjected to the generalized S transform to generate a corresponding time-frequency matrix; feature extraction is performed on that time-frequency matrix, and the extracted features are used for voiceprint recognition.
However, in the steps above the size of the window function is fixed once set manually and cannot adapt. When the window is chosen too large, the frequency of the voiceprint information inside one window may fluctuate widely, so the time-frequency matrix obtained from that window mixes many features and lacks discriminating power, and the extracted features are consequently unrepresentative. Conversely, if the window is chosen too small, the voiceprint information captured by neighboring windows overlaps heavily, producing redundant information that can mask subtle but important voiceprint features; the time-frequency matrix obtained from such windows again loses discriminating power, and the extracted features are likewise unrepresentative.
Disclosure of Invention
The invention provides a feature transformation-based switch cabinet voiceprint fault detection method, which can solve the problem of poor distinguishing capability of a time-frequency matrix obtained according to a window with a fixed size in the prior art.
The invention provides a switch cabinet voiceprint fault detection method based on feature transformation, which comprises the following steps:
Collecting an observed voiceprint signal of a switch cabinet, initializing a Gaussian window with random size, and dividing the observed voiceprint signal into a plurality of frames with fixed length through the Gaussian window;
Obtaining the time-frequency matrices of all frames by the generalized S transform, and calculating the correlation coefficient r(A_T, B_T) of the time-frequency matrices of adjacent frames A and B;
If the correlation coefficient r(A_T, B_T) of the time-frequency matrices of adjacent frames A and B is smaller than a set first threshold, the time-frequency matrices A_T and B_T of the current adjacent frames A and B are the final time-frequency matrices;
If the correlation coefficient r(A_T, B_T) of the time-frequency matrices of adjacent frames A and B is not smaller than the set first threshold, adjusting the lengths of adjacent frames A and B by iterating the size of the current Gaussian window and calculating the comprehensive instantaneous frequency of the length-adjusted frames A and B; when the difference between the comprehensive instantaneous frequency at some iteration's Gaussian window size and that at the previous iteration's exceeds a set second threshold, stopping the iteration to obtain the optimal Gaussian window, and obtaining the final time-frequency matrices from that optimal window;
and performing dimension reduction on the final time-frequency matrix to obtain a feature vector, which is input into a trained residual neural network for fault detection of the switch cabinet.
Further, the method further comprises the following steps: noise reduction pretreatment is carried out on the collected observed voiceprint signals:
converting the observation voiceprint signal into a frequency domain by using short-time Fourier transform, and extracting the frequency spectrum of the observation voiceprint signal;
spectrum decomposition is carried out on the frequency spectrum of the observed voiceprint signal:
F=WH (1)
wherein,
F is the spectrum of the observed voiceprint signal;
W is a source spectrum non-negative matrix obtained by decomposing the frequency spectrum of the observation voiceprint signal, and each column of the source spectrum non-negative matrix represents the energy distribution of one source signal on the frequency domain;
H is a non-negative matrix of mixing coefficients obtained after spectral decomposition of the observed voiceprint signal, each row of which represents a period of mixing coefficients for describing the contribution of each source signal in the observed voiceprint signal;
Designing an objective function and obtaining the target source-spectrum non-negative matrix W* and the target mixing-coefficient non-negative matrix H* by minimizing the objective function with a gradient descent method:
(W*, H*) = argmin_{W,H} ‖F − WH‖²  (2)
wherein,
argmin represents finding the parameters that minimize the objective function;
‖F − WH‖² is the objective function;
W* and H* are, respectively, the source-spectrum non-negative matrix and the mixing-coefficient non-negative matrix that minimize the objective function;
multiplying the target source-spectrum non-negative matrix W* by the target mixing-coefficient non-negative matrix H* (the multiplicative inverse-spectrogram transformation) to obtain the spectrogram F̂ = W*H* of the switch cabinet voiceprint signal after noise separation:
And performing inverse short-time Fourier transform on the spectrogram of the noise-separated switch cabinet voiceprint signal, and converting the spectrogram into a time domain signal to obtain an observed voiceprint time domain signal after noise reduction.
Further, initializing a Gaussian window of random size and dividing the observed voiceprint signal into a plurality of fixed-length frames by moving the Gaussian window comprises:
Manually setting parameter values of frame shifts, wherein the frame shifts are used for determining time intervals between adjacent frames A and B;
according to the parameter value of frame shift, shifting an initialized random-sized Gaussian window along an observed voiceprint signal, and multiplying the initialized random-sized Gaussian window with the observed voiceprint signal at a corresponding position to obtain a plurality of frames with fixed lengths:
x(n)=s(n)*g(n) (3)
where s(n) is the observed voiceprint signal, g(n) is the Gaussian window, and x(n) is the resulting frame.
Further, the obtaining the time-frequency matrix corresponding to all frames according to the generalized S transformation includes:
Each frame is transformed using the generalized S transform:
GST_k(t, f) = ∫ x_k(u) · g(t, u) · e^(−i2πfu) du  (4)
wherein,
GST_k(t, f) is the energy value obtained by the generalized S transform of the k-th frame's observed voiceprint signal at a given time t and frequency f;
x_k(u) is the observed voiceprint signal of the k-th frame, and u is the integration variable;
g(t, u) = (1/(σ√(2π))) · e^(−(t−u)²/(2σ²)) is the Gaussian window function, and σ is the standard deviation of the Gaussian window;
e^(−i2πfu) is the frequency-modulation term, i is the imaginary unit, and π is the circular constant;
and integrating all energy values of each frame into that frame's time-frequency matrix, whose rows represent time and whose columns represent frequency; each element of the time-frequency matrix is the energy value at the corresponding time t and frequency f.
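As a point of reference, the classical (non-generalized) S transform can be computed efficiently via the FFT. The sketch below is a minimal NumPy implementation of that standard fixed-window form; the patent's generalized variant additionally adapts the Gaussian window size, which this sketch does not, and the test signal's parameters are illustrative:

```python
import numpy as np

def s_transform(x):
    """Classical (fixed-window) Stockwell/S-transform via the FFT.
    Rows are frequency bins k = 1..N/2, columns are time samples.
    The patent's generalized variant additionally adapts the Gaussian
    window size; this sketch uses the standard fixed form."""
    N = len(x)
    X = np.fft.fft(x)
    K = N // 2
    S = np.zeros((K, N), dtype=complex)
    m = np.fft.fftfreq(N) * N                           # signed bin offsets
    for k in range(1, K + 1):
        gauss = np.exp(-2.0 * np.pi**2 * m**2 / k**2)   # frequency-domain Gaussian window
        S[k - 1] = np.fft.ifft(np.roll(X, -k) * gauss)  # shift spectrum by k, localize, invert
    return S

N = 128
x = np.cos(2 * np.pi * 10 * np.arange(N) / N)      # pure tone at bin 10
S = s_transform(x)
peak_row = int(np.argmax(np.abs(S).mean(axis=1)))  # strongest frequency row
```

For the pure tone at bin 10, the row with the greatest average energy is row index 9 (bin k = 10), which is the localization property the time-frequency matrix relies on.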
Further, calculating the correlation coefficient r(A_T, B_T) of the time-frequency matrices of adjacent frames A and B comprises:
r(A_T, B_T) = cov(A_T, B_T) / (std(A_T) · std(B_T))  (5)
wherein,
r(A_T, B_T) is the correlation coefficient of the time-frequency matrices of adjacent frames A and B;
A_T and B_T are the time-frequency matrices of adjacent frames A and B;
cov(A_T, B_T) is the covariance of the time-frequency matrices A_T and B_T;
std(A_T) and std(B_T) are the standard deviations of the time-frequency matrices A_T and B_T, respectively.
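This correlation coefficient is the Pearson correlation computed over all matrix elements. A minimal sketch, with random matrices standing in for real time-frequency matrices:

```python
import numpy as np

def tf_correlation(A_T, B_T):
    """Pearson correlation r(A_T, B_T) over all elements of two
    time-frequency matrices: cov(A_T, B_T) / (std(A_T) * std(B_T))."""
    a, b = A_T.ravel(), B_T.ravel()
    cov = np.mean((a - a.mean()) * (b - b.mean()))
    return cov / (a.std() * b.std())

rng = np.random.default_rng(0)
A = rng.random((8, 8))             # stand-in time-frequency matrix
r_self = tf_correlation(A, A)      # identical frames
r_flip = tf_correlation(A, -A)     # sign-flipped frames
```

Identical matrices give r = 1 and sign-flipped matrices give r = −1, the two extremes of the similarity measure the method thresholds.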
Further, adjusting the lengths of adjacent frames A and B by iterating the size of the current Gaussian window comprises:
iterating the current Gaussian window size with a step length p: in each iteration, the Gaussian window of frame A is enlarged by p and the Gaussian window of frame B is shrunk by p;
in each iteration, the resized Gaussian windows are multiplied with the observed voiceprint signal at the corresponding positions to obtain the length-adjusted adjacent frames A and B.
Further, calculating the comprehensive instantaneous frequency of the length-adjusted frames A and B comprises:
β_ip = (Σ_{t∈A} f_A(t) + Σ_{t∈B} f_B(t)) / (L_A^(ip) + L_B^(ip))  (6)
wherein,
β_ip is the comprehensive instantaneous frequency obtained after the Gaussian window of frame A is enlarged by i·p and the Gaussian window of frame B is shrunk by i·p;
L_A^(ip) and L_B^(ip) are the size of frame A's Gaussian window after being enlarged by i·p and the size of frame B's Gaussian window after being shrunk by i·p;
f_A(t) and f_B(t) are the instantaneous frequencies at each moment within the length-adjusted frames A and B, respectively.
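Read as a pooling of the per-moment instantaneous frequencies over both resized frames, weighted by the total window size, the comprehensive instantaneous frequency can be sketched as below. This pooling is our reading of the symbols listed above; the original expression is not reproduced in the text:

```python
import numpy as np

def comprehensive_freq(f_A, f_B):
    """Pool the per-moment instantaneous frequencies of the two resized
    frames into one value, dividing by the total window size
    L_A + L_B.  This pooling is an assumption; the patent text only
    names the symbols involved."""
    return (np.sum(f_A) + np.sum(f_B)) / (len(f_A) + len(f_B))

# frame A: 6 moments at 50 Hz; frame B: 2 moments at 10 Hz
beta = comprehensive_freq(np.full(6, 50.0), np.full(2, 10.0))
```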
Further, obtaining the instantaneous frequency comprises:
performing a short-time Fourier transform on adjacent frames A and B to obtain the spectral representations of frames A and B;
calculating the phase difference of adjacent frames A and B:
Δφ(k) = φ_B(k) − φ_A(k)  (7)
wherein φ_A(k) and φ_B(k) respectively represent the phases at frequency index k in the spectra of frame A and frame B;
using the phase difference Δφ(k) to calculate a local frequency estimate f_approx(k):
f_approx(k) = Δφ(k) / (2π · Δt)  (8)
wherein Δt is the frame shift;
and performing linear time interpolation on the local frequency estimate f_approx(k) to obtain the instantaneous frequency at each moment of frames A and B.
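A minimal demonstration of this phase-difference estimate on a tone that sits exactly on an FFT bin; the tone frequency, frame length, and frame shift are illustrative values, not taken from the patent:

```python
import numpy as np

fs, L, hop = 1000, 100, 4          # sample rate (Hz), frame length, frame shift (samples)
f0 = 50.0                          # tone frequency; lies exactly on FFT bin k = 5
x = np.cos(2 * np.pi * f0 * np.arange(L + hop) / fs)

k = int(round(f0 * L / fs))                           # frequency index of the tone
phi_a = np.angle(np.fft.fft(x[:L])[k])                # phase of frame A at bin k
phi_b = np.angle(np.fft.fft(x[hop:hop + L])[k])       # phase of frame B at bin k

dphi = (phi_b - phi_a + np.pi) % (2 * np.pi) - np.pi  # phase difference, wrapped to (-pi, pi]
dt = hop / fs                                         # frame shift in seconds
f_approx = dphi / (2 * np.pi * dt)                    # local frequency estimate
```

For this on-bin tone the phase advances by 2π·f0·Δt between the two frames, so the estimate recovers f0 = 50 Hz; note that an unwrapped Δφ only determines the frequency up to multiples of 1/Δt, which is why the frame shift must be small relative to the frequencies of interest.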
Further, stopping the iteration when the difference between the comprehensive instantaneous frequency at some iteration's Gaussian window size and that at the previous iteration's exceeds the set second threshold, and obtaining the optimal Gaussian window, comprises:
determining the iteration stop condition:
|β_ip − β_(i−1)p| > θ_2  (10)
wherein,
β_ip is the comprehensive instantaneous frequency obtained after the Gaussian window of frame A is enlarged by i·p and the Gaussian window of frame B is shrunk by i·p;
β_(i−1)p is the comprehensive instantaneous frequency obtained after the Gaussian window of frame A is enlarged by (i−1)·p and the Gaussian window of frame B is shrunk by (i−1)·p;
θ_2 is the set second threshold;
the Gaussian window of frame A enlarged by (i−1)·p and the Gaussian window of frame B shrunk by (i−1)·p are taken as the optimal Gaussian windows.
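The stopping rule in formula (10) can be sketched as a simple loop; `beta_of` below is a stand-in for the full comprehensive-instantaneous-frequency computation, and the jump point of the toy function is illustrative:

```python
def find_optimal_window(beta_of, p, theta2, max_iter=100):
    """Iterate the window adjustment in steps of p until the comprehensive
    instantaneous frequency jumps by more than theta2 between consecutive
    iterations (|beta_ip - beta_(i-1)p| > theta_2), then return the
    previous adjustment (i-1)*p as the optimal one."""
    prev_beta = beta_of(0)
    for i in range(1, max_iter + 1):
        beta = beta_of(i * p)
        if abs(beta - prev_beta) > theta2:
            return (i - 1) * p          # optimal window adjustment
        prev_beta = beta
    return max_iter * p

# stand-in beta: flat at 50 Hz until the adjustment reaches 12 samples
opt = find_optimal_window(lambda d: 50.0 if d < 12 else 80.0, p=4, theta2=5.0)
```

With step p = 4 the jump is first seen at i = 3 (adjustment 12), so the loop returns the previous adjustment, 8.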
Further, performing dimension reduction on the final time-frequency matrix to obtain a feature vector comprises:
standardizing each row of the time-frequency matrix so that its mean is 0 and its variance is 1;
calculating the covariance matrix of the standardized time-frequency matrix;
and obtaining the eigenvalues and corresponding eigenvectors by eigendecomposition of the covariance matrix.
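These three steps amount to principal component analysis. A minimal NumPy sketch (here each column is standardized, a common PCA convention, whereas the text says each row; matrix sizes are illustrative):

```python
import numpy as np

def pca_features(T, n_components=2):
    """Standardize each column of T to mean 0 / variance 1, compute the
    covariance matrix, eigendecompose it, and project T onto the leading
    eigenvectors (largest eigenvalues first)."""
    Z = (T - T.mean(axis=0)) / (T.std(axis=0) + 1e-12)  # standardize columns
    C = np.cov(Z, rowvar=False)                          # covariance matrix
    vals, vecs = np.linalg.eigh(C)                       # eigh returns ascending order
    order = np.argsort(vals)[::-1]                       # re-sort descending
    return Z @ vecs[:, order[:n_components]], vals[order]

rng = np.random.default_rng(0)
T = rng.random((50, 6))                                  # stand-in time-frequency matrix
feats, eigvals = pca_features(T)
```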
The embodiment of the invention provides a switch cabinet voiceprint fault detection method based on feature transformation, which has the following beneficial effects compared with the prior art:
In the invention, the sound signal is first divided into frames by a Gaussian window function of random size, and the signal in each frame undergoes the generalized S transform to generate that frame's time-frequency matrix. The correlation coefficient of the time-frequency matrices of adjacent frames is then calculated; when it is larger than the set first threshold, the Gaussian window is adjusted, and the optimal Gaussian window is found from how the instantaneous frequency changes while the window size is adjusted. The final time-frequency matrix is obtained with this optimal window and reduced in dimension to a feature vector, which is input into a trained residual neural network for fault detection of the switch cabinet.
The invention measures the similarity of adjacent frames by the correlation coefficient of their time-frequency matrices; when adjacent frames are similar, enlarging the window of the corresponding frame lets one window contain more of the similar voiceprint information, so the time-frequency matrix obtained from such frames is more discriminative. The window is then fine-tuned according to the change in instantaneous frequency: when the instantaneous frequency changes sharply while the window size is being adjusted, stopping the adjustment keeps the important frequency components of the sound signal distinguishable. Determining the optimal window size from the instantaneous-frequency behavior of the voiceprint signal lets the resulting time-frequency matrix reflect the signal's spectral characteristics more accurately, so the features derived from it are more representative, improving the stability and reliability of the voiceprint features.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
In the drawings:
FIG. 1 is a flow chart of a method for detecting voiceprint faults of a switch cabinet based on feature transformation;
FIG. 2 is a denoising process flow chart of a switch cabinet voiceprint fault detection method based on feature transformation;
FIG. 3 is a time-frequency matrix acquisition flow chart of a switch cabinet voiceprint fault detection method based on feature transformation;
fig. 4 is a framing schematic diagram of a method for detecting a voiceprint fault of a switch cabinet based on feature transformation.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, but it should be understood that the protection scope of the present invention is not limited by the specific embodiments.
A switch cabinet voiceprint fault detection method based on feature transformation specifically comprises the following steps:
Step 1, collecting observation voiceprint data of a switch cabinet, including observation voiceprint data in normal and fault states, dividing continuous observation voiceprint data into voiceprint fragments, classifying different types of voiceprint fragments, and constructing an observation voiceprint data set of the switch cabinet, wherein the method specifically comprises the following steps:
And 1.1, recording continuous observation voiceprint data of the switch cabinet by using high-quality recording equipment, wherein the continuous observation voiceprint data comprise observation voiceprint data of the switch cabinet in normal and fault states.
And 1.2, dividing the continuous observation voiceprint data into voiceprint fragments with the audio analysis tool Praat-Py, and labeling the different types of observed voiceprint data as normal or fault.
And 1.3, dividing the acquired data into a training set, a verification set and a test set, wherein the dividing ratio is 70% of the training set, 15% of the verification set and 15% of the test set.
And step 2, performing denoising operation on the acquired observation voiceprint data by using a Mono nonlinear hybrid model, and removing redundant information and interference.
Because the switch cabinet is high-voltage equipment, various noise sources may surround it, such as high-voltage arc sound, motor noise, mechanical vibration noise, air-flow noise, and environmental noise. The spectral characteristics and time-varying nature of these noise sources can interfere with the sound the switch cabinet produces in its working and fault states, so the collected observation voiceprint data must first be denoised, specifically as follows:
step 2.1, converting the observation voiceprint signal into a frequency domain by using short-time Fourier transform (STFT), and extracting a frequency spectrum of the observation voiceprint signal:
F=fft(E) (1)
wherein,
E is observed voiceprint data of the switch cabinet voiceprint;
F is the frequency spectrum corresponding to the switch cabinet observation voiceprint data.
Step 2.2, performing spectrogram decomposition on the spectrum of the observed voiceprint signal. The spectrogram decomposition method assumes that the mixed audio can be approximately expressed as the product of a source spectrum and mixing coefficients. The matrix W represents the source spectrum, each of its columns representing the energy distribution of one source signal in the frequency domain; the matrix H represents the mixing coefficients, each of its rows representing a time period of mixing coefficients that describes how much each source signal contributes to the mixed audio. Decomposing the spectrogram of the mixed audio separates the mixed signal into the spectrograms of the different source signals while also yielding the mixing coefficients that describe the mixing process, specifically:
F=WH (2)
wherein,
F is the spectrum of the observed voiceprint signal;
W is a source spectrum non-negative matrix obtained by decomposing the frequency spectrum of the observation voiceprint signal, and each column of the source spectrum non-negative matrix represents the energy distribution of one source signal on the frequency domain;
h is a non-negative matrix of mixing coefficients obtained after spectral decomposition of the observed voiceprint signal, each row of which represents a time period of the mixing coefficients, for describing the extent of contribution of each source signal in the observed voiceprint signal.
Step 2.3, randomly initializing a source spectrum non-negative matrix W and a mixing coefficient non-negative matrix H.
Step 2.4, designing an objective function and iteratively updating the source-spectrum non-negative matrix W and the mixing-coefficient non-negative matrix H by minimizing the objective function until a preset number of iterations is reached or a termination condition is met, obtaining the target source-spectrum non-negative matrix W* and the target mixing-coefficient non-negative matrix H*:
(W*, H*) = argmin_{W,H} ‖F − WH‖²  (3)
wherein,
argmin represents finding the parameters that minimize the objective function;
‖F − WH‖² is the objective function;
W* and H* are, respectively, the source-spectrum non-negative matrix and the mixing-coefficient non-negative matrix that minimize the objective function;
‖·‖ denotes the norm, here the Euclidean (Frobenius) norm: the square root of the sum of the squared elements of the matrix.
Step 2.5, multiplying the target source-spectrum non-negative matrix W* by the target mixing-coefficient non-negative matrix H* (the multiplicative inverse-spectrogram transformation) to reconstruct the spectrogram of the switch cabinet voiceprint signal after noise separation.
And 2.6, performing inverse short-time Fourier transform on the spectrogram of the noise-separated switch cabinet voiceprint signal, and converting the spectrogram into a time domain signal to obtain a time domain signal of the observed voiceprint after noise reduction.
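Steps 2.1-2.6 amount to non-negative matrix factorization of the magnitude spectrogram. The sketch below uses the standard multiplicative-update rules, a common alternative to the plain gradient descent named in the text that minimizes the same objective ‖F − WH‖²; the matrix sizes and iteration count are illustrative:

```python
import numpy as np

def nmf_separate(F, n_sources=2, n_iter=300, seed=0):
    """Factor a magnitude spectrogram F (freq x time) as F ~ W @ H with
    W, H >= 0 by minimizing ||F - WH||^2, using the standard
    multiplicative-update rules (not the plain gradient descent named
    in the text; both minimize the same objective)."""
    rng = np.random.default_rng(seed)
    n_freq, n_time = F.shape
    W = rng.random((n_freq, n_sources)) + 1e-6     # source spectra (columns)
    H = rng.random((n_sources, n_time)) + 1e-6     # mixing coefficients (rows)
    for _ in range(n_iter):
        H *= (W.T @ F) / (W.T @ W @ H + 1e-12)     # update mixing coefficients
        W *= (F @ H.T) / (W @ H @ H.T + 1e-12)     # update source spectra
    return W, H

# toy "spectrogram": an exact mix of two source spectra over 32 time steps
rng = np.random.default_rng(1)
F = rng.random((16, 2)) @ rng.random((2, 32))
W, H = nmf_separate(F)
rel_err = np.linalg.norm(F - W @ H) / np.linalg.norm(F)
```

The multiplicative form keeps W and H non-negative automatically, which is why it is often preferred over an unconstrained gradient step for this objective.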
And 3, framing the denoised switch cabinet voiceprint time domain signal.
Framing is a common technique in voiceprint signal processing, used to divide a continuous voiceprint signal into a plurality of fixed-length frames. Dividing the voiceprint signal into frames means that, when a certain time period of the signal is studied, only the voiceprint data within that period needs attention rather than the long-term time-domain behavior of the whole signal, so the local variation and short-time characteristics of the voiceprint signal can be described better. After framing, the time variability of the signal can also be analyzed better: because the voiceprint signal may have different frequencies and amplitudes in different time periods, framing helps extract and analyze the signal's temporal variation pattern. The method specifically comprises the following steps:
Step 3.1, manually setting the parameter value of the frame shift, which determines the time interval between adjacent frames. In framing a voiceprint signal, the frame shift (also called the frame step) is set to achieve overlap and continuity between adjacent frames in the time domain; it is the distance the start of a frame moves relative to the previous frame's position. Without a frame shift, the edges between frames would introduce an extra windowing effect that makes the voiceprint information at the boundaries inaccurate; setting a frame shift relieves this boundary effect to some extent and makes the transition between frames smoother. It also yields more sampling points in time, improving time-domain resolution: the smaller the frame shift, the more the frames overlap, and the more accurately rapid changes and transient characteristics of the signal can be captured.
Step 3.2, according to the frame-shift parameter, move a Gaussian window of initialized random size along the observed voiceprint signal and multiply it with the observed voiceprint signal at the corresponding position to obtain a number of fixed-length frames, each of which represents the local characteristics of the voiceprint signal over a period of time:
x(n)=s(n)*g(n) (4)
where s(n) is the observed voiceprint signal, g(n) is the Gaussian window, and x(n) is the resulting frame (the product in eq. (4) is taken sample by sample at the corresponding positions). Dividing the observed voiceprint signal into frames with a window function improves its resolution in the frequency domain, reduces frequency leakage, and increases the frequency resolution within each frame.
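As a concrete illustration of steps 3.1-3.2, the framing of eq. (4) can be sketched in numpy. The sampling rate, frame length, frame shift, and window standard deviation below are illustrative choices, not values from the patent:

```python
import numpy as np

def frame_signal(s, frame_len, frame_shift, sigma=None):
    """Split signal s into overlapping frames, each multiplied sample-wise
    by a Gaussian window (eq. 4): x(n) = s(n) * g(n)."""
    if sigma is None:
        sigma = frame_len / 6.0  # window std: an illustrative choice
    n = np.arange(frame_len)
    g = np.exp(-0.5 * ((n - (frame_len - 1) / 2) / sigma) ** 2)
    starts = range(0, len(s) - frame_len + 1, frame_shift)
    return np.array([s[i:i + frame_len] * g for i in starts])

# 1 s of a 50 Hz tone sampled at 8 kHz, 256-sample frames, 64-sample shift
fs = 8000
t = np.arange(fs) / fs
s = np.sin(2 * np.pi * 50 * t)
frames = frame_signal(s, frame_len=256, frame_shift=64)
print(frames.shape)  # (122, 256): overlapping frames of fixed length
```

A smaller `frame_shift` produces more, more heavily overlapping frames, which is the time-resolution trade-off discussed in step 3.1.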
Step 4, process the frame signals to obtain the final time-frequency matrix.
A Time-Frequency Matrix (Time-Frequency Matrix) is a two-dimensional Matrix that represents signals in the Time and Frequency domains. It visualizes the information of the signal in time and frequency and provides an intuitive way to analyze the time-frequency characteristics and time-frequency structure of the signal. The time-frequency matrix uses time as the horizontal axis and frequency as the vertical axis, and uses color or gray scale to represent the time-frequency energy or phase of the signal. It can clearly show the variation of the signal at different times and frequencies, as well as the dynamic characteristics of the spectrum. The time-frequency matrix is the basis of voiceprint signal processing and feature extraction, and the time-frequency features of the signals can be extracted by further processing, analyzing and feature extraction of the time-frequency matrix, and can be used in the application fields of signal classification, fault diagnosis and the like. The method comprises the following steps:
Step 4.1, each frame is transformed using a generalized S transform.
The generalized S-transform (Generalized S Transform, GST) is a time-frequency analysis method used to describe the characteristics of a signal in both the time and frequency domains. In the GST, the signal is weighted by a window function and modulated in the frequency domain to obtain a time-frequency analysis result; by adjusting the shape, size, and position of the window function, the generalized S-transform can select the time-frequency characteristics of interest for analysis. It specifically includes:
GST_k(t, f) = ∫ x_k(u) g(t − u) e^(−i2πfu) du

wherein,

GST_k(t, f) is the energy value obtained by the generalized S-transform from the observed voiceprint signal of the k-th frame at a given time t and frequency f;

x_k(u) is the observed voiceprint signal of the k-th frame, and u is the integration variable;

g(t, u) = (1/(σ√(2π))) e^(−(t−u)²/(2σ²)) is the Gaussian window function, and σ is the standard deviation of the Gaussian window;

e^(−i2πfu) is the frequency modulation term, i is the imaginary unit, and π is the circular constant.
Step 4.2, integrate all energy values obtained by the generalized S-transform to form a time-frequency matrix, in which the rows represent time, the columns represent frequency, and each element is the energy value at the corresponding time t and frequency f.
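Steps 4.1-4.2 can be sketched as a naive discrete implementation. The window width σ is a fixed illustrative value, and "energy" is taken here to be the squared magnitude of the transform; both are assumptions on our part:

```python
import numpy as np

def gst_time_frequency_matrix(x, sigma=8.0):
    """Naive generalized S-transform of one frame (steps 4.1-4.2).
    Returns a matrix whose rows index time t and columns frequency f;
    each element is the squared magnitude (energy) at (t, f)."""
    N = len(x)
    u = np.arange(N)
    tfm = np.empty((N, N // 2 + 1))
    for t in range(N):
        # Gaussian window g(t - u) centered at time t with std sigma
        g = np.exp(-0.5 * ((t - u) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        # windowed segment, then frequency modulation via the FFT
        tfm[t] = np.abs(np.fft.rfft(x * g)) ** 2  # |GST_k(t, f)|^2
    return tfm

frame = np.sin(2 * np.pi * 10 * np.arange(128) / 128)  # 10 cycles per frame
tfm = gst_time_frequency_matrix(frame)
print(tfm.shape)                    # (128, 65): time rows, frequency columns
print(int(np.argmax(tfm.sum(axis=0))))  # dominant frequency bin (near 10)
```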
Step 4.3, calculate the correlation coefficients of the time-frequency matrices corresponding to adjacent frames.
In a voiceprint signal, the correlation coefficient of the time-frequency matrices corresponding to adjacent frames represents the degree of similarity between those frames. Specifically, if the time-frequency matrices of two adjacent frames have similar energy distributions and harmonic structures at the same frequency points, the correlation coefficient between them will be higher, meaning that they have similar characteristics in the frequency domain. By calculating the correlation coefficients of the time-frequency matrices of consecutive frames, the spectral similarity of the voiceprint signal at different time points can therefore be evaluated. The correlation coefficient r(A_T, B_T) of the time-frequency matrices corresponding to the adjacent frames A and B is calculated as:

r(A_T, B_T) = cov(A_T, B_T) / (std(A_T) · std(B_T))

Wherein r(A_T, B_T) is the correlation coefficient of the time-frequency matrices corresponding to the adjacent frames A and B; A_T and B_T are the time-frequency matrices corresponding to frame A and frame B; cov(A_T, B_T) is the covariance of the time-frequency matrices A_T and B_T; std(A_T) and std(B_T) are the standard deviations of A_T and B_T, respectively.
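A minimal sketch of step 4.3, assuming the correlation coefficient is the Pearson coefficient computed over the flattened matrix elements:

```python
import numpy as np

def tfm_correlation(A_T, B_T):
    """Pearson correlation r(A_T, B_T) between two time-frequency
    matrices: cov(A_T, B_T) / (std(A_T) * std(B_T)), taken over the
    flattened elements (step 4.3)."""
    a = A_T.ravel() - A_T.mean()
    b = B_T.ravel() - B_T.mean()
    return (a @ b) / (np.sqrt(a @ a) * np.sqrt(b @ b))

A = np.outer(np.hanning(8), np.hanning(8))  # a toy time-frequency matrix
r_identical = tfm_correlation(A, A)         # identical matrices -> 1.0
noise = 0.5 * np.random.default_rng(0).standard_normal(A.shape)
r_noise = tfm_correlation(A, A + noise)     # noisy copy -> below 1.0
print(round(r_identical, 6), r_noise < 1.0)
```

A high r(A_T, B_T) is the trigger for window enlargement in step 4.4.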
Step 4.4, if the correlation coefficient r(A_T, B_T) of the time-frequency matrices corresponding to the adjacent frames A and B is smaller than the set first threshold, the time-frequency matrices corresponding to the current adjacent frames A and B are the final time-frequency matrices; if r(A_T, B_T) is not smaller than the set first threshold, step 4.5 and step 4.6 are executed.
If the correlation coefficient r(A_T, B_T) of the time-frequency matrices corresponding to the adjacent frames A and B is not smaller than the set first threshold, the correlation between the two frames' time-frequency matrices is high: the two frames have similar energy distributions and harmonic structures at the same frequency points, i.e. similar characteristics in the frequency domain. The Gaussian window can therefore be enlarged so that one window contains more similar voiceprint information.
Step 4.5, obtain the optimal Gaussian window according to the instantaneous frequency variation of the adjacent frames A and B.
Resizing the window based only on the correlation coefficient of the time-frequency matrices may cause information loss, because the correlation coefficient is an indirect quantity computed from the signal spectrum and is relatively insensitive to signal variations. Adjusting the window size according to the instantaneous frequency improves the sensitivity to signal variations.
If the window size of a frame is changed but the corresponding instantaneous frequency changes little, the voiceprint signal has no obvious frequency change or frequency-component drift in the corresponding time period and remains stable there; in this case the window may be enlarged so that one window contains more voiceprint signal with similar characteristics. If, however, the instantaneous frequency changes noticeably while the window is being enlarged, this indicates that the voiceprint signal contains components whose frequency changes rapidly in the corresponding time period, and the enlargement of the window is stopped. The method specifically comprises the following steps:
Step 4.5.1, enlarge the Gaussian window corresponding to one of the adjacent frames A and B by a fixed step p, and reduce the Gaussian window corresponding to the other frame by the same step p;
Step 4.5.2, each time the Gaussian window sizes corresponding to the adjacent frames A and B are enlarged and reduced, calculate the instantaneous frequencies at all times in the enlarged and reduced frames A and B, including:
performing a short-time Fourier transform on the adjacent frames A and B to obtain the spectral representations of frames A and B;

calculating the phase difference Δφ(k) of the adjacent frames A and B:

Δφ(k) = φ_B(k) − φ_A(k)

wherein φ_A(k) and φ_B(k) respectively represent the phases of frame A and frame B at frequency index k in the spectrum;

using the phase difference Δφ(k) to calculate a local frequency estimate f_approx(k):

f_approx(k) = Δφ(k) / (2πΔt)

wherein Δt is the frame shift;
performing linear time interpolation on the local frequency estimates f_approx(k) to obtain the instantaneous frequency at each moment of frame A and frame B.
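The phase-difference estimate of step 4.5.2 can be sketched as follows. The FFT size, sampling rate, test frequency, and phase-wrapping convention are illustrative assumptions, and the linear-interpolation step is omitted:

```python
import numpy as np

def instantaneous_frequency(frame_a, frame_b, dt):
    """Local frequency estimates from the phase difference between the
    spectra of two adjacent frames (step 4.5.2):
    f_approx(k) = delta_phi(k) / (2 * pi * dt)."""
    phi_a = np.angle(np.fft.rfft(frame_a))
    phi_b = np.angle(np.fft.rfft(frame_b))
    # wrap the phase difference into (-pi, pi] to avoid 2*pi jumps
    dphi = np.mod(phi_b - phi_a + np.pi, 2 * np.pi) - np.pi
    return dphi / (2 * np.pi * dt)

fs, N, shift = 8000, 256, 32
f0 = fs * 2 / N                      # 62.5 Hz: exactly on FFT bin k = 2
n = np.arange(N)
a = np.sin(2 * np.pi * f0 * n / fs)
b = np.sin(2 * np.pi * f0 * (n + shift) / fs)  # frame B lags by `shift` samples
f_est = instantaneous_frequency(a, b, dt=shift / fs)
k = int(round(f0 * N / fs))
print(round(f_est[k], 1))            # recovers 62.5 Hz at its bin
```

Note the wrapping: the phase difference is only unambiguous when 2πf·Δt stays below π, which is why a small frame shift helps.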
Step 4.5.3, weight the sums of the instantaneous frequencies at all times in the adjacent frames A and B by the corresponding Gaussian window sizes to obtain a comprehensive instantaneous frequency:

β_ip = (w_A · Σ_t f_A(t) + w_B · Σ_t f_B(t)) / (w_A + w_B)

wherein,

β_ip is the comprehensive instantaneous frequency obtained after the Gaussian windows corresponding to the adjacent frames A and B have been enlarged and reduced by i·p lengths;

w_A and w_B are the Gaussian window sizes of frames A and B after their windows have been enlarged and reduced by i·p lengths, respectively;

f_A(t) and f_B(t) are the instantaneous frequencies at each time t in the adjusted frames A and B;

Σ_t f_A(t) and Σ_t f_B(t) are the sums of the instantaneous frequencies at all times in frames A and B.
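A sketch of the weighting in step 4.5.3, under the assumption that the comprehensive instantaneous frequency is the window-size-weighted average of the per-frame instantaneous-frequency sums:

```python
import numpy as np

def comprehensive_frequency(f_inst_a, f_inst_b, win_a, win_b):
    """Window-size-weighted combination of the per-frame sums of
    instantaneous frequencies (step 4.5.3); the exact weighting and
    normalization are assumptions, not taken from the patent."""
    return (win_a * np.sum(f_inst_a) + win_b * np.sum(f_inst_b)) / (win_a + win_b)

# frame A: 4 samples at 50 Hz, window size 96; frame B: 4 at 60 Hz, size 64
beta = comprehensive_frequency(np.full(4, 50.0), np.full(4, 60.0),
                               win_a=96, win_b=64)
print(beta)  # 216.0: between the per-frame sums 200 and 240, nearer the larger window
```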
Step 4.5.4, stop enlarging and reducing the Gaussian windows corresponding to the adjacent frames A and B when the difference between the comprehensive instantaneous frequencies of adjacent iteration steps is smaller than the set second threshold:

|β_ip − β_(i−1)p| < θ_2 (13)

wherein,

β_ip is the comprehensive instantaneous frequency obtained after the Gaussian windows corresponding to the adjacent frames A and B have been enlarged and reduced by i·p lengths;

β_(i−1)p is the comprehensive instantaneous frequency obtained after the Gaussian windows have been enlarged and reduced by (i−1)·p lengths;

θ_2 is the set second threshold.

The Gaussian window enlarged by (i−1)·p lengths and the Gaussian window reduced by (i−1)·p lengths are taken as the optimal Gaussian windows.
Step 4.6, re-frame the voiceprint signal according to the optimal Gaussian window, and then obtain the final time-frequency matrix from the new adjacent frames using the generalized S-transform.
Step 5, extract the feature vectors from the final time-frequency matrix using principal component analysis, which specifically comprises the following steps:
Step 5.1, standardize each row of the time-frequency matrix so that the mean is 0 and the variance is 1.
Step 5.2, calculating a covariance matrix for the standardized time-frequency matrix;
Step 5.3, obtain the eigenvalues and the corresponding eigenvectors by eigendecomposition of the covariance matrix.
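Steps 5.1-5.3 can be sketched with numpy; the number of retained components and the toy matrix size are illustrative choices:

```python
import numpy as np

def pca_features(tfm, n_components=3):
    """Steps 5.1-5.3: standardize each row of the time-frequency matrix
    to mean 0 and variance 1, compute the covariance matrix, and
    eigendecompose it. Returns the largest eigenvalues and their
    eigenvectors (the principal directions)."""
    z = tfm - tfm.mean(axis=1, keepdims=True)
    z = z / tfm.std(axis=1, keepdims=True)
    cov = np.cov(z, rowvar=False)        # covariance over frequency columns
    vals, vecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]

rng = np.random.default_rng(1)
tfm = rng.standard_normal((64, 16))      # toy 64x16 time-frequency matrix
vals, vecs = pca_features(tfm)
print(vecs.shape, bool(np.all(np.diff(vals) <= 0)))  # (16, 3) True
```

Projecting the standardized matrix onto these eigenvectors yields the reduced-dimension feature vector fed to the network in step 6.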
Step 6, constructing and training a residual neural network, which comprises the following steps:
step 6.1, constructing a neural network architecture, comprising:
input layer: for receiving an input voiceprint feature vector;
The residual module, used for feature extraction, is formed by connecting a plurality of residual blocks in series, and each residual block comprises a main path and a skip connection:

the main path is used for learning the features of the input data and comprises convolution layers and activation functions;

the skip connection is used for passing information across layers: the input voiceprint feature vector bypasses the main path and is added directly to the output;
a pooling layer for downsampling and reducing the dimension of the feature map;
the global average pooling layer is used for converting the output feature map of the last residual error module into a feature vector with fixed length;
The full-connection layer is used for establishing a mapping relation between the feature vector of the global average pooling layer and the final output category;
And the Softmax layer is used for outputting the category probability of the voiceprint feature vector.
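The role of the skip connection can be illustrated with a minimal fully connected sketch. The patent's residual blocks use convolution layers; the dense layers, sizes, and random weights below are illustrative only:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Forward pass of a minimal fully connected residual block:
    out = relu(main_path(x) + x). The skip connection adds the input,
    unchanged, to the main-path output."""
    h = relu(x @ w1)      # main path, layer 1 with activation
    h = h @ w2            # main path, layer 2
    return relu(h + x)    # skip connection, then final activation

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (8,): same size as the input, as the skip addition requires
# With all-zero main-path weights the block reduces to relu(x),
# showing that the skip connection carries the input through untouched:
print(bool(np.allclose(residual_block(x, np.zeros((8, 8)), np.zeros((8, 8))), relu(x))))
```

This identity-passing behavior is what lets gradients flow across many stacked blocks during the back-propagation of step 6.2.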
Step 6.2, input the feature vectors of the switch cabinet voiceprint signals in the training set into the constructed residual neural network for training; adopt the cross-entropy loss function and stochastic gradient descent, and update the parameters of the residual neural network through the back-propagation algorithm.
Step 6.3, evaluate the performance of the trained residual neural network on the validation set and calculate the evaluation indexes, including accuracy, precision, recall, and F1 score; adjust and improve the model according to the evaluation results.
Step 7, denoise the voiceprint of the switch cabinet to be detected, extract its features, and input the extracted feature vector into the residual neural network for fault detection.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.
Claims (10)
1. The method for detecting the voiceprint faults of the switch cabinet based on the feature transformation is characterized by comprising the following steps of:
Collecting an observed voiceprint signal of a switch cabinet, initializing a Gaussian window with random size, and dividing the observed voiceprint signal into a plurality of frames with fixed length through the Gaussian window;
obtaining the time-frequency matrices corresponding to all frames according to the generalized S-transform, and calculating the correlation coefficient r(A_T, B_T) of the time-frequency matrices corresponding to adjacent frames A and B;

if the correlation coefficient r(A_T, B_T) of the time-frequency matrices corresponding to the adjacent frames A and B is smaller than the set first threshold, the time-frequency matrices A_T and B_T corresponding to the current adjacent frames A and B are the final time-frequency matrices;

if the correlation coefficient r(A_T, B_T) of the time-frequency matrices corresponding to the adjacent frames A and B is not smaller than the set first threshold, adjusting the lengths of the adjacent frames A and B by iterating the size of the current Gaussian window, calculating the comprehensive instantaneous frequency of the length-adjusted adjacent frames A and B, stopping the iteration when the difference between the comprehensive instantaneous frequency corresponding to the Gaussian window size of an iteration and the comprehensive instantaneous frequency corresponding to the Gaussian window size of the previous iteration is larger than the set second threshold, obtaining the optimal Gaussian window, and obtaining the final time-frequency matrix based on the optimal window;
and performing dimension reduction processing on the final time-frequency matrix to obtain a feature vector, and inputting the feature vector into a trained residual neural network to perform fault detection of the switch cabinet.
2. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 1, further comprising: noise reduction pretreatment is carried out on the collected observed voiceprint signals:
converting the observation voiceprint signal into a frequency domain by using short-time Fourier transform, and extracting the frequency spectrum of the observation voiceprint signal;
spectrum decomposition is carried out on the frequency spectrum of the observed voiceprint signal:
F = WH (1)

wherein,
F is the spectrum of the observed voiceprint signal;
W is a source spectrum non-negative matrix obtained by decomposing the frequency spectrum of the observation voiceprint signal, and each column of the source spectrum non-negative matrix represents the energy distribution of one source signal on the frequency domain;
H is a non-negative matrix of mixing coefficients obtained after spectral decomposition of the observed voiceprint signal, each row of which represents a period of mixing coefficients for describing the contribution of each source signal in the observed voiceprint signal;
Designing an objective function, and obtaining the target source-spectrum non-negative matrix Ŵ and the target mixing-coefficient non-negative matrix Ĥ by minimizing the objective function with the gradient descent method:

(Ŵ, Ĥ) = argmin_{W,H} ‖F − WH‖² (2)

Wherein,

argmin represents finding the parameters that minimize the objective function;

‖F − WH‖² is the objective function;

Ŵ and Ĥ are, respectively, the source-spectrum non-negative matrix and the mixing-coefficient non-negative matrix that minimize the objective function;
Obtaining the spectrogram of the switch cabinet voiceprint signal after noise separation by multiplying the target source-spectrum non-negative matrix Ŵ with the target mixing-coefficient non-negative matrix Ĥ:

Ŝ = Ŵ Ĥ
And performing inverse short-time Fourier transform on the spectrogram of the noise-separated switch cabinet voiceprint signal, and converting the spectrogram into a time domain signal to obtain an observed voiceprint time domain signal after noise reduction.
3. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 1, wherein dividing the observed voiceprint signal into a plurality of fixed-length frames by moving the initialized Gaussian window of random size comprises:
Manually setting parameter values of frame shifts, wherein the frame shifts are used for determining time intervals between adjacent frames A and B;
according to the parameter value of frame shift, shifting an initialized random-sized Gaussian window along an observed voiceprint signal, and multiplying the initialized random-sized Gaussian window with the observed voiceprint signal at a corresponding position to obtain a plurality of frames with fixed lengths:
x(n)=s(n)*g(n) (3)
where s (n) is the observed voiceprint signal, g (n) is a gaussian window, and x (n) is the obtained frame.
4. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 1, wherein the step of obtaining the time-frequency matrix corresponding to all frames according to the generalized S transformation comprises the following steps:
Each frame is transformed using the generalized S-transform:

GST_k(t, f) = ∫ x_k(u) g(t − u) e^(−i2πfu) du

wherein,

GST_k(t, f) is the energy value obtained by the generalized S-transform from the observed voiceprint signal of the k-th frame at the given time t and frequency f;

x_k(u) is the observed voiceprint signal of the k-th frame, and u is the integration variable;

g(t, u) is the Gaussian window function, and σ is the standard deviation of the Gaussian window;

e^(−i2πfu) is the frequency modulation term, i is the imaginary unit, and π is the circular constant;
And integrating all energy values of each frame to form a time-frequency matrix of each frame, wherein the rows of the time-frequency matrix represent time, the columns represent frequency, and each element of the time-frequency matrix is an energy value at the corresponding time t and frequency f.
5. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 1, wherein calculating the correlation coefficient r(A_T, B_T) of the time-frequency matrices corresponding to adjacent frames A and B comprises:

r(A_T, B_T) = cov(A_T, B_T) / (std(A_T) · std(B_T))

wherein,

r(A_T, B_T) is the correlation coefficient of the time-frequency matrices corresponding to the adjacent frames A and B;

A_T and B_T are the time-frequency matrices corresponding to frame A and frame B;

cov(A_T, B_T) is the covariance of the time-frequency matrices A_T and B_T;

std(A_T) and std(B_T) are the standard deviations of the time-frequency matrices A_T and B_T, respectively.
6. The method for detecting a voiceprint fault of a switchgear based on feature transformation according to claim 1, wherein said adjusting the lengths of adjacent frames a and B by iterating the size of the current gaussian window comprises:
iterating the current Gaussian window size by a step length p, and expanding the Gaussian window size corresponding to the frame A in the adjacent frames A and B by p lengths and shrinking the Gaussian window size corresponding to the frame B by p lengths in each iteration;
in each iteration, the Gaussian window with changed size is multiplied by the observed voiceprint signal at the corresponding position, and the adjacent frames A and B with adjusted lengths are obtained.
7. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 1, wherein calculating the comprehensive instantaneous frequency of the adjacent frames A and B after the length adjustment comprises:

β_ip = (w_A · Σ_t f_A(t) + w_B · Σ_t f_B(t)) / (w_A + w_B)

wherein,

β_ip is the comprehensive instantaneous frequency obtained after the Gaussian window corresponding to frame A is enlarged by i·p lengths and the Gaussian window corresponding to frame B is reduced by i·p lengths;

w_A is the size of the Gaussian window corresponding to frame A after being enlarged by i·p lengths, and w_B is the size of the Gaussian window corresponding to frame B after being reduced by i·p lengths;

f_A(t) and f_B(t) are the instantaneous frequencies at each time t in the length-adjusted frames A and B, respectively.
8. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 7, wherein calculating the instantaneous frequency comprises:

performing a short-time Fourier transform on the adjacent frames A and B to obtain the spectral representations of frames A and B;

calculating the phase difference Δφ(k) of the adjacent frames A and B:

Δφ(k) = φ_B(k) − φ_A(k)

wherein φ_A(k) and φ_B(k) respectively represent the phases of frame A and frame B at frequency index k in the spectrum;

using the phase difference Δφ(k) to calculate a local frequency estimate f_approx(k):

f_approx(k) = Δφ(k) / (2πΔt)

wherein Δt is the frame shift;

performing linear time interpolation on the local frequency estimates f_approx(k) to obtain the instantaneous frequency at each moment of frame A and frame B.
9. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 1, wherein stopping the iteration to obtain the optimal Gaussian window when the difference between the comprehensive instantaneous frequency corresponding to the Gaussian window size of an iteration and the comprehensive instantaneous frequency corresponding to the Gaussian window size of the previous iteration is greater than the set second threshold comprises:
Determining an iteration stop condition:
|β_ip − β_(i−1)p| > θ_2 (10)

wherein,

β_ip is the comprehensive instantaneous frequency obtained after the Gaussian window corresponding to frame A is enlarged by i·p lengths and the Gaussian window corresponding to frame B is reduced by i·p lengths;

β_(i−1)p is the comprehensive instantaneous frequency obtained after the Gaussian window corresponding to frame A is enlarged by (i−1)·p lengths and the Gaussian window corresponding to frame B is reduced by (i−1)·p lengths;

θ_2 is the set second threshold;

the Gaussian window enlarged by (i−1)·p lengths and the Gaussian window reduced by (i−1)·p lengths are taken as the optimal Gaussian windows.
10. The method for detecting the voiceprint fault of the switch cabinet based on the feature transformation according to claim 1, wherein the step of performing dimension reduction on the final time-frequency matrix to obtain the feature vector comprises the following steps:
Carrying out standardization processing on each row of the time-frequency matrix to ensure that the mean value is 0 and the variance is 1;
Calculating a covariance matrix for the normalized time-frequency matrix;
and obtaining the eigenvalue and the corresponding eigenvector by carrying out eigenvalue decomposition on the covariance matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311292303.0A CN117373484B (en) | 2023-10-08 | 2023-10-08 | A switch cabinet voiceprint fault detection method based on feature transformation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117373484A CN117373484A (en) | 2024-01-09 |
CN117373484B true CN117373484B (en) | 2024-11-15 |
Family
ID=89397456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311292303.0A Active CN117373484B (en) | 2023-10-08 | 2023-10-08 | A switch cabinet voiceprint fault detection method based on feature transformation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117373484B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN120011795B (en) * | 2025-04-21 | 2025-07-04 | 中国人民解放军国防科技大学 | Differential spectrum driven short-time Fourier window length dynamic adjustment method, device and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111487046A (en) * | 2020-02-27 | 2020-08-04 | 广西电网有限责任公司电力科学研究院 | A fault diagnosis method based on the fusion of circuit breaker voiceprint and vibration entropy features |
CN112036296A (en) * | 2020-08-28 | 2020-12-04 | 合肥工业大学 | Motor bearing fault diagnosis method based on generalized S transformation and WOA-SVM |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5224950B2 (en) * | 2007-08-28 | 2013-07-03 | 本田技研工業株式会社 | Signal processing device |
JP5718126B2 (en) * | 2011-03-31 | 2015-05-13 | 沖電気工業株式会社 | Fine vibration feature value calculation apparatus, fine vibration feature value calculation method, and program |
CN112289329A (en) * | 2020-10-22 | 2021-01-29 | 国网青海省电力公司海西供电公司 | A fault diagnosis method of high voltage circuit breaker based on GWO-KFCM |
CN112259088B (en) * | 2020-10-28 | 2024-05-17 | 瑞声新能源发展(常州)有限公司科教城分公司 | Audio accent recognition method, device, equipment and medium |
EP4281801A1 (en) * | 2021-01-22 | 2023-11-29 | Mayo Foundation for Medical Education and Research | Shear wave phase velocity estimation with extended bandwidth using generalized stockwell transform and slant frequency wavenumber analysis |
CN116720100A (en) * | 2023-05-15 | 2023-09-08 | 国网宁夏电力有限公司超高压公司 | A method for fault diagnosis of converter transformer |
2023-10-08: CN CN202311292303.0A patent/CN117373484B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN117373484A (en) | 2024-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11409933B2 (en) | Method for diagnosing analog circuit fault based on cross wavelet features | |
CN110792563B (en) | Wind turbine generator blade fault audio monitoring method based on convolution generation countermeasure network | |
CN103558529B (en) | A kind of mode identification method of three-phase cartridge type supertension GIS partial discharge altogether | |
CN110543860B (en) | Mechanical fault diagnosis method and system based on TJM transfer learning | |
Millioz et al. | Circularity of the STFT and spectral kurtosis for time-frequency segmentation in Gaussian environment | |
CN103559888A (en) | Speech enhancement method based on non-negative low-rank and sparse matrix decomposition principle | |
KR20090078075A (en) | Fault diagnosis method of induction motor using DFT and wavelet | |
Gargoom et al. | Investigation of effective automatic recognition systems of power-quality events | |
CN111044814A (en) | Method and system for identifying transformer direct-current magnetic bias abnormality | |
Ma et al. | A novel blind source separation method for single-channel signal | |
CN117373484B (en) | A switch cabinet voiceprint fault detection method based on feature transformation | |
CN101299055A (en) | Simulation integrated switch current circuit testing method based on wavelet-neural net | |
CN111025100A (en) | Transformer UHF Partial Discharge Signal Pattern Recognition Method and Device | |
CN106548013A (en) | Using the voltage sag source identification method for improving incomplete S-transformation | |
de Oliveira et al. | Second order blind identification algorithm with exact model order estimation for harmonic and interharmonic decomposition with reduced complexity | |
CN118896684A (en) | A noise matching and separation method for substations | |
CN116153329A (en) | CWT-LBP-based sound signal time-frequency texture feature extraction method | |
CN112697270A (en) | Fault detection method and device, unmanned equipment and storage medium | |
CN109658944B (en) | Helicopter acoustic signal enhancement method and device | |
Guo et al. | Order-crossing removal in Gabor order tracking by independent component analysis | |
CN114184838B (en) | Power system harmonic detection method, system and medium based on SN interconvolution window | |
CN108776801A (en) | It is a kind of based on owing to determine the analog circuit fault features extracting method of blind source separating | |
CN114674410A (en) | A Time-varying Component Number Time-varying Instantaneous Frequency Estimation Method for Underwater Acoustic Signals | |
CN113657208A (en) | Vibration signal underdetermined blind source separation method for solving unknown source number of wind power tower | |
CN109034216A (en) | Electrical energy power quality disturbance analysis method based on WT and SVM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||