CN118155633A - Fault detection method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN118155633A (application number CN202410369294.9A)
- Authority
- CN
- China
- Prior art keywords
- fault
- fault detection
- data
- characteristic data
- sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M99/00—Subject matter not provided for in other groups of this subclass
- G01M99/002—Thermal testing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/04—Training, enrolment or model building
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/18—Artificial neural networks; Connectionist approaches
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0232—Processing in the frequency domain
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/24—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
Abstract
The present application relates to a fault detection method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring voiceprint time-series data of a device to be detected; extracting sound features from the voiceprint time-series data to obtain sound feature data; and taking the sound feature data as input, invoking a trained fault detection model to perform fault detection and obtain a fault detection result. The trained fault detection model is obtained by training an initial fault detection model on historical sound feature data of the device to be detected, where the historical sound feature data carries fault detection result labels and the initial fault detection model is constructed from a convolutional neural network and a bidirectional long short-term memory network. With this method, the accuracy of the fault detection result can be improved.
Description
Technical Field
The present application relates to the field of power system fault identification technology, and in particular, to a fault detection method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of power systems, high-voltage direct-current (HVDC) transmission has become a key part of modern grids, and the converter valve, cooled by a valve cooling system, is the core equipment that realizes HVDC transmission. However, the converter valve generates a great deal of heat during rectification and inversion, and this heat can impair its normal operation, so faults in the converter valve need to be detected in a timely manner.
Traditional fault detection schemes rely mainly on the experience of on-site operation and maintenance personnel and factory specialists to diagnose converter valve faults, or use fault detection methods such as vibration analysis, infrared imaging analysis and oil analysis to troubleshoot the valve cooling system.
However, these schemes are easily affected by human or environmental factors. Vibration analysis, for example, is sensitive to the installation condition of the converter valve itself, equipment noise and environmental noise, so the accuracy of the resulting fault detection result is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a fault detection method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve the accuracy of the fault detection result.
In a first aspect, the present application provides a fault detection method. The method comprises the following steps:
acquiring voiceprint time-series data of a device to be detected;
extracting sound features from the voiceprint time-series data to obtain sound feature data;
taking the sound feature data as input, invoking a trained fault detection model to perform fault detection and obtain a fault detection result;
wherein the trained fault detection model is obtained by training an initial fault detection model on historical sound feature data of the device to be detected, the historical sound feature data carries fault detection result labels, and the initial fault detection model is constructed from a convolutional neural network and a bidirectional long short-term memory network.
In one embodiment, invoking the trained fault detection model to perform fault detection and obtain a fault detection result includes:
performing spatial feature extraction on the sound feature data through the convolutional neural network to obtain fault spatial feature data, and performing temporal feature extraction on the sound feature data through the bidirectional long short-term memory network to obtain fault temporal feature data;
and determining a fault detection result from the fault spatial feature data and the fault temporal feature data.
In one embodiment, invoking the trained fault detection model to perform fault detection and obtain a fault detection result includes:
performing spatial feature extraction on the sound feature data through the convolutional neural network to obtain fault spatial feature data;
performing temporal feature extraction on the fault spatial feature data through the bidirectional long short-term memory network to obtain fault feature data;
and determining a fault detection result from the fault feature data.
In one embodiment, the bidirectional long short-term memory network includes a forward propagation layer and a backward propagation layer;
performing temporal feature extraction on the fault spatial feature data through the bidirectional long short-term memory network to obtain the fault feature data includes:
performing forward temporal feature extraction on the fault spatial feature data through the forward propagation layer to obtain a forward hidden state of the fault spatial feature data;
performing backward temporal feature extraction on the fault spatial feature data through the backward propagation layer to obtain a backward hidden state of the fault spatial feature data;
and determining the fault feature data from the forward hidden state and the backward hidden state.
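The forward/backward hidden-state idea above can be sketched minimally in NumPy. A plain tanh recurrent cell stands in for a full LSTM here, and all shapes and weights are illustrative, not taken from the patent:

```python
import numpy as np

def simple_rnn_pass(x, W_in, W_h, h0):
    """Run a plain tanh recurrent cell over a sequence.

    x: (T, d_in) sequence of fault spatial feature vectors.
    Returns the (T, d_h) hidden states.
    """
    h = h0
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_in + h @ W_h)
        states.append(h)
    return np.stack(states)

def bidirectional_features(x, W_in, W_h):
    d_h = W_h.shape[0]
    h0 = np.zeros(d_h)
    fwd = simple_rnn_pass(x, W_in, W_h, h0)              # forward hidden states
    bwd = simple_rnn_pass(x[::-1], W_in, W_h, h0)[::-1]  # backward pass, re-aligned to time order
    # Concatenate per time step: each frame now carries both past and future context.
    return np.concatenate([fwd, bwd], axis=-1)

rng = np.random.default_rng(0)
T, d_in, d_h = 10, 8, 4
x = rng.normal(size=(T, d_in))
W_in = rng.normal(size=(d_in, d_h)) * 0.1
W_h = rng.normal(size=(d_h, d_h)) * 0.1
feats = bidirectional_features(x, W_in, W_h)
print(feats.shape)  # (T, 2 * d_h): forward and backward states concatenated
```

A real bidirectional LSTM uses separate weights for each direction and gated memory cells, but the structural point, running the sequence both ways and fusing the two hidden-state streams, is the same.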
In one embodiment, the fault detection model includes a fully connected layer;
determining a fault detection result from the fault feature data includes:
performing fault classification on the fault feature data through the fully connected layer to obtain the fault type of the device to be detected, the fault detection result comprising the fault type.
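A fully connected classification head of this kind reduces to a single affine map plus softmax. The sketch below is illustrative: the fault-type names, feature dimension and weights are invented for demonstration, not specified by the patent:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_fault(features, W, b, fault_types):
    """Map a fused fault feature vector to a fault-type label
    via one fully connected layer followed by softmax."""
    probs = softmax(features @ W + b)
    return fault_types[int(np.argmax(probs))], probs

rng = np.random.default_rng(1)
d = 8
fault_types = ["normal", "coolant_pump_fault", "fan_fault"]  # hypothetical labels
W = rng.normal(size=(d, len(fault_types)))
b = np.zeros(len(fault_types))

label, probs = classify_fault(rng.normal(size=d), W, b, fault_types)
print(label)
```

In the trained model, `W` and `b` would be the learned fully-connected-layer parameters, and the argmax over the softmax output selects the reported fault type.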
In one embodiment, extracting the sound features from the voiceprint time-series data to obtain sound feature data includes:
performing a Fourier transform on the voiceprint time-series data to obtain first feature data of the voiceprint time-series data on a linear spectrum;
performing mel filtering on the first feature data to obtain second feature data of the voiceprint time-series data on the mel spectrum;
and performing a logarithmic transformation and a discrete cosine transform on the second feature data to determine the sound feature data.
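These three steps (linear spectrum → mel filtering → log + DCT) are the classic MFCC pipeline. The NumPy sketch below follows that recipe for a single frame; filter counts, coefficient counts and the synthetic test tone are illustrative choices, not values from the patent:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def dct_ii(x, n_out):
    # Plain DCT-II: one cosine basis row per cepstral coefficient.
    n = x.shape[-1]
    k = np.arange(n)
    basis = np.cos(np.pi * np.outer(np.arange(n_out), 2 * k + 1) / (2 * n))
    return x @ basis.T

def mfcc(frame, sr, n_filters=26, n_coeffs=13):
    spectrum = np.abs(np.fft.rfft(frame)) ** 2                           # step 1: linear spectrum
    mel_energies = mel_filterbank(n_filters, len(frame), sr) @ spectrum  # step 2: mel filtering
    return dct_ii(np.log(mel_energies + 1e-10), n_coeffs)               # step 3: log + DCT

sr = 48000
frame = np.sin(2 * np.pi * 100 * np.arange(2048) / sr)  # synthetic 100 Hz tone
coeffs = mfcc(frame, sr)
print(coeffs.shape)  # (n_coeffs,)
```

The resulting coefficient vector per frame is the kind of compact, noise-reduced sound feature data the model consumes.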
In a second aspect, the application further provides a fault detection device. The device comprises:
a data acquisition module, configured to acquire voiceprint time-series data of a device to be detected;
a feature extraction module, configured to extract sound features from the voiceprint time-series data to obtain sound feature data;
a fault detection module, configured to take the sound feature data as input and invoke a trained fault detection model to perform fault detection and obtain a fault detection result;
wherein the trained fault detection model is obtained by training an initial fault detection model on historical sound feature data of the device to be detected, the historical sound feature data carries fault detection result labels, and the initial fault detection model is constructed from a convolutional neural network and a bidirectional long short-term memory network.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the above described fault detection method embodiments when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the above-described fault detection method embodiments.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the above-described fault detection method embodiments.
In the above fault detection method, apparatus, computer device, storage medium and computer program product, an initial fault detection model is constructed in advance from a convolutional neural network and a bidirectional long short-term memory network. By combining the strengths of the two, the model can process both the local spatial features and the temporal features of the sound feature data efficiently and accurately, so that once the initial model is trained and applied to a fault detection task, it yields a highly accurate fault detection result. In addition, extracting sound features from the voiceprint time-series data of the device to be detected reduces the noise signals in that data and produces high-quality sound feature data; using these high-quality features as the input of the trained fault detection model further improves the accuracy of the fault detection result.
Drawings
FIG. 1 is a diagram of an application environment for a fault detection method in one embodiment;
FIG. 2 is a flow chart of a fault detection method in one embodiment;
FIG. 3 is a flow chart of a fault detection method according to another embodiment;
FIG. 4 is a flow diagram of determining fault signature data in one embodiment;
FIG. 5 is a flow chart of determining sound characteristic data in one embodiment;
FIG. 6 is a flow chart illustrating determining fault signature data in one embodiment;
FIG. 7 is a model structure of a fault detection model in one embodiment;
FIG. 8 is a block diagram of a fault detection device in one embodiment;
FIG. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The fault detection method provided by the embodiments of the present application can be applied to the application environment shown in FIG. 1, in which the data collection terminal 102 communicates with the server 104 through a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server.
Specifically, the data collection terminal 102 collects voiceprint time-series data of the device 106 to be detected, and a worker uploads the data to the server 104, which performs sound feature extraction on it to obtain sound feature data. A trained fault detection model may be stored in advance in the data storage system of the server 104; the server 104 invokes this model to perform fault detection on the sound feature data and obtain a fault detection result. The trained fault detection model is obtained by pre-training an initial fault detection model on historical sound feature data of the device to be detected, where the historical sound feature data carries fault detection result labels and the initial fault detection model is constructed from a convolutional neural network and a bidirectional long short-term memory network.
The data collection terminal 102 may be, but is not limited to, any of various voiceprint detection devices. The server 104 may be implemented as a stand-alone server or as a cluster of multiple servers.
In one embodiment, as shown in FIG. 2, a fault detection method is provided. The method is described as applied to the server 104 in FIG. 1 by way of illustration, and includes the following steps:
s200, voiceprint time sequence data of equipment to be detected are obtained.
Here, the device to be detected may be a valve cooling device, and the voiceprint time-series data may be the sound signal recorded by a detection device, such as a microphone or a sensor, while the valve cooling device operates. The sound signal is a time-series signal, and the voiceprint time-series data includes its duration, frequency, amplitude and so on.
For example, the valve cooling device emits an acoustic signal during operation, and a shotgun condenser microphone may be used to record this signal and obtain the voiceprint time-series data. The microphone's sampling rate may be set to 48,000 Hz with a bit depth of 24 bits, and its frequency response covers 20 to 20,000 Hz.
S400: extract sound features from the voiceprint time-series data to obtain sound feature data.
Here, sound feature extraction distils the informative sound features from the voiceprint time-series data while reducing its noise components, yielding high-quality sound feature data.
In addition, the voiceprint time-series data may first undergo a series of preprocessing steps, such as signal framing and windowing. When framing the signal, the frame length must be chosen first: a frame that is too short may fail to capture important features of the sound signal, while a frame that is too long may blur the captured features as the signal changes within it. In this embodiment, because the noise signature of the valve cooling device is relatively stable, a longer frame can be used to improve the accuracy of the sound feature data, for example a frame length of 500 milliseconds. A frame overlap rate of 40% may also be set, meaning adjacent frames share 40% of their samples, which improves the continuity and smoothness of the framed voiceprint time-series data. After framing, each frame can be windowed, for example with a Hamming window, which reduces edge effects and spectral leakage in the time domain and increases the continuity at both ends of each frame.
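The framing-and-windowing preprocessing just described can be sketched in a few lines of NumPy. The 500 ms frame length and 40% overlap follow the values above; the synthetic test signal is illustrative:

```python
import numpy as np

def frame_signal(signal, sr, frame_ms=500, overlap=0.4):
    """Split a 1-D signal into overlapping, Hamming-windowed frames.

    frame_ms=500 and overlap=0.4 mirror the design choices discussed above;
    both are tunable parameters, not fixed constants.
    """
    frame_len = int(sr * frame_ms / 1000)
    hop = int(frame_len * (1 - overlap))       # 40% overlap -> 60% hop
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    window = np.hamming(frame_len)             # tapers frame edges to curb spectral leakage
    return np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])

sr = 48000
signal = np.sin(2 * np.pi * 100 * np.arange(sr * 3) / sr)  # 3 s synthetic 100 Hz tone
frames = frame_signal(signal, sr)
print(frames.shape)  # (n_frames, frame_len)
```

Each windowed frame is then passed to the spectral feature-extraction stage described next.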
After this preprocessing, higher-quality voiceprint time-series data is obtained, on which sound feature processing can then be performed, for example extracting mel-frequency cepstral coefficients, spectral features or the voiceprint envelope, to obtain the sound feature data.
S600: take the sound feature data as input and invoke the trained fault detection model to perform fault detection and obtain a fault detection result.
Here, the bidirectional long short-term memory network is a variant of the recurrent neural network equipped with memory cells and gating mechanisms. Compared with a conventional unidirectional LSTM, it better captures long-term dependencies in the sound feature data and therefore yields more accurate fault detection results. The trained fault detection model is obtained by training an initial fault detection model on historical sound feature data of the device to be detected, where the historical sound feature data carries fault detection result labels and the initial model is constructed from a convolutional neural network and a bidirectional LSTM. Specifically, the historical sound feature data may be divided into a training set and a validation set; the training samples and their fault detection result labels are fed into the initial model so that it learns the association between the two. During training, the error between the model output and the actual fault detection result label is computed to form a loss function, and the model parameters are adjusted continually while this loss is minimized. After a certain amount of training, the model's performance can be evaluated on the validation set and its hyperparameters adjusted according to the results. Training iterates until a predetermined stopping condition is reached, for example a set number of training epochs, at which point the trained fault detection model is obtained.
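The train/validate loop described above can be sketched in NumPy. A tiny logistic-regression classifier stands in for the CNN + BiLSTM model, the data is synthetic, and the fixed epoch count plays the role of the predetermined stopping condition:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "historical sound feature data" with binary fault labels.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)

# Train/validation split, as in the description above.
X_tr, y_tr = X[:160], y[:160]
X_va, y_va = X[160:], y[160:]

w = np.zeros(8)
lr = 0.5
for epoch in range(200):                    # stopping condition: fixed epoch count
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w)))   # model output (fault probability)
    grad = X_tr.T @ (p - y_tr) / len(y_tr)  # gradient of the cross-entropy loss
    w -= lr * grad                          # adjust model parameters

# Evaluate on the held-out validation set.
p_va = 1.0 / (1.0 + np.exp(-(X_va @ w)))
val_acc = float(np.mean((p_va > 0.5) == y_va))
print(round(val_acc, 2))
```

The real model would replace the sigmoid with the CNN + BiLSTM forward pass and use a framework's optimizer, but the loop structure (compute loss, update parameters, validate, stop) is the same.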
It should be noted that analysis of historical voiceprint time-series data and historical sound feature data shows that the sound signals produced under normal operation and under fault conditions of the valve cooling device differ markedly in spectrum. For example, under normal operation, the spectrum of the voiceprint time-series data is concentrated mainly in the 0-1000 Hz range, with a prominent peak near 100 Hz; when the coolant pump of the valve cooling device fails, the spectrum is still concentrated mainly in the 0-1000 Hz range but shows prominent peaks near 20 Hz, 820 Hz and 1920 Hz. After sound feature extraction, the sound feature data obtained in the two cases also differs greatly, so whether the valve cooling device has failed can be judged from the sound feature data.
Specifically, when the sound feature data is input into the trained fault detection model, the model has learned the association between sound feature data and the fault types of the valve cooling device, so its output can be a fault-type label corresponding to the input, indicating whether the valve cooling device has failed and, if so, which type of fault has occurred.
In the above fault detection method, the initial fault detection model is constructed in advance from a convolutional neural network and a bidirectional long short-term memory network. Combining the strengths of both, the model processes the local spatial features and the temporal features of the sound feature data efficiently and accurately, so that after training it yields highly accurate results when applied to fault detection tasks. In addition, extracting sound features from the voiceprint time-series data of the device to be detected reduces the noise signals in that data, producing high-quality sound feature data whose use as model input further improves the accuracy of the fault detection result.
In one embodiment, S600 includes: performing spatial feature extraction on the sound feature data through the convolutional neural network to obtain fault spatial feature data; performing temporal feature extraction on the sound feature data through the bidirectional long short-term memory network to obtain fault temporal feature data; and determining a fault detection result from the fault spatial feature data and the fault temporal feature data.
Here, spatial feature extraction extracts the local spatial representation of the sound feature data to obtain the fault spatial feature data. For example, the sound feature data may describe the energy distribution of different frequencies in the voiceprint time-series data, represented as a spectrogram; the fault spatial feature data extracted by the convolutional neural network can then be local patterns in that spectrogram, such as frequency edges and frequency modes. Temporal feature extraction extracts the temporal representation of the sound feature data to obtain the fault temporal feature data, for example the time-series information, temporal dependencies and temporal patterns of the sound feature data.
The fault detection result may include a fault type, for example that the cooling medium pump has failed, and a fault point, for example the location of the failure within the cooling medium pump. Further, the fault detection result may include a fault detection report: after the fault type and fault point are acquired, the working state and the fault cause of the device to be detected can be analyzed further by tracking its current and historical working state data, and a fault detection report can be generated accordingly. The report may cover the fault point, fault type, fault degree, fault cause, and corresponding countermeasures. In addition, the device to be detected can be controlled to stop when the fault detection result indicates that the fault degree exceeds a fault threshold value.
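The threshold check at the end of this paragraph can be sketched in a few lines. This is a hypothetical illustration: the field name, the threshold value, and the returned action strings are assumptions for this sketch, not details from the application.

```python
# Hypothetical sketch of the fault-degree threshold check described above.
# The field name "fault_degree", the 0.8 threshold, and the action strings
# are illustrative assumptions.
FAULT_THRESHOLD = 0.8  # assumed normalized fault-degree threshold


def handle_detection_result(result: dict) -> str:
    """Return the control action implied by a fault detection result."""
    if result.get("fault_degree", 0.0) > FAULT_THRESHOLD:
        # A real system would issue a stop command to the device here.
        return "stop_device"
    return "continue_monitoring"
```

A report generator would branch on the same comparison before assembling the fault point, type, degree, cause, and countermeasures.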
In this embodiment, an inter-layer parallel approach may be adopted when deploying the fault detection model to improve data processing efficiency. For example, the computing tasks of the convolutional neural network and the bidirectional long short-term memory network may be deployed on different computing devices to increase overall computing speed.
Specifically, spatial feature extraction with the convolutional neural network involves a large amount of matrix computation, and a GPU (Graphics Processing Unit) excels at such tasks, so the convolutional neural network can be deployed on a GPU. Time feature extraction with the bidirectional long short-term memory network requires processing time sequence data and extracting sequence features, so it can be deployed on a CPU (Central Processing Unit). Through such parallel processing, the convolutional neural network and the bidirectional long short-term memory network can perform spatial feature extraction and time feature extraction simultaneously to obtain the fault spatial feature data and the fault time feature data, and the fault detection model can then fuse and analyze the two features to determine the fault detection result.
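The inter-layer parallelism described above can be sketched with two concurrent workers operating on the same input. The two extractor functions below are numpy stand-ins for the CNN and BiLSTM branches (which in practice would run on a GPU and a CPU respectively); their bodies are assumptions for illustration only.

```python
# Sketch of inter-layer parallelism: the spatial and temporal feature
# extractors run concurrently on the same sound feature data. The extractor
# bodies are placeholder numpy computations, not the real networks.
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def spatial_features(x: np.ndarray) -> np.ndarray:
    # Placeholder for the CNN branch (would run on the GPU).
    return x.max(axis=0)


def temporal_features(x: np.ndarray) -> np.ndarray:
    # Placeholder for the BiLSTM branch (would run on the CPU).
    return x.mean(axis=1)


def extract_parallel(x: np.ndarray):
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_spatial = pool.submit(spatial_features, x)
        f_temporal = pool.submit(temporal_features, x)
        return f_spatial.result(), f_temporal.result()


sound_features = np.ones((4, 8))  # dummy (time, frequency) feature matrix
spatial, temporal = extract_parallel(sound_features)
```

The fusion step would then combine `spatial` and `temporal` before classification.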
In this embodiment, by performing spatial feature extraction and temporal feature extraction on the sound feature data simultaneously in a parallel computing manner, the processing speed of the fault detection model can be increased, and the fault detection result can be obtained efficiently.
In one embodiment, as shown in fig. 3, S600 includes:
S620, spatial feature extraction is carried out on the sound feature data through a convolutional neural network, and fault spatial feature data are obtained.
S640, performing time feature extraction on the fault space feature data through the two-way long-short-term memory network to obtain the fault feature data.
S660, determining a fault detection result according to the fault characteristic data.
In this embodiment, a network architecture of the fault detection model is designed in which the sound feature data first undergoes spatial feature extraction through the convolutional neural network to obtain fault spatial feature data. Specifically, a series of convolution and pooling operations can be applied to the sound feature data through the convolutional neural network, effectively capturing local spatial features such as frequency variations and patterns in the spectrogram and producing high-dimensional fault spatial feature data.
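A minimal numpy sketch of one convolution + pooling pass on a toy spectrogram follows. The single hand-written kernel is an assumption for illustration; a real convolutional layer learns many such kernels.

```python
# Minimal sketch of the convolution + pooling step described above, applied
# to a toy "spectrogram". The kernel values and sizes are illustrative.
import numpy as np


def conv2d_valid(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """'Valid' 2-D cross-correlation, the core of a convolutional layer."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out


def max_pool2d(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))


spectrogram = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])  # crude frequency-edge detector
features = max_pool2d(np.maximum(conv2d_valid(spectrogram, edge_kernel), 0.0))
```

Stacking several such passes yields the high-dimensional fault spatial feature data mentioned above.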
Then, time feature extraction is performed on the fault spatial feature data through the bidirectional long short-term memory network to obtain the fault feature data. Specifically, the bidirectional long short-term memory network can effectively capture the temporal dynamics in the fault spatial feature data, and through its memory units and gating mechanism it can capture long-term dependencies in that data, so that fault feature data containing both spatial and temporal features is obtained.
Finally, according to the extracted fault feature data, the fault detection model can identify the corresponding fault type as the fault detection result. Specifically, fault classification may be performed with a classifier such as a support vector machine or logistic regression, or in combination with threshold-based decisions, to obtain the final fault detection result. A multi-layer convolutional neural network may also be deployed in the fault detection model; taking a two-layer network as an example, the sound feature data is input into the first convolutional layer for convolution and pooling, and the pooled result is then used as the input of the second convolutional layer for another round of convolution and pooling. With this multi-layer deployment, fault spatial feature data of higher accuracy can be extracted, further improving the accuracy of the fault detection result.
In this embodiment, the network architecture of the fault detection model is a convolutional neural network connected to a bidirectional long short-term memory network, where the output of the convolutional neural network is used as the input of the bidirectional network. Feature extraction is performed on the sound feature data sequentially through this architecture, finally yielding fault feature data of high accuracy and strong robustness, which further improves the accuracy of the fault detection result.
In one embodiment, the two-way long-short-term memory network includes a forward propagation layer and a reverse propagation layer, as shown in fig. 4, S640 includes:
S642, performing forward time feature extraction on the fault space feature data through the forward propagation layer to obtain a forward hidden state of the fault space feature data.
S644, performing reverse time feature extraction on the fault space feature data through the reverse propagation layer to obtain a reverse hidden state of the fault space feature data.
S646, determining the fault feature data according to the forward hidden state and the reverse hidden state.
In the bidirectional long short-term memory network, a hidden state is produced at each time step. The forward hidden state is computed by processing the time sequence data from its beginning up to the current time step, and the reverse hidden state by processing it from its end back to the current time step.
Specifically, in the forward propagation layer of the bidirectional long short-term memory network, processing of the input data starts from the first time step of the time sequence data and proceeds forward until the current time step, with each time step producing a forward hidden state. Similarly, in the reverse propagation layer, processing starts from the last time step and proceeds backward until the current time step, likewise producing a reverse hidden state at each step. Both the forward and reverse hidden states contain the features of the input fault space feature data in the time dimension, so the fault feature data can be determined from the two.
In this embodiment, the bidirectional long short-term memory network combines the forward propagation layer and the reverse propagation layer, so that the temporal representation of the fault space feature data can be understood more comprehensively, long-term dependencies within it can be captured, and fault feature data of high accuracy can be extracted.
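The forward/backward passes can be sketched with a toy numpy LSTM cell run over the sequence in both directions. The random weights are stand-ins, and sharing one cell for both directions is a simplification of this sketch (a real BiLSTM uses separate parameters per direction).

```python
# Toy sketch of the bidirectional pass described above: one LSTM cell is
# run forward and backward over the sequence, and the two hidden states at
# each time step are concatenated. Weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
H, D = 4, 3                                     # hidden size, input size
W = rng.standard_normal((4 * H, D + H)) * 0.1   # gate weights for [x; h]
b = np.zeros(4 * H)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = z[:H], z[H:2 * H], z[2 * H:3 * H], z[3 * H:]
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated memory update
    h = sigmoid(o) * np.tanh(c)
    return h, c


def run_direction(seq):
    h, c = np.zeros(H), np.zeros(H)
    states = []
    for x in seq:
        h, c = lstm_step(x, h, c)
        states.append(h)
    return states


def bilstm(seq):
    fwd = run_direction(seq)               # forward hidden states
    bwd = run_direction(seq[::-1])[::-1]   # reverse hidden states, re-aligned
    return [np.concatenate([f, r]) for f, r in zip(fwd, bwd)]


seq = [rng.standard_normal(D) for _ in range(5)]
features = bilstm(seq)                     # 5 steps, each of size 2 * H
```

Each output step thus carries both past and future context of the input sequence.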
In one embodiment, the fault detection model includes a fully connected layer, and S660 includes: and carrying out fault classification on the fault characteristic data through the full connection layer to obtain the fault type of the equipment to be detected, wherein the fault detection result comprises the fault type.
In this embodiment, when different fault types occur in the valve cooling equipment, the collected voiceprint time sequence data, and therefore the fault feature data obtained by subsequent processing, differ accordingly, so fault classification can be performed on the fault feature data through the full connection layer to obtain the fault type of the valve cooling equipment. Specifically, the full connection layer maps the fault feature data into a high-dimensional space through a series of weight matrices and linear transformations, then applies a nonlinear transformation through an activation function, and finally classifies the transformed feature data to determine the fault type of the device to be detected.
In this embodiment, the fault feature data is classified by the full connection layer. Before classification, a series of linear and nonlinear transformations is applied to the fault feature data, so that a more abstract, higher-level feature representation is extracted for fault classification, making the resulting fault type more accurate.
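The classification head described above — linear transformation, nonlinear activation, softmax over classes — can be sketched as follows. The class names, layer sizes, and random weights are illustrative assumptions; a deployed model would use trained parameters.

```python
# Sketch of the full connection classification head described above: a
# linear map, a nonlinearity, and a softmax over assumed fault classes.
# Weights are random stand-ins, not trained parameters.
import numpy as np

rng = np.random.default_rng(1)
FAULT_TYPES = ["normal", "pump_fault", "fan_fault"]  # illustrative classes

W1, b1 = rng.standard_normal((16, 8)) * 0.1, np.zeros(16)  # linear transform
W2, b2 = rng.standard_normal((3, 16)) * 0.1, np.zeros(3)   # class scores


def classify(fault_features: np.ndarray) -> str:
    hidden = np.maximum(W1 @ fault_features + b1, 0.0)  # nonlinear transform (ReLU)
    scores = W2 @ hidden + b2
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                                # softmax
    return FAULT_TYPES[int(np.argmax(probs))]


fault_type = classify(rng.standard_normal(8))
```

The argmax over the softmax probabilities plays the role of the fault-type decision in the detection result.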
In one embodiment, as shown in fig. 5, S400 includes:
S420, carrying out Fourier transform on the voiceprint time sequence data to obtain first characteristic data of the voiceprint time sequence data on a linear frequency spectrum.
S440, carrying out Mel filtering on the first characteristic data to obtain second characteristic data of the voiceprint time sequence data on Mel frequency spectrum.
S460, carrying out logarithmic transformation and discrete cosine transformation on the second characteristic data to determine sound characteristic data.
In the above embodiment, the MFCC (Mel-Frequency Cepstral Coefficients) is one way to extract sound features from the voiceprint time sequence data: the data is first converted into first feature data on the linear spectrum, then further converted into second feature data on the mel spectrum, and finally the second feature data undergoes logarithmic transformation and discrete cosine transformation to obtain the sound feature data. It should be noted that before processing the voiceprint time sequence data with the MFCC, the data may be preprocessed, including framing, pre-emphasis, and windowing.
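The preprocessing steps mentioned here (pre-emphasis, framing, windowing) can be sketched in numpy. The 0.97 coefficient, 25 ms frame length, and 10 ms hop at an assumed 16 kHz sample rate are common defaults, not values from this application.

```python
# Sketch of the MFCC preprocessing mentioned above: pre-emphasis, framing,
# and windowing. Frame length 400 and hop 160 correspond to 25 ms / 10 ms
# at an assumed 16 kHz sample rate — illustrative choices.
import numpy as np


def preprocess(signal: np.ndarray, frame_len: int = 400, hop: int = 160):
    # Pre-emphasis boosts high frequencies: y[n] = x[n] - 0.97 * x[n-1].
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Split into overlapping frames.
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    frames = np.stack([emphasized[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    # Apply a Hamming window to each frame to reduce spectral leakage.
    return frames * np.hamming(frame_len)


frames = preprocess(np.ones(1600))  # 0.1 s of dummy signal -> 8 frames
```

Each windowed frame is then Fourier transformed in the next step of the pipeline.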
Illustratively, each frame signal in the voiceprint time sequence data is Fourier transformed to convert the time-domain signal into a frequency-domain signal, determining the first feature data of the voiceprint time sequence data on the linear spectrum; each frame of the first feature data is then passed through a series of mel triangular filters to obtain the second feature data of the voiceprint time sequence data on the mel spectrum. The frequency conversion relationship between the linear spectrum and the mel spectrum is shown in formula (1):
$$f_{mel} = 2595\,\log_{10}\!\left(1 + \frac{f}{700}\right) \qquad (1)$$
In formula (1), $f$ denotes the linear frequency and $f_{mel}$ denotes the corresponding mel frequency. The mel triangular filters are a group of triangular filters arranged at equal intervals on the mel scale; they simulate the human ear's perception of sound frequency, and on the linear spectrum they can be expressed as the system of equations shown in formula (2):
$$H_m(f) = \begin{cases} 0, & f < f(m-1) \\[4pt] \dfrac{f - f(m-1)}{f(m) - f(m-1)}, & f(m-1) \le f \le f(m) \\[4pt] \dfrac{f(m+1) - f}{f(m+1) - f(m)}, & f(m) < f \le f(m+1) \\[4pt] 0, & f > f(m+1) \end{cases} \qquad (2)$$
In formula (2), $f(m)$ is the center frequency of the $m$-th mel filter and $f$ is the linear frequency. After the voiceprint time sequence data is filtered by the series of mel filters, the MFCC can be obtained through logarithmic transformation and discrete cosine transformation; this process is represented by formula (3):
$$C(n) = \sum_{m=1}^{M} \log\big(S(m)\big)\,\cos\!\left(\frac{\pi n\,(m - 0.5)}{M}\right), \quad n = 1, 2, \ldots, L \qquad (3)$$
In formula (3), $S(m)$ denotes the output of the $m$-th mel filter, $M$ denotes the number of triangular filters, $n$ denotes the order of the MFCC, $C(n)$ denotes the $n$-th cepstral coefficient, and $L$ denotes the number of coefficients. The mel cepstral coefficient sequence obtained through the logarithmic transformation and discrete cosine transformation can be used as a feature vector; splicing the sequences corresponding to each frame signal yields the MFCC feature representation of the voiceprint time sequence data, thereby determining the sound feature data.
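Formulas (1)–(3) can be sketched end to end in numpy: Hz-to-mel conversion, a triangular mel filterbank, then log filter energies and a discrete cosine transform. The filter count, FFT size, and sample rate below are illustrative assumptions.

```python
# Numpy sketch of formulas (1)-(3): Hz<->mel conversion, a triangular mel
# filterbank, and log + DCT to obtain cepstral coefficients. Filter count,
# FFT size, and sample rate are illustrative choices.
import numpy as np


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)       # formula (1)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)     # inverse of formula (1)


def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    """Triangular filters spaced evenly on the mel scale — formula (2)."""
    mels = np.linspace(0.0, hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):               # rising edge
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):              # falling edge
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return fb


def mfcc_from_power_spectrum(power, fb, n_ceps=13):
    """Log filter energies followed by a DCT — formula (3)."""
    s = np.maximum(fb @ power, 1e-10)               # mel filter outputs S(m)
    m = np.arange(1, fb.shape[0] + 1)
    return np.array([np.sum(np.log(s) * np.cos(np.pi * n * (m - 0.5) / fb.shape[0]))
                     for n in range(1, n_ceps + 1)])


fb = mel_filterbank()
coeffs = mfcc_from_power_spectrum(np.ones(257), fb)  # dummy power spectrum
```

Splicing the per-frame coefficient vectors, as the paragraph above describes, gives the MFCC feature representation of the whole recording.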
In this embodiment, using the MFCC for sound feature extraction on the voiceprint time sequence data removes redundant information while retaining data highly relevant to the sound features. Sound feature data of high robustness and accuracy can be extracted even in a noisy environment, and using it for fault detection yields a fault detection result of higher accuracy.
In order to make a clearer description of the fault detection method provided by the present application, a specific embodiment is described below with reference to fig. 6, where the specific embodiment includes the following steps:
S601, voiceprint time sequence data of equipment to be detected is obtained.
S602, performing Fourier transform on the voiceprint time sequence data to obtain first characteristic data of the voiceprint time sequence data on a linear frequency spectrum, performing Mel filtering on the first characteristic data to obtain second characteristic data of the voiceprint time sequence data on the Mel frequency spectrum, performing logarithmic transform and discrete cosine transform on the second characteristic data, and determining sound characteristic data.
And S603, performing spatial feature extraction on the sound feature data through a convolutional neural network to obtain fault spatial feature data.
S604, performing forward time feature extraction on the fault space feature data through the forward propagation layer to obtain a forward hidden state of the fault space feature data, and performing reverse time feature extraction on the fault space feature data through the reverse propagation layer to obtain a reverse hidden state of the fault space feature data.
S605, determining fault characteristic data according to the forward hidden state and the reverse hidden state.
S606, performing fault classification on the fault characteristic data through the full connection layer to obtain the fault type of the equipment to be detected, wherein the fault detection result comprises the fault type.
In this embodiment, as shown in fig. 7, the model structure of the fault detection model takes the MFCC feature parameter vector set as input; the convolutional neural network has two layers, that is, the MFCC feature parameter vector set undergoes two convolution processes. The fault spatial feature data output by the convolutional neural network is used as the input of the bidirectional long short-term memory network and is processed by the forward propagation layer and the reverse propagation layer, which output the fault feature data. The fault feature data serves as the input of the full connection layer, which performs fault classification, and the fault type is produced through the output layer.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which need not be completed at the same time but may be executed at different times, and which need not be executed sequentially but may be executed in turn or alternately with at least a portion of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a fault detection device for realizing the fault detection method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the fault detection device provided below may refer to the limitation of the fault detection method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 8, there is provided a fault detection apparatus 700 comprising: a data acquisition module 710, a feature extraction module 720, and a fault detection module 730, wherein:
A data acquisition module 710, configured to acquire voiceprint time sequence data of a device to be detected;
The feature extraction module 720 is configured to perform sound feature extraction on the voiceprint time sequence data to obtain sound feature data;
The fault detection module 730 is configured to call the trained fault detection model to perform fault detection with the sound feature data as input, so as to obtain a fault detection result;
The trained fault detection model is obtained by training an initial fault detection model through historical sound characteristic data of equipment to be detected, wherein the historical sound characteristic data carries fault detection result labels, and the initial fault detection model is constructed based on a convolutional neural network and a two-way long-short-term memory network.
In one embodiment, the fault detection module 730 is further configured to perform spatial feature extraction on the sound feature data through the convolutional neural network to obtain fault spatial feature data, perform time feature extraction on the sound feature data through the two-way long-short-term memory network to obtain fault time feature data, and determine a fault detection result according to the fault spatial feature data and the fault time feature data.
In one embodiment, the fault detection module 730 is further configured to perform spatial feature extraction on the sound feature data through a convolutional neural network to obtain fault spatial feature data, perform temporal feature extraction on the fault spatial feature data through a two-way long-short-term memory network to obtain fault feature data, and determine a fault detection result according to the fault feature data.
In one embodiment, the two-way long-short-term memory network includes a forward propagation layer and a reverse propagation layer, and the fault detection module 730 is further configured to perform forward time feature extraction on the fault spatial feature data through the forward propagation layer to obtain a forward hidden state of the fault spatial feature data, perform reverse time feature extraction on the fault spatial feature data through the reverse propagation layer to obtain a reverse hidden state of the fault spatial feature data, and determine the fault feature data according to the forward hidden state and the reverse hidden state.
In one embodiment, the fault detection model includes a full connection layer, and the fault detection module 730 is further configured to perform fault classification on the fault characteristic data through the full connection layer to obtain a fault type of the device to be detected, where the fault detection result includes the fault type.
In one embodiment, the feature extraction module 720 is configured to fourier transform the voiceprint time sequence data to obtain first feature data of the voiceprint time sequence data on the linear spectrum, perform mel filtering on the first feature data to obtain second feature data of the voiceprint time sequence data on the mel spectrum, perform logarithmic transformation and discrete cosine transformation on the second feature data, and determine the sound feature data.
The respective modules in the above-described fault detection device may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 9. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data such as a trained fault detection model. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a fault detection method.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 9 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In some embodiments, a computer device is provided, comprising a memory in which a computer program is stored, and a processor which implements the steps of the above-described fault detection method when the computer program is executed.
In some embodiments, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the above-described fault detection method.
In some embodiments, a computer program product is provided comprising a computer program which, when executed by a processor, implements the steps of the above-described fault detection method.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like, but are not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.
Claims (10)
1. A method of fault detection, the method comprising:
acquiring voiceprint time sequence data of equipment to be detected;
Extracting sound characteristics of the voiceprint time sequence data to obtain sound characteristic data;
Taking the sound characteristic data as input, and calling a trained fault detection model to perform fault detection to obtain a fault detection result;
The trained fault detection model is obtained by training an initial fault detection model through historical sound characteristic data of equipment to be detected, wherein the historical sound characteristic data carries fault detection result labels, and the initial fault detection model is constructed based on a convolutional neural network and a two-way long-short-term memory network.
2. The method of claim 1, wherein the invoking the trained fault detection model to perform fault detection to obtain a fault detection result comprises:
performing spatial feature extraction on the sound feature data through the convolutional neural network to obtain fault spatial feature data, and performing time feature extraction on the sound feature data through the two-way long-short-term memory network to obtain fault time feature data;
and determining a fault detection result according to the fault space characteristic data and the fault time characteristic data.
3. The method of claim 1, wherein the invoking the trained fault detection model to perform fault detection to obtain a fault detection result comprises:
Performing spatial feature extraction on the sound feature data through the convolutional neural network to obtain fault spatial feature data;
performing time feature extraction on the fault space feature data through the two-way long-short-term memory network to obtain fault feature data;
and determining a fault detection result according to the fault characteristic data.
4. The method of claim 3, wherein the two-way long-short-term memory network comprises a forward propagation layer and a reverse propagation layer;
The step of extracting the time characteristics of the fault space characteristic data through the two-way long-short-term memory network to obtain the fault characteristic data comprises the following steps:
Carrying out forward time feature extraction on the fault space feature data through the forward propagation layer to obtain a forward hiding state of the fault space feature data;
performing reverse time feature extraction on the fault space feature data through the reverse propagation layer to obtain a reverse hiding state of the fault space feature data;
And determining fault characteristic data according to the forward hiding state and the reverse hiding state.
5. The method of claim 3 or 4, wherein the fault detection model comprises a fully connected layer;
the determining a fault detection result according to the fault characteristic data comprises the following steps:
and carrying out fault classification on the fault characteristic data through the full connection layer to obtain the fault type of the equipment to be detected, wherein the fault detection result comprises the fault type.
6. The method according to any one of claims 1 to 4, wherein the performing acoustic feature extraction on the voiceprint time series data to obtain acoustic feature data includes:
Performing Fourier transform on the voiceprint time sequence data to obtain first characteristic data of the voiceprint time sequence data on a linear frequency spectrum;
Performing Mel filtering on the first characteristic data to obtain second characteristic data of the voiceprint time sequence data on Mel frequency spectrum;
and carrying out logarithmic transformation and discrete cosine transformation on the second characteristic data to determine the sound characteristic data.
7. A fault detection device, the device comprising:
the data acquisition module is used for acquiring voiceprint time sequence data of the equipment to be detected;
the feature extraction module is used for extracting sound features of the voiceprint time sequence data to obtain sound feature data;
The fault detection module is used for calling a trained fault detection model to perform fault detection by taking the sound characteristic data as input to obtain a fault detection result;
The trained fault detection model is obtained by training an initial fault detection model through historical sound characteristic data of equipment to be detected, wherein the historical sound characteristic data carries fault detection result labels, and the initial fault detection model is constructed based on a convolutional neural network and a two-way long-short-term memory network.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410369294.9A CN118155633A (en) | 2024-03-28 | 2024-03-28 | Fault detection method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118155633A true CN118155633A (en) | 2024-06-07 |
Family
ID=91285316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410369294.9A Pending CN118155633A (en) | 2024-03-28 | 2024-03-28 | Fault detection method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118155633A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119198925A (en) * | 2024-11-27 | 2024-12-27 | 咸阳通润机械制造有限公司 | Defect recognition method and system for oil accessories mud pump casing based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111933188B (en) | Sound event detection method based on convolutional neural network | |
CN112200244B (en) | Intelligent detection method for anomaly of aerospace engine based on hierarchical countermeasure training | |
CN111523509B (en) | Equipment fault diagnosis and health monitoring method integrating physical and depth expression characteristics | |
JP6289400B2 (en) | Method and system for detecting events in a signal subject to periodic stationary background noise | |
CN112183647A (en) | Transformer substation equipment sound fault detection and positioning method based on deep learning | |
CN111986699B (en) | Sound event detection method based on full convolution network | |
Meire et al. | Comparison of deep autoencoder architectures for real-time acoustic based anomaly detection in assets | |
CN111898644B (en) | An intelligent identification method for aerospace liquid engine health status under fault-free samples | |
CN107644231A (en) | A kind of generator amature method for diagnosing faults and device | |
CN109658943A (en) | A kind of detection method of audio-frequency noise, device, storage medium and mobile terminal | |
CN118155633A (en) | Fault detection method, device, computer equipment and storage medium | |
CN117056849A (en) | Unsupervised method and system for monitoring abnormal state of complex mechanical equipment | |
CN114121025A (en) | Voiceprint fault intelligent detection method and device for substation equipment | |
CN115406630A (en) | Method for detecting faults of wind driven generator blades through passive acoustic signals based on machine learning | |
CN117074866A (en) | Single-phase double-split parallel cable fault diagnosis method, device, equipment and medium | |
CN118364362B (en) | Fault diagnosis method based on constant Q transformation and migration learning | |
CN119068912A (en) | Industrial diagnosis method and related equipment based on voiceprint recognition | |
CN114722964B (en) | Digital audio tampering passive detection method and device based on power grid frequency space and time series feature fusion | |
CN112735466A (en) | Audio detection method and device | |
CN115392293B (en) | Transformer fault monitoring method, device, computer equipment and storage medium | |
Zhang et al. | Audio Fault Analysis for Industrial Equipment Based on Feature Metric Engineering with CNNs | |
CN114822590B (en) | Digital audio tampering passive detection method and device based on power grid frequency phase timing sequence characterization | |
Liu et al. | Nonintrusive wind blade fault detection using a deep learning approach by exploring acoustic information | |
CN118503893B (en) | Time sequence data anomaly detection method and device based on space-time characteristic representation difference | |
CN111048203A (en) | A cerebral blood flow regulating function evaluation device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||