CN118130608B - An online detection method for underwater structure welds based on improved deep convolution - Google Patents
- Publication number
- CN118130608B (application number CN202410045063.2A)
- Authority
- CN
- China
- Prior art keywords: gradient, convolution, pixel, welding, graph
- Prior art date: 2024-01-12
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G01—MEASURING; TESTING; G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/041—Analysing solids on the surface of the material, e.g. using Lamb, Rayleigh or shear waves
- G01N29/043—Analysing solids in the interior, e.g. by shear waves
- G01N29/4418—Processing the detected response signal by comparison with a model, e.g. best-fit, regression analysis
- G01N29/4481—Processing the detected response signal using neural networks
- G01N2291/2675—Indexing codes associated with G01N29/00: scanned objects; welds; seam, butt welding
Abstract
The invention relates to the technical field of underwater welding detection, and in particular to an online detection method for underwater structure welds based on improved deep convolution. The method comprises: S1, receiving the acoustic signal of marine underwater structure welding with a hydrophone sensor; S2, denoising the acoustic signal with the VMD algorithm and extracting the weak welding acoustic signal from the high-order IMF components; S3, performing a time-to-phase-space transformation of the effective pulses with a recurrence plot, transferring the one-dimensional signal from the time domain to the phase-space domain to expose more latent weld-state characteristics; S4, generating gradient maps from the recurrence plot and using them for feature extraction in the X, Y, and XY directions of the image; S5, classifying the image samples with a VGG16 deep convolutional network to complete the online judgment of weld quality. The invention enables online detection of underwater weld quality, supports real-time adjustment of welding quality, and offers a practical approach to online inspection of underwater weld quality.
Description
Technical Field
The invention relates to the technical field of underwater welding detection, in particular to an online detection method for an underwater structure welding seam based on improved deep convolution.
Background
Underwater welding technology plays a key role in offshore wind power, drilling platforms, ships, submarine pipelines, and similar fields. Defects in a weld can seriously degrade the reliability of a structural part, and repairing them promptly with underwater welding is often the necessary choice to limit losses. It is therefore important to develop a reliable online weld-quality detection technology.
Researchers worldwide have proposed various detection modes, including visual, optical, and acoustic methods, but most have limitations. Visual detection is easily disturbed by external factors such as light, temperature, and humidity; optical detection loses accuracy on complex welding defects and surface defects, and its cost is comparatively high. In contrast, acoustic non-destructive inspection offers advantages such as a wide inspection range and a large inspection depth, making it better suited to detecting internal defects of welded structures. Conventional acoustic-signal detection is nevertheless limited for online underwater weld-quality inspection. First, the signal-to-noise ratio of the acoustic signal is low, making effective pulse extraction difficult; second, the complex acoustic-signal features are hard to extract, and traditional time-domain and frequency-domain analysis does not apply; third, quality discrimination accuracy is low when the data set is of poor quality and the classes are unbalanced. The invention therefore provides an online detection method for underwater structure welds based on improved deep convolution.
Disclosure of Invention
The invention aims to remedy the defects of the prior art by providing an online detection method for underwater structure welds based on improved deep convolution, which achieves a higher quality-recognition rate and offers a practical approach to online recognition of underwater weld quality.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
An online detection method for an underwater structure weld based on improved deep convolution comprises the following steps:
S1, receiving the acoustic signal of marine underwater structure welding with a hydrophone sensor;
S2, denoising the acoustic signal with the VMD algorithm and extracting the weak welding acoustic signal from the high-order IMF components;
S3, performing a time-to-phase-space transformation of the effective pulses with a recurrence plot, transferring the one-dimensional signal from the time domain to the phase-space domain to expose more latent weld-state characteristics;
S4, generating gradient maps from the recurrence plot and using them for feature extraction in the X, Y, and XY directions of the image;
S5, classifying the image samples with a VGG16 deep convolutional network to complete the online judgment of weld quality.
Preferably, in step S2, a hydrophone is used to collect the acoustic signal of underwater welding; the signal is denoised by the VMD algorithm, which decomposes it into several intrinsic mode function (IMF) components, from which the effective acoustic pulses are extracted. The VMD algorithm operates as follows:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t)$$

where $\{u_k\} := \{u_1, \dots, u_K\}$ and $\{\omega_k\} := \{\omega_1, \dots, \omega_K\}$ are the set of modes and their center frequencies, $K$ is the number of IMF decomposition levels, $\partial_t$ denotes the partial derivative with respect to time, $\delta(t)$ is the unit impulse function, $j$ is the imaginary unit of the Hilbert transform, $t$ is time, $*$ denotes convolution, and $f(t)$ is the signal to be decomposed, a time-ordered data sequence;

introducing a penalty factor $\rho$ and a Lagrange multiplier $\gamma$ converts the constrained problem into an unconstrained variational problem; the augmented Lagrangian is:

$$L(\{u_k\},\{\omega_k\},\gamma) = \rho \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \gamma(t),\; f(t) - \sum_{k=1}^{K} u_k(t) \right\rangle$$

the problem is solved iteratively with an alternating-direction method of multipliers; repeatedly updating $u_k$, $\omega_k$, and $\gamma$ yields the optimal VMD solution:

$$\hat{U}_k^{n+1}(\omega) = \frac{\hat{F}(\omega) - \sum_{i \neq k} \hat{U}_i(\omega) + \hat{\gamma}(\omega)/2}{1 + 2\rho(\omega - \omega_k)^2}, \qquad \omega_k^{n+1} = \frac{\int_0^{\infty} \omega\,|\hat{U}_k(\omega)|^2\,d\omega}{\int_0^{\infty} |\hat{U}_k(\omega)|^2\,d\omega}, \qquad \hat{\gamma}^{n+1}(\omega) = \hat{\gamma}^{n}(\omega) + \tau\left(\hat{F}(\omega) - \sum_{k=1}^{K} \hat{U}_k^{n+1}(\omega)\right)$$

where $\hat{F}(\omega)$, $\hat{\gamma}(\omega)$, and $\hat{U}_k(\omega)$ are the Fourier transforms of $f(t)$, $\gamma(t)$, and $u_k(t)$ respectively, $n$ is the iteration count, and $\tau$ is the noise tolerance coefficient.
Preferably, in step S3, the key step of the recurrence plot is phase-space reconstruction. The recurrence plot converts the time-series data of the effective pulses of the underwater welding acoustic signal into image form, revealing the internal structure of the time series. The specific steps are as follows:
S31, for a time-series signal $u_k$ ($k = 1, 2, \dots, n$) with sampling interval $\Delta t$, determine a suitable embedding dimension $m$ and delay time $\tau$ through correlation-based theory, then reconstruct the time series; the reconstructed dynamical system is $x_i = [u_i, u_{i+\tau}, \dots, u_{i+(m-1)\tau}]$, where $i = 1, 2, \dots, n-(m-1)\tau$;
S32, calculate the distance $S_{ij}$ between points $x_i$ and $x_j$ in the reconstructed phase space:

$$S_{ij} = \| x_i - x_j \|$$

where $i, j = 1, 2, \dots, n-(m-1)\tau$ and $\|\cdot\|$ denotes the norm;
S33, calculate the recurrence value:

$$R(i,j) = \theta(\varepsilon_i - S_{ij})$$

where $\theta(\cdot)$ is the Heaviside function: $\theta(\cdot) = 1$ when its argument is greater than or equal to 0, and $\theta(\cdot) = 0$ when its argument is less than 0; $\varepsilon$ is a preset critical distance, which may take either a fixed or a variable value.
Preferably, in step S4, the recurrence plot is further converted into three gradient maps: an X-direction map, a Y-direction map, and an XY-direction map. The change along the X axis is the pixel value to the right of the current pixel (x+1) minus the pixel value to its left (x-1); the change along the Y axis is the pixel value below the current pixel (y+1) minus the pixel value above it (y-1). Once both components are computed they form a two-dimensional vector, the image gradient at that pixel. For an image function f(x, y) the gradient expression is:

$$\nabla f(x,y) = [G_x,\; G_y]^T = \left[\frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y}\right]^T$$

where $G_x = \partial f / \partial x$ is the X-direction gradient component, $G_y = \partial f / \partial y$ is the Y-direction gradient component, and $T$ denotes transposition.

The amplitude function $\mathrm{mag}(\nabla f)$, denoted $g(x, y)$, is:

$$g(x,y) = \mathrm{mag}(\nabla f) = \sqrt{G_x^2 + G_y^2}$$

and the direction-angle function $\phi(x, y)$ is:

$$\phi(x,y) = \arctan\left(\frac{G_y}{G_x}\right)$$
For digital images, which correspond to a two-dimensional discrete function, the derivatives are approximated by differences:

$$G_x(x,y) = H(x+1,y) - H(x-1,y)$$
$$G_y(x,y) = H(x,y+1) - H(x,y-1)$$

where $G_x(x,y)$ is the gradient of the pixel in the X direction, $G_y(x,y)$ is its gradient in the Y direction, and $H(x,y)$ is the gray value of the pixel. The gradient value $G(x,y)$ and gradient direction $\alpha(x,y)$ at pixel $(x,y)$ are therefore:

$$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}, \qquad \alpha(x,y) = \arctan\left(\frac{G_y(x,y)}{G_x(x,y)}\right)$$
Preferably, in step S5, the VGG16 network is trained by a back-propagation algorithm, with gradient descent optimizing the network parameters. The VGG16 convolutional neural network comprises 13 convolutional layers and 3 fully connected layers; its deeper structure, small convolution kernels, and small pooling windows allow it to learn abstract features well. All convolutional layers use the same kernel size, stride, and padding; stacking several small kernels simulates a larger receptive field while reducing the parameter count. Each convolutional layer extracts features with its kernels, max-pooling layers downsample the feature maps, and the fully connected layers map and classify the features, outputting a prediction probability for each class;
the method specifically comprises the following steps:
S51, after a mean-subtraction preprocessing step, the input image enters two convolution layers, each extracting features with 64 convolution kernels of size 3×3;
S52, the feature map is downsampled by a 2×2 max-pooling layer;
S53, a convolution block of two layers follows, each layer using 128 convolution kernels of size 3×3, with a 2×2 max-pooling layer after the block to further reduce the feature-map size;
S54, three convolution blocks follow, each containing three convolution layers that use 256, 512, and 512 convolution kernels of size 3×3 respectively for feature extraction;
S55, the features are mapped and classified through three fully connected layers.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention presents the weak welding acoustic signal from a phase-space perspective: the acoustic signal is first decomposed into several IMF components by the VMD algorithm, and the time series is then converted into phase space with a recurrence plot.
2. Extracting gradient features from the recurrence image improves the detection rate of the convolutional neural network. The VGG16 network can automatically learn and identify effective gradient-map features; compared with the classification results of three other networks (VGG19 and two ResNet variants), the method achieves efficient online weld-quality classification.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a welded steel plate experimental sample according to an embodiment of the present invention;
FIG. 3 is a diagram of an original audio signal according to an embodiment of the present invention;
FIG. 4 is a diagram of IMF components of an audio signal after VMD noise reduction in accordance with an embodiment of the present invention;
FIG. 5 is a recursive diagram of an embodiment of the present invention;
Fig. 6 is a gradient map of an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings, so that those skilled in the art can better understand the advantages and features of the invention and its protection scope is more clearly defined. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by one of ordinary skill in the art without inventive effort fall within the scope of the invention.
Referring to fig. 1, an on-line detection method for an underwater structure weld based on improved deep convolution includes the following steps:
Step 1, receiving acoustic signals of marine underwater structure welding by using a hydrophone sensor;
step 2, denoising the acoustic signals by using a VMD algorithm, and extracting weak welding acoustic signals from high-order IMF components;
Step 3, performing time-space domain transformation on the effective pulse by using a recursion diagram, and transferring one-dimensional signals from a time domain to a phase space domain to provide more potential weld state characteristics;
Step 4, further generating gradient maps from the recurrence plot and using them for feature extraction in the X, Y, and XY directions of the image;
And 5, classifying the image samples by using a VGG16 deep convolution network to finish the online judgment of the weld quality.
Referring to fig. 1 to 6, the implementation steps of the technical scheme provided by the invention are as follows:
Steps 1 and 2: as shown in fig. 2, after the experimental platform is built, steel plates are welded and sorted into qualified and unqualified samples. As shown in fig. 3, the hydrophone collects the original audio signal of the underwater welding. As shown in fig. 4, noise reduction is performed with the VMD algorithm, which decomposes the acoustic signal into 3 intrinsic mode components; the number of IMF decomposition levels K is set to 3, the bandwidth constraint alpha to 3000, the noise tolerance coefficient tau to 0, and the DC component to 0, and the effective acoustic-signal pulses are extracted. The operating principle of the VMD algorithm is as follows:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t)$$

where $\{u_k\} := \{u_1, \dots, u_K\}$ and $\{\omega_k\} := \{\omega_1, \dots, \omega_K\}$ are the set of modes and their center frequencies, $K$ is the number of IMF decomposition levels, $\partial_t$ denotes the partial derivative with respect to time, $\delta(t)$ is the unit impulse function, $j$ is the imaginary unit of the Hilbert transform, $t$ is time, $*$ denotes convolution, and $f(t)$ is the signal to be decomposed, a time-ordered data sequence;

introducing a penalty factor $\rho$ and a Lagrange multiplier $\gamma$ converts the constrained problem into an unconstrained variational problem; the augmented Lagrangian is:

$$L(\{u_k\},\{\omega_k\},\gamma) = \rho \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \gamma(t),\; f(t) - \sum_{k=1}^{K} u_k(t) \right\rangle$$

the problem is solved iteratively with an alternating-direction method of multipliers; repeatedly updating $u_k$, $\omega_k$, and $\gamma$ yields the optimal VMD solution:

$$\hat{U}_k^{n+1}(\omega) = \frac{\hat{F}(\omega) - \sum_{i \neq k} \hat{U}_i(\omega) + \hat{\gamma}(\omega)/2}{1 + 2\rho(\omega - \omega_k)^2}, \qquad \omega_k^{n+1} = \frac{\int_0^{\infty} \omega\,|\hat{U}_k(\omega)|^2\,d\omega}{\int_0^{\infty} |\hat{U}_k(\omega)|^2\,d\omega}, \qquad \hat{\gamma}^{n+1}(\omega) = \hat{\gamma}^{n}(\omega) + \tau\left(\hat{F}(\omega) - \sum_{k=1}^{K} \hat{U}_k^{n+1}(\omega)\right)$$

where $\hat{F}(\omega)$, $\hat{\gamma}(\omega)$, and $\hat{U}_k(\omega)$ are the Fourier transforms of $f(t)$, $\gamma(t)$, and $u_k(t)$ respectively, $n$ is the iteration count, and $\tau$ is the noise tolerance coefficient.
In this embodiment, the VMD algorithm can decompose a non-stationary signal under low signal-to-noise-ratio conditions and extract the effective pulses, separating them completely from the background. For acoustic signals collected in this complex environment, the noise-reduction effect is good and the denoised signal has a high signal-to-noise ratio.
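The frequency-domain update loop above can be sketched in NumPy. This is a minimal, illustrative implementation, not the patent's code: function and parameter names are assumptions, and for brevity it works on the full two-sided spectrum rather than mirroring the signal and using the one-sided analytic spectrum as published VMD implementations do, so mode amplitudes are only approximate.

```python
import numpy as np

def vmd_sketch(f, K=2, rho=2000.0, tau=0.0, init_omega=None, n_iter=200):
    """Minimal VMD sketch: Wiener-filter updates of K mode spectra (illustrative)."""
    N = len(f)
    F = np.fft.fft(f)
    freqs = np.fft.fftfreq(N)                      # normalized frequency axis
    u_hat = np.zeros((K, N), dtype=complex)        # mode spectra U_k(omega)
    omega = (np.linspace(0, 0.5, K + 2)[1:-1] if init_omega is None
             else np.asarray(init_omega, dtype=float))
    gamma_hat = np.zeros(N, dtype=complex)         # Lagrange multiplier spectrum
    half = N // 2                                  # positive-frequency half
    for _ in range(n_iter):
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-filter update centered on the current omega_k
            u_hat[k] = (F - others + gamma_hat / 2) / (1 + 2 * rho * (freqs - omega[k]) ** 2)
            # center frequency: power-weighted mean over positive frequencies
            power = np.abs(u_hat[k, :half]) ** 2
            omega[k] = np.sum(freqs[:half] * power) / (np.sum(power) + 1e-12)
        gamma_hat = gamma_hat + tau * (F - u_hat.sum(axis=0))  # tau=0 disables this
    modes = np.real(np.fft.ifft(u_hat, axis=1))
    return modes, omega
```

With the embodiment's settings (K=3, bandwidth constraint 3000, tau=0) the same loop would run with three modes; on a clean two-tone test signal the estimated center frequencies converge to the tone frequencies.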
Step 3: the key step of the recurrence plot is phase-space reconstruction. The recurrence plot generated from the effective pulses is shown in fig. 5; it converts the time-series data into image form. The specific steps are as follows:
3-1), for a time-series signal $u_k$ ($k = 1, 2, \dots, n$) with sampling interval $\Delta t$, determine a suitable embedding dimension $m$ and delay time $\tau$ through correlation-based theory, then reconstruct the time series; the reconstructed dynamical system is $x_i = [u_i, u_{i+\tau}, \dots, u_{i+(m-1)\tau}]$, where $i = 1, 2, \dots, n-(m-1)\tau$;
3-2), calculate the distance $S_{ij}$ between points $x_i$ and $x_j$ in the reconstructed phase space:

$$S_{ij} = \| x_i - x_j \|$$

where $i, j = 1, 2, \dots, n-(m-1)\tau$ and $\|\cdot\|$ denotes the norm.
3-3), calculate the recurrence value:

$$R(i,j) = \theta(\varepsilon_i - S_{ij})$$

where $\theta(\cdot)$ is the Heaviside function: $\theta(\cdot) = 1$ when its argument is greater than or equal to 0, and $\theta(\cdot) = 0$ when its argument is less than 0; $\varepsilon$ is a preset critical distance, which may take either a fixed or a variable value.
In this embodiment, converting the time-series data into image form with the recurrence plot enhances the separability of the data, helps reveal its internal structure and trends, and makes the data structure more intuitive through visualization.
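Steps 3-1) through 3-3) can be sketched directly in NumPy. The function name and default parameter values (m, tau, and the fixed-threshold heuristic for epsilon) are illustrative assumptions, not values from the patent.

```python
import numpy as np

def recurrence_plot(u, m=3, tau=2, eps=None):
    """Phase-space embedding x_i = [u_i, u_{i+tau}, ..., u_{i+(m-1)tau}]
    followed by the Heaviside threshold R(i, j) = theta(eps - S_ij)."""
    u = np.asarray(u, dtype=float)
    rows = len(u) - (m - 1) * tau                      # number of embedded vectors
    X = np.stack([u[k * tau: k * tau + rows] for k in range(m)], axis=1)
    # pairwise Euclidean distances S_ij = ||x_i - x_j||
    S = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * S.max()                            # simple fixed-threshold choice
    return (S <= eps).astype(np.uint8)                 # 1 where theta(eps - S_ij) fires
```

For a periodic signal such as a sine wave, the resulting binary matrix shows the diagonal line structure typical of recurrence plots.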
Step 4: as shown in fig. 6, the recurrence plot is further converted into gradient maps, making the directional trends of the data more evident. The variation of a pixel in the generated image sample in the X and Y directions is computed from its neighboring pixels: the change along the X axis is the pixel value to the right of the current pixel (x+1) minus the pixel value to its left (x-1); the change along the Y axis is the pixel value below the current pixel (y+1) minus the pixel value above it (y-1). Once both components are computed they form a two-dimensional vector, the image gradient of the pixel. For an image function f(x, y) the gradient expression is:

$$\nabla f(x,y) = [G_x,\; G_y]^T = \left[\frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y}\right]^T$$

where $G_x = \partial f / \partial x$ is the X-direction gradient component, $G_y = \partial f / \partial y$ is the Y-direction gradient component, and $T$ denotes transposition.

The amplitude function $\mathrm{mag}(\nabla f)$, denoted $g(x, y)$, is:

$$g(x,y) = \mathrm{mag}(\nabla f) = \sqrt{G_x^2 + G_y^2}$$

and the direction-angle function $\phi(x, y)$ is:

$$\phi(x,y) = \arctan\left(\frac{G_y}{G_x}\right)$$
For digital images, which correspond to a two-dimensional discrete function, the derivatives are approximated by differences:

$$G_x(x,y) = H(x+1,y) - H(x-1,y)$$
$$G_y(x,y) = H(x,y+1) - H(x,y-1)$$

where $G_x(x,y)$ is the gradient of the pixel in the X direction, $G_y(x,y)$ is its gradient in the Y direction, and $H(x,y)$ is the gray value of the pixel. The gradient value $G(x,y)$ and gradient direction $\alpha(x,y)$ at pixel $(x,y)$ are therefore:

$$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}, \qquad \alpha(x,y) = \arctan\left(\frac{G_y(x,y)}{G_x(x,y)}\right)$$
In this embodiment, the gradient-map method eases the difficulty of feature extraction and makes the data features more evident, which benefits the subsequent quality judgment; it is suitable for analyzing complex, non-stationary signals.
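The central-difference formulas above can be sketched as follows. The function name is illustrative; the choice of x as the row index and y as the column index is an assumption, and treating the gradient magnitude G as the "XY-direction" map is one natural reading of the patent's three-map scheme, not a confirmed detail.

```python
import numpy as np

def gradient_maps(H):
    """Central-difference gradients matching G_x = H(x+1, y) - H(x-1, y)
    and G_y = H(x, y+1) - H(x, y-1), plus magnitude and direction."""
    H = np.asarray(H, dtype=float)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[1:-1, :] = H[2:, :] - H[:-2, :]   # x treated as the row index (assumption)
    Gy[:, 1:-1] = H[:, 2:] - H[:, :-2]   # y treated as the column index
    G = np.sqrt(Gx ** 2 + Gy ** 2)       # gradient magnitude (candidate XY map)
    alpha = np.arctan2(Gy, Gx)           # gradient direction; atan2 avoids /0
    return Gx, Gy, G, alpha
```

On a linear ramp image the interior gradient is constant, which makes the formulas easy to verify by hand.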
Step 5: as shown in fig. 2, the process of identifying and classifying the data with the VGG16 neural network is as follows:
5-1), after a mean-subtraction preprocessing step, the input image enters two convolution layers (conv1-1 and conv1-2), each using 64 convolution kernels of size 3×3 with stride 1 for feature extraction.
5-2), the feature map is downsampled by a 2×2 max-pooling layer (pool1) to reduce computation and overfitting.
5-3), a convolution block of two layers follows (conv2-1 and conv2-2), each layer using 128 convolution kernels of size 3×3 for feature extraction; the block is followed by a 2×2 max-pooling layer (pool2) that further reduces the feature-map size.
5-4), three convolution blocks follow (conv3-1 through conv3-3, conv4-1 through conv4-3, conv5-1 through conv5-3), each containing three convolution layers that use 256, 512, and 512 convolution kernels of size 3×3 respectively for feature extraction. Each block is again followed by a 2×2 max-pooling layer (pool3 through pool5).
5-5), the features are mapped and classified through three fully connected layers (FC-4096, FC-4096, and FC-1000). The first two fully connected layers have 4096 hidden units each; the last outputs the model's prediction.
In this embodiment, the overall VGG16 architecture builds a deep network by stacking many small convolution and pooling layers, which strengthens the model's expressive power; sensible network design and parameter sharing give it high efficiency and training speed while maintaining strong performance. The gradient maps are trained and classified with the VGG16 deep neural network on input images of size 224×224. The data set contains 760 training samples (380 qualified, 380 unqualified) and 40 test samples (20 qualified, 20 unqualified). The training batch size is set to 16, the test batch size to 2, the number of iterations to 20, and the learning rate to 0.001; test accuracy is reported to 2 decimal places. Experiments show that VGG16 reaches 97.5% accuracy in discriminating the gradient maps, realizing online weld-quality recognition.
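The layer layout described above (13 convolutional layers in five blocks, each block followed by 2×2 max pooling, then three fully connected layers) can be checked with a small configuration walk-through. This helper is purely illustrative; it counts parameters under the standard 3×3-kernel, stride-1, padding-1 convention and is not the patent's training code.

```python
# VGG16 feature extractor: integers are conv output channels, 'M' is a 2x2 max-pool
VGG16_CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
             512, 512, 512, 'M', 512, 512, 512, 'M']

def vgg16_summary(in_size=224, in_ch=3, num_classes=1000):
    """Walk the config, tracking spatial size, conv-layer count, and parameter count."""
    size, ch, convs, params = in_size, in_ch, 0, 0
    for v in VGG16_CFG:
        if v == 'M':
            size //= 2                         # 2x2 max-pool halves the feature map
        else:
            params += (3 * 3 * ch + 1) * v     # 3x3 kernels + bias; padding keeps size
            ch, convs = v, convs + 1
    flat = ch * size * size                    # flattened input to the classifier
    for out in (4096, 4096, num_classes):      # three fully connected layers
        params += (flat + 1) * out
        flat = out
    return convs, size, params
```

For a 224×224 input this yields 13 convolutional layers, a 7×7×512 map entering the classifier, and 138,357,544 parameters with 1000 output classes; for the two-class weld task the final layer shrinks accordingly.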
The description and practice of the invention disclosed herein will be readily apparent to those skilled in the art, and the invention may be modified and adapted in various ways without departing from its principles. Accordingly, modifications or improvements that do not depart from the spirit of the invention are likewise considered within its scope.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410045063.2A CN118130608B (en) | 2024-01-12 | 2024-01-12 | An online detection method for underwater structure welds based on improved deep convolution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118130608A CN118130608A (en) | 2024-06-04 |
CN118130608B true CN118130608B (en) | 2025-02-18 |
Family
ID=91233370
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410045063.2A Active CN118130608B (en) | 2024-01-12 | 2024-01-12 | An online detection method for underwater structure welds based on improved deep convolution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118130608B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111664365A (en) * | 2020-06-07 | 2020-09-15 | 东北石油大学 | Oil and gas pipeline leakage detection method based on improved VMD and 1DCNN |
CN114255220A (en) * | 2021-12-21 | 2022-03-29 | 徐州徐工挖掘机械有限公司 | Weld quality detection method based on Transformer neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116740387A (en) * | 2023-05-20 | 2023-09-12 | 西北工业大学 | Underwater noise identification method based on continuous wavelet transformation and improved residual neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | An intelligent diagnosis scheme based on generative adversarial learning deep neural networks and its application to planetary gearbox fault pattern recognition | |
CN110823574A (en) | A Fault Diagnosis Method Based on Semi-Supervised Learning Deep Adversarial Networks | |
CN109919123B (en) | Oil spill detection method on sea surface based on multi-scale feature deep convolutional neural network | |
CN114065809B (en) | A method, device, electronic device and storage medium for identifying abnormal noise of a passenger car | |
CN111931820A (en) | Water target radiation noise LOFAR spectrogram spectrum extraction method based on convolution residual error network | |
CN115114949B (en) | A method and system for intelligent identification of ship targets based on underwater acoustic signals | |
CN116680639B (en) | A deep learning-based anomaly detection method for deep-sea submersible sensor data | |
Yao et al. | An adaptive anti-noise network with recursive attention mechanism for gear fault diagnosis in real-industrial noise environment condition | |
Jin et al. | Defect identification of adhesive structure based on DCGAN and YOLOv5 | |
CN111582137A (en) | Rolling bearing signal reconstruction method and system | |
CN117574158A (en) | Wind turbine generator gearbox fault diagnosis method and system | |
CN116975527A (en) | Fault diagnosis method based on soft thresholding and wavelet transformation denoising | |
CN113138377B (en) | Self-adaptive bottom reverberation suppression method based on multi-resolution binary singular value decomposition | |
Su et al. | Deep learning seismic damage assessment with embedded signal denoising considering three-dimensional time–frequency feature correlation | |
CN114861736B (en) | Internal defect positioning device and internal defect positioning method based on GIALDN network | |
CN118130608B (en) | An online detection method for underwater structure welds based on improved deep convolution | |
CN113782044B (en) | Voice enhancement method and device | |
CN119625383A (en) | A method for feature extraction and classification of underwater small targets based on active sonar | |
CN118711207A (en) | A texture segmentation and enhancement method and system for seismic profile images | |
CN116859461B (en) | Multiple imaging method and system | |
CN118473544A (en) | Ship radiation signal background noise suppression method and system based on unsupervised learning | |
CN117058443B (en) | Pipeline magnetic flux leakage image identification method based on improved residual error shrinkage network | |
CN117574056A (en) | Wide-area electromagnetic data denoising method and system based on hybrid neural network model | |
CN116429430A (en) | A bearing fault detection method, system, device and medium based on an adaptive multi-scale enhanced dictionary learning framework | |
CN116913316A (en) | Power transformer typical fault voiceprint diagnosis method based on Mosaic data enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |