
CN118130608B - An online detection method for underwater structure welds based on improved deep convolution - Google Patents


Info

Publication number
CN118130608B
CN118130608B (application CN202410045063.2A)
Authority
CN
China
Prior art keywords
gradient
convolution
pixel
welding
graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410045063.2A
Other languages
Chinese (zh)
Other versions
CN118130608A (en)
Inventor
胡泮
陈然
徐鑫慧
沐子轩
华亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202410045063.2A
Publication of CN118130608A
Application granted
Publication of CN118130608B

Links

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01N — INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 29/00 — Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N 29/04 — Analysing solids
    • G01N 29/041 — Analysing solids on the surface of the material, e.g. using Lamb, Rayleigh or shear waves
    • G01N 29/043 — Analysing solids in the interior, e.g. by shear waves
    • G01N 29/44 — Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N 29/4409 — Processing the detected response signal by comparison
    • G01N 29/4418 — Processing by comparison with a model, e.g. best-fit, regression analysis
    • G01N 29/4481 — Neural networks
    • G01N 2291/00 — Indexing codes associated with group G01N29/00
    • G01N 2291/26 — Scanned objects
    • G01N 2291/267 — Welds
    • G01N 2291/2675 — Seam, butt welding

Landscapes

  • Physics & Mathematics (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biochemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of underwater welding detection, and in particular to an online detection method for underwater structure welds based on improved deep convolution. The method comprises: S1, receiving the acoustic signal of marine underwater structure welding with a hydrophone sensor; S2, denoising the acoustic signal with the VMD algorithm and extracting the weak welding acoustic signal from the high-order IMF components; S3, applying a recurrence plot to transform the effective pulse from the time domain into the phase-space domain, providing more latent weld-state features; S4, generating gradient maps from the recurrence plot and extracting features in the X, Y, and XY directions of the image; S5, classifying the image samples with a VGG16 deep convolutional network to complete the online judgment of weld quality. The invention realizes online detection of underwater weld quality, allows the welding quality to be adjusted in real time, and offers an approach to online detection of underwater weld quality.

Description

On-line detection method for underwater structure welds based on improved deep convolution
Technical Field
The invention relates to the technical field of underwater welding detection, in particular to an online detection method for an underwater structure welding seam based on improved deep convolution.
Background
Underwater welding technology plays a key role in offshore wind power, drilling platforms, ships, submarine pipelines, and similar fields. Defects in a weld can seriously compromise the reliability of the structural part, so prompt repair by underwater welding is the necessary choice for limiting losses. It is therefore important to develop reliable online weld-quality detection technology.
Researchers at home and abroad have proposed visual, optical, acoustic, and other detection modes, but most have limitations. Visual detection is easily disturbed by external factors such as light, temperature, and humidity; optical detection loses accuracy on complex welding and surface defects, and its cost is relatively high. In contrast, acoustic non-destructive inspection offers advantages such as a wide detection range and a large detection depth, and is better suited to detecting internal defects of welded structures. Conventional acoustic-signal techniques are nevertheless limited when applied to online quality detection of underwater welds: first, the signal-to-noise ratio of the acoustic signal is low and effective pulses are difficult to extract; second, the complex acoustic-signal features are hard to extract, and traditional time-domain and frequency-domain analysis does not apply; third, quality-discrimination accuracy is low when the data set is of poor quality and its classes are unbalanced. The invention therefore provides an online detection method for underwater structure welds based on improved deep convolution.
Disclosure of Invention
The invention aims to remedy these defects of the prior art by providing an online detection method for underwater structure welds based on improved deep convolution, which achieves a higher quality-recognition rate and offers an approach to online recognition of underwater weld quality.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
An online detection method for underwater structure welds based on improved deep convolution comprises the following steps:
S1: receiving the acoustic signal of marine underwater structure welding with a hydrophone sensor;
S2: denoising the acoustic signal with the VMD algorithm and extracting the weak welding acoustic signal from the high-order IMF components;
S3: applying a recurrence plot to transform the effective pulse from the time domain into the phase-space domain, providing more latent weld-state features;
S4: generating gradient maps from the recurrence plot and using them to extract features in the X, Y, and XY directions of the image;
S5: classifying the image samples with a VGG16 deep convolutional network to complete the online judgment of weld quality.
Preferably, in step S2, a hydrophone collects the acoustic signal of the underwater welding; the signal is denoised by the VMD algorithm, decomposed into several intrinsic mode function (IMF) components, and the effective acoustic-signal pulses are extracted. The VMD algorithm operates as follows:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k=1}^{K} \left\| \partial_t\!\left[\left(\phi(t) + \frac{j}{\pi t}\right) * u_k(t)\right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t)$$
where $\{u_k\} := \{u_1, \ldots, u_K\}$ and $\{\omega_k\} := \{\omega_1, \ldots, \omega_K\}$ are the set of all modes and their centre frequencies, K is the number of IMF decomposition levels, $\partial_t$ denotes the partial derivative with respect to time, $\phi(t)$ is the unit impulse function, j is the imaginary unit of the Hilbert transform, t is time, $*$ denotes convolution, and f(t) is the signal to be decomposed, a set of time-series data;
Introducing the penalty factor ρ and the Lagrange multiplier γ converts the constrained problem into an unconstrained variational problem; the augmented Lagrangian is
$$L(\{u_k\},\{\omega_k\},\gamma) = \rho \sum_{k=1}^{K} \left\| \partial_t\!\left[\left(\phi(t) + \frac{j}{\pi t}\right) * u_k(t)\right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \gamma(t),\; f(t) - \sum_{k=1}^{K} u_k(t) \right\rangle$$
The problem is solved iteratively by the alternating direction method of multipliers; continually updating $u_k$, $\omega_k$, and γ yields the optimal-solution formulas of VMD:
$$\hat{U}_k^{\,n+1}(\omega) = \frac{F(\omega) - \sum_{i \neq k} U_i(\omega) + \gamma(\omega)/2}{1 + 2\rho\,(\omega - \omega_k)^2}$$
$$\omega_k^{\,n+1} = \frac{\int_0^{\infty} \omega\,\lvert U_k^{\,n+1}(\omega)\rvert^2\, d\omega}{\int_0^{\infty} \lvert U_k^{\,n+1}(\omega)\rvert^2\, d\omega}$$
$$\gamma^{\,n+1}(\omega) = \gamma^{\,n}(\omega) + \tau\!\left( F(\omega) - \sum_{k} U_k^{\,n+1}(\omega) \right)$$
where F(ω), γ(ω), and $U_k(\omega)$ are the Fourier transforms of f(t), γ(t), and $u_k(t)$ respectively, n is the iteration number, and τ is the noise-tolerance coefficient.
Preferably, in step S3, the key step of the recurrence plot is phase-space reconstruction. The recurrence plot converts the time-series data of the effective pulses of the underwater welding acoustic signal into image form, revealing the internal structure of the time series. It specifically comprises the following steps:
S31: for a time-series signal $u_k$ (k = 1, 2, ..., n), determine its sampling interval Δt, and determine a suitable embedding dimension m and delay time τ by the relevant theoretical calculation; then reconstruct the time series. The reconstructed dynamical system is $x_i = [u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau}]$, where i = 1, 2, ..., n − (m − 1)τ;
S32: calculate the distance $S_{ij}$ between points $x_i$ and $x_j$ in the reconstructed phase space:
$$S_{ij} = \| x_i - x_j \|$$
where i, j = 1, 2, ..., n − (m − 1)τ and ‖·‖ denotes the norm;
S33: calculate the recurrence value:
$$R(i,j) = \theta(\varepsilon_i - S_{ij})$$
where θ(·) is the Heaviside function — 1 when its argument is greater than or equal to 0, and 0 when its argument is less than 0 — and ε is a preset critical distance that may take a fixed or a variable value.
Preferably, in step S4, the recurrence plot is further converted into three gradient maps: an X-direction gradient map, a Y-direction gradient map, and an XY-direction gradient map. The X-axis change is the pixel value to the right of the current pixel (X+1) minus the pixel value to its left (X−1); the Y-axis change is the pixel value below the current pixel (Y+1) minus the pixel value above it (Y−1). The two components form a two-dimensional vector, the image gradient of that pixel. The gradient of the image function f(x, y) is:
$$\nabla f(x,y) = [G_x,\; G_y]^{T} = \left[\frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y}\right]^{T}$$
where $G_x = \partial f/\partial x$ is the X-direction gradient component, $G_y = \partial f/\partial y$ is the Y-direction gradient component, and T denotes the transpose.
The magnitude function mag(∇f), denoted g(x, y), is
$$g(x,y) = \sqrt{G_x^2 + G_y^2}$$
and the direction-angle function is
$$\varphi(x,y) = \arctan\!\left(\frac{G_y}{G_x}\right).$$
For a digital image this corresponds to the gradient of a two-dimensional discrete function, the derivatives being approximated by differences:
$$G_x(x,y) = H(x+1,\,y) - H(x-1,\,y)$$
$$G_y(x,y) = H(x,\,y+1) - H(x,\,y-1)$$
where $G_x(x,y)$ is the gradient value of the pixel in the X direction, $G_y(x,y)$ the gradient value in the Y direction, and H(x, y) the gray value of the pixel.
Thus, the gradient value G(x, y) and the gradient direction α(x, y) at the pixel (x, y) are respectively:
$$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}, \qquad \alpha(x,y) = \arctan\!\left(\frac{G_y(x,y)}{G_x(x,y)}\right)$$
Preferably, in step S5, the VGG16 network is trained by back-propagation, with gradient descent used to optimise the network parameters. The VGG16 convolutional neural network comprises 13 convolution layers and 3 fully connected layers; its deeper structure, with small convolution kernels and pooling sampling domains, learns abstract features well. All convolution layers use the same kernel size, stride, and padding, and stacking several small kernels simulates a larger receptive field while reducing the parameter count. Each convolution layer extracts features with its kernels, max-pooling layers downsample the feature maps, and the fully connected layers map and classify the features, outputting the predicted probability of each class;
the method specifically comprises the following steps:
S51: after a mean-subtraction preprocessing step, the input image enters two convolution layers, each using 64 3×3 convolution kernels for feature extraction;
S52: the feature map is downsampled by a 2×2 max-pooling layer;
S53: two convolution blocks follow, each containing two convolution layers that use 128 3×3 convolution kernels for feature extraction; each block is followed by a 2×2 max-pooling layer that further reduces the feature-map size;
S54: three convolution blocks follow, each containing three convolution layers, the blocks using 256, 512, and 512 3×3 convolution kernels respectively for feature extraction;
S55: the features are mapped and classified by three fully connected layers.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention reveals the weak welding acoustic signal from a phase-space perspective: the acoustic signal is first decomposed into several IMF components by the VMD algorithm, and the time series is then converted into phase space with a recurrence plot.
2. Extracting gradient features from the recurrence image improves the detection rate of the convolutional neural network; the VGG16 network automatically learns and identifies the effective gradient-image features, and comparison with the classification results of three other networks (VGG19 and ResNet variants) shows that the method achieves efficient online weld-quality classification.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a welded steel plate experimental sample according to an embodiment of the present invention;
FIG. 3 is a diagram of an original audio signal according to an embodiment of the present invention;
FIG. 4 is a diagram of IMF components of an audio signal after VMD noise reduction in accordance with an embodiment of the present invention;
FIG. 5 is a recurrence plot of an embodiment of the present invention;
Fig. 6 is a gradient map of an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings, so that those skilled in the art can better understand the advantages and features of the invention and its protection scope is more clearly defined. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the invention.
Referring to fig. 1, an on-line detection method for an underwater structure weld based on improved deep convolution includes the following steps:
Step 1: receive the acoustic signal of marine underwater structure welding with a hydrophone sensor;
Step 2: denoise the acoustic signal with the VMD algorithm and extract the weak welding acoustic signal from the high-order IMF components;
Step 3: apply a recurrence plot to transform the effective pulse from the time domain into the phase-space domain, providing more latent weld-state features;
Step 4: generate gradient maps from the recurrence plot and use them to extract features in the X, Y, and XY directions of the image;
Step 5: classify the image samples with a VGG16 deep convolutional network to complete the online judgment of weld quality.
Referring to fig. 1 to 6, the implementation steps of the technical scheme provided by the invention are as follows:
Steps 1 and 2: as shown in fig. 2, the steel plates are welded after the experimental platform is built, and the welds are sorted into qualified and unqualified classes. As shown in fig. 3, the hydrophone collects the raw audio signal of the underwater welding. As shown in fig. 4, noise reduction is performed with the VMD algorithm, which decomposes the acoustic signal into 3 intrinsic mode components: the number of IMF decomposition levels K is set to 3, the bandwidth-constraint penalty α to 3000, the noise-tolerance coefficient τ to 0, and the DC component to 0, and the effective acoustic-signal pulses are extracted. The VMD algorithm operates as follows:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k=1}^{K} \left\| \partial_t\!\left[\left(\phi(t) + \frac{j}{\pi t}\right) * u_k(t)\right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t)$$
where $\{u_k\} := \{u_1, \ldots, u_K\}$ and $\{\omega_k\} := \{\omega_1, \ldots, \omega_K\}$ are the set of all modes and their centre frequencies, K is the number of IMF decomposition levels, $\partial_t$ denotes the partial derivative with respect to time, $\phi(t)$ is the unit impulse function, j is the imaginary unit of the Hilbert transform, t is time, $*$ denotes convolution, and f(t) is the signal to be decomposed, a set of time-series data;
Introducing the penalty factor ρ and the Lagrange multiplier γ converts the constrained problem into an unconstrained variational problem; the augmented Lagrangian is
$$L(\{u_k\},\{\omega_k\},\gamma) = \rho \sum_{k=1}^{K} \left\| \partial_t\!\left[\left(\phi(t) + \frac{j}{\pi t}\right) * u_k(t)\right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \gamma(t),\; f(t) - \sum_{k=1}^{K} u_k(t) \right\rangle$$
The problem is solved iteratively by the alternating direction method of multipliers; continually updating $u_k$, $\omega_k$, and γ yields the optimal-solution formulas of VMD:
$$\hat{U}_k^{\,n+1}(\omega) = \frac{F(\omega) - \sum_{i \neq k} U_i(\omega) + \gamma(\omega)/2}{1 + 2\rho\,(\omega - \omega_k)^2}$$
$$\omega_k^{\,n+1} = \frac{\int_0^{\infty} \omega\,\lvert U_k^{\,n+1}(\omega)\rvert^2\, d\omega}{\int_0^{\infty} \lvert U_k^{\,n+1}(\omega)\rvert^2\, d\omega}$$
$$\gamma^{\,n+1}(\omega) = \gamma^{\,n}(\omega) + \tau\!\left( F(\omega) - \sum_{k} U_k^{\,n+1}(\omega) \right)$$
where F(ω), γ(ω), and $U_k(\omega)$ are the Fourier transforms of f(t), γ(t), and $u_k(t)$ respectively, n is the iteration number, and τ is the noise-tolerance coefficient.
In this embodiment, the VMD algorithm can decompose the non-stationary signal at a low signal-to-noise ratio and extract the effective pulses, separating them cleanly from the background; the acoustic signal collected in the complex underwater environment is therefore well denoised and has a high signal-to-noise ratio after processing.
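The frequency-domain update equations above can be sketched in compact NumPy. This is a minimal illustrative version, not the patent's implementation: it filters the spectrum symmetrically in |ω| so the reconstructed modes stay real, the toy two-tone signal is an assumed stand-in for a welding acoustic record, and only the parameter values K = 3, α = 3000, τ = 0 mirror the embodiment's settings.

```python
import numpy as np

def vmd(f, K=3, alpha=3000.0, tau=0.0, n_iter=200, tol=1e-7):
    """Minimal VMD sketch: split signal f into K band-limited modes via the
    frequency-domain ADMM updates (a Wiener filter around each centre frequency)."""
    N = len(f)
    freqs = np.fft.fftfreq(N)                # normalised frequency axis
    F = np.fft.fft(f)                        # spectrum of the input signal
    U = np.zeros((K, N), dtype=complex)      # mode spectra U_k(w)
    omega = np.linspace(0.0, 0.25, K)        # initial centre frequencies w_k
    Gamma = np.zeros(N, dtype=complex)       # Lagrange-multiplier spectrum

    for _ in range(n_iter):
        U_prev = U.copy()
        for k in range(K):
            # residual spectrum seen by mode k, plus the multiplier term
            residual = F - U.sum(axis=0) + U[k] + Gamma / 2
            # Wiener-filter update; |freqs| keeps the spectrum symmetric (real modes)
            U[k] = residual / (1 + 2 * alpha * (np.abs(freqs) - omega[k]) ** 2)
            # centre frequency: power-weighted mean over the positive half-axis
            power = np.abs(U[k][: N // 2]) ** 2
            if power.sum() > 1e-12:
                omega[k] = (freqs[: N // 2] * power).sum() / power.sum()
        Gamma = Gamma + tau * (F - U.sum(axis=0))    # tau = 0 -> multiplier frozen
        change = np.sum(np.abs(U - U_prev) ** 2) / (np.sum(np.abs(U_prev) ** 2) + 1e-12)
        if change < tol:
            break
    return np.real(np.fft.ifft(U, axis=1))           # time-domain IMFs

# toy stand-in for a welding acoustic record: two tones plus weak noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)
sig = (np.cos(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 60 * t)
       + 0.05 * rng.standard_normal(1000))
imfs = vmd(sig, K=3, alpha=3000.0, tau=0.0)
print(imfs.shape)   # (3, 1000)
```

The centre frequencies drift toward the dominant spectral peaks during iteration; a production implementation would add boundary mirroring and a dedicated DC-mode option, as in the embodiment's settings.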
Step 3: the key step of the recurrence plot is phase-space reconstruction. A recurrence plot generated from the effective pulses is shown in fig. 5; it converts the time-series data into image form. The specific steps are:
3-1) For a time-series signal $u_k$ (k = 1, 2, ..., n), determine its sampling interval Δt, and determine a suitable embedding dimension m and delay time τ by the relevant theoretical calculation; then reconstruct the time series. The reconstructed dynamical system is $x_i = [u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau}]$, where i = 1, 2, ..., n − (m − 1)τ;
3-2) Calculate the distance $S_{ij}$ between points $x_i$ and $x_j$ in the reconstructed phase space:
$$S_{ij} = \| x_i - x_j \|$$
where i, j = 1, 2, ..., n − (m − 1)τ and ‖·‖ denotes the norm.
3-3) Calculate the recurrence value:
$$R(i,j) = \theta(\varepsilon_i - S_{ij})$$
where θ(·) is the Heaviside function — 1 when its argument is greater than or equal to 0, and 0 when its argument is less than 0 — and ε is a preset critical distance that may take a fixed or a variable value.
In this embodiment, converting the time-series data into image form with the recurrence plot enhances the separability of the data, helps reveal its internal structure and trends, and makes the data structure more intuitive through visualisation.
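Steps 3-1) to 3-3) can be sketched directly in NumPy. This is an illustrative sketch: the embedding dimension m, delay τ, and threshold ε (here a fixed fraction of the maximum distance) are assumed values, not the embodiment's tuned parameters.

```python
import numpy as np

def recurrence_plot(u, m=3, tau=2, eps=None):
    """Phase-space embedding of a 1-D series u, then the binary recurrence
    matrix R(i, j) = Heaviside(eps - ||x_i - x_j||)."""
    n = len(u)
    rows = n - (m - 1) * tau            # number of embedded state vectors
    # x_i = [u_i, u_{i+tau}, ..., u_{i+(m-1)tau}]
    X = np.stack([u[i : i + rows] for i in range(0, m * tau, tau)], axis=1)
    # pairwise Euclidean distances S_ij = ||x_i - x_j||
    S = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * S.max()             # fixed threshold; could also vary per row
    return (S <= eps).astype(np.uint8)  # theta(eps - S_ij)

# a periodic toy series gives the characteristic diagonal-line texture
t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_plot(np.sin(t))
print(R.shape, R[0, 0])   # (196, 196) 1 — square matrix, diagonal all ones
```

The resulting binary matrix, rendered as an image, is the recurrence plot fed to the gradient-map stage.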
Step 4: a gradient map is further generated from the recurrence plot, as shown in fig. 6, making the trend of the data in each direction more pronounced. The change of a given pixel of the generated image sample in the X and Y directions is computed by comparison with its neighbouring pixels: the X-axis change is the pixel value to the right of the current pixel (X+1) minus the pixel value to its left (X−1), and the Y-axis change is the pixel value below the current pixel (Y+1) minus the pixel value above it (Y−1). The two components form a two-dimensional vector, the image gradient of that pixel. The gradient of the image function f(x, y) is:
$$\nabla f(x,y) = [G_x,\; G_y]^{T} = \left[\frac{\partial f}{\partial x},\; \frac{\partial f}{\partial y}\right]^{T}$$
where $G_x = \partial f/\partial x$ is the X-direction gradient component, $G_y = \partial f/\partial y$ is the Y-direction gradient component, and T denotes the transpose.
The magnitude function mag(∇f), denoted g(x, y), is
$$g(x,y) = \sqrt{G_x^2 + G_y^2}$$
and the direction-angle function is
$$\varphi(x,y) = \arctan\!\left(\frac{G_y}{G_x}\right).$$
For a digital image this corresponds to the gradient of a two-dimensional discrete function, the derivatives being approximated by differences:
$$G_x(x,y) = H(x+1,\,y) - H(x-1,\,y)$$
$$G_y(x,y) = H(x,\,y+1) - H(x,\,y-1)$$
where $G_x(x,y)$ is the gradient value of the pixel in the X direction, $G_y(x,y)$ the gradient value in the Y direction, and H(x, y) the gray value of the pixel.
Thus, the gradient value G(x, y) and the gradient direction α(x, y) at the pixel (x, y) are respectively:
$$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}, \qquad \alpha(x,y) = \arctan\!\left(\frac{G_y(x,y)}{G_x(x,y)}\right)$$
In this embodiment, the gradient-map method addresses the difficulty of feature extraction by making the data features more pronounced, which aids the subsequent quality judgment and suits the analysis of complex non-stationary signals.
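The central-difference formulas for $G_x$ and $G_y$ can be sketched as follows; in this sketch the magnitude map stands in for the combined XY-direction map, the border pixels are left at zero for simplicity, and the synthetic ramp image is an assumed test input rather than a real recurrence plot.

```python
import numpy as np

def gradient_maps(H):
    """Central-difference gradients of a grayscale image H:
    Gx = H(x+1, y) - H(x-1, y), Gy = H(x, y+1) - H(x, y-1),
    plus magnitude and direction maps."""
    H = H.astype(float)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    # interior pixels only; borders stay zero
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]     # difference along X (columns)
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]     # difference along Y (rows)
    G = np.hypot(Gx, Gy)                   # gradient magnitude g(x, y)
    alpha = np.arctan2(Gy, Gx)             # gradient direction alpha(x, y)
    return Gx, Gy, G, alpha

# horizontal ramp: Gx is constant in the interior, Gy is zero everywhere
img = np.tile(np.arange(8.0), (8, 1))
Gx, Gy, G, alpha = gradient_maps(img)
print(Gx[4, 4], Gy[4, 4])   # 2.0 0.0
```

The three arrays $G_x$, $G_y$, and G, rescaled to 8-bit images, correspond to the X-, Y-, and XY-direction gradient maps fed to the classifier.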
Step 5: as shown in fig. 2, the VGG16 neural network identifies and classifies the data as follows:
5-1) After a mean-subtraction preprocessing step, the input image enters two convolution layers (conv1-1 and conv1-2); each uses 64 3×3 convolution kernels with stride 1 for feature extraction.
5-2) The feature map is downsampled by a 2×2 max-pooling layer (pool1) to reduce computation and overfitting.
5-3) Two convolution blocks follow, each containing two convolution layers (conv2-1 and conv2-2, conv3-1 and conv3-2), each using 128 3×3 convolution kernels for feature extraction. Each block is followed by a 2×2 max-pooling layer (pool2 and pool3) that further reduces the feature-map size.
5-4) Three convolution blocks follow (conv4-1 through conv4-3 and conv5-1 through conv5-3), each containing three convolution layers, the blocks using 256, 512, and 512 3×3 convolution kernels respectively for feature extraction. Likewise, each block is followed by a 2×2 max-pooling layer (pool4 and pool5).
5-5) The features are mapped and classified by three fully connected layers (FC-4096, FC-4096, and FC-1000). The first two have 4096 hidden units each, and the last outputs the model's prediction.
In this embodiment, the overall VGG16 architecture builds a deep network by stacking many small convolution and pooling layers, enhancing the model's expressive power, while its regular design and parameter sharing keep efficiency and training speed high. The gradient maps, sized 224×224, are trained and classified with the VGG16 deep neural network. The data set contains 760 training samples (380 qualified and 380 unqualified) and 40 test samples (20 qualified and 20 unqualified). The training batch size is set to 16, the test batch size to 2, the number of iterations to 20, and the learning rate to 0.001; test accuracy is reported to 2 decimal places. Experiments show that VGG16 reaches 97.5% accuracy in discriminating the gradient maps, realising online weld-quality recognition.
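The layer walk-through in 5-1) to 5-5) can be checked with a small shape calculation. This is a sketch of the standard VGG16 configuration, not the trained model itself; it confirms the 13 convolution layers and the 512×7×7 feature map that is flattened into the fully connected stack for a 224×224 input.

```python
# VGG16 configuration: numbers are conv output channels, 'M' is a 2x2 max-pool
cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
       512, 512, 512, 'M', 512, 512, 512, 'M']

def vgg16_shapes(size=224):
    """Walk a size x size input through VGG16's conv/pool stack.
    A 3x3 convolution with padding 1 and stride 1 keeps the spatial size;
    each 2x2 max-pool halves it."""
    shapes, channels, convs = [], 3, 0
    for layer in cfg:
        if layer == 'M':
            size //= 2                      # pooling halves height and width
        else:
            channels = layer                # convolution changes channel count only
            convs += 1
        shapes.append((channels, size, size))
    return convs, shapes

convs, shapes = vgg16_shapes()
print(convs)        # 13 convolutional layers
print(shapes[-1])   # (512, 7, 7) -> flattened into FC-4096, FC-4096, FC-1000
```

The 512·7·7 = 25088 flattened features feed the first FC-4096 layer; replacing FC-1000 with a two-unit output would match the qualified/unqualified task of the embodiment.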
It will be readily apparent to those skilled in the art, from the description and practice disclosed herein, that the invention may be modified and adapted in several ways without departing from its principles. Accordingly, modifications or improvements made without departing from the spirit of the invention are also considered within its scope.

Claims (3)

1.一种基于改进型深度卷积的水下结构焊缝在线检测方法,其特征在于,包括以下步骤:1. An online detection method for underwater structure welds based on improved deep convolution, characterized in that it comprises the following steps: S1、使用水听器传感器接收海洋水下结构焊接的声信号;S1. Using a hydrophone sensor to receive acoustic signals of underwater structure welding in the ocean; S2、使用VMD算法对声信号进行去噪处理,从高阶IMF成分中提取微弱焊接声信号;S2, use VMD algorithm to denoise the acoustic signal and extract weak welding acoustic signals from high-order IMF components; S3、使用递归图对有效脉冲进行时空域转变,将一维信号从时域转到相空间域,提供更多潜在焊缝状态特征;S3, using the recursive graph to transform the effective pulse into the time-space domain, converting the one-dimensional signal from the time domain to the phase space domain, and providing more potential weld state characteristics; S4、将递归图进一步生成梯度图,使用梯度图进行特征提取,提取图像X、Y和XY三个方向的特征;S4, further generating a gradient map from the recursive map, using the gradient map to perform feature extraction, and extracting features in three directions of the image X, Y and XY; S5、使用VGG16深度卷积网络对图像样本进行分类,完成焊缝质量在线判定;S5. Use VGG16 deep convolutional network to classify image samples and complete online judgment of weld quality; 在步骤S3中,递归图的关键步骤是进行相空间重构,利用递归图将水下焊接声信号有效脉冲的时间序列数据转换为图像形式,揭示时间序列的内部结构,具体包括如下步骤:In step S3, the key step of the recursive graph is to reconstruct the phase space. The recursive graph is used to convert the time series data of the effective pulses of the underwater welding acoustic signal into an image form to reveal the internal structure of the time series. Specifically, the following steps are included: S31:对于时间序列信号uk(k=1,2,...,n),确定其采样时间间隔为Δt,经过相关理论计算确定合适的嵌入维度m以及延迟时间τ,进而对时间序列进行重构,重构后的动力系统为:xi=[ui,ui+τ,...,ui+(m-1)τ]式中,i=1,2,...,n-(m-1)τ;S31: For the time series signal u k (k=1,2,...,n), determine its sampling time interval as Δt, determine the appropriate embedding dimension m and delay time τ through relevant theoretical calculations, and then reconstruct the time series. 
The reconstructed dynamic system is: x_i = [u_i, u_{i+τ}, ..., u_{i+(m-1)τ}], where i = 1, 2, ..., n-(m-1)τ;

S32: Calculate the distance S_ij between point x_i and point x_j in the reconstructed phase space:

S_ij = ||x_i - x_j||

where i = 1, 2, ..., n-(m-1)τ, j = 1, 2, ..., n-(m-1)τ, and ||·|| denotes the norm;

S33: Calculate the recurrence value R(i,j):

R(i,j) = θ(ε_i - S_ij)

where θ(·) denotes the Heaviside function: θ(·) = 1 when its argument is greater than or equal to 0, and θ(·) = 0 when its argument is less than 0; ε denotes the preset critical distance, which may be a fixed or a variable value.

In step S4, the recurrence plot is further converted into three gradient maps: the X-direction gradient map, the Y-direction gradient map, and the XY-direction gradient map. The change along the X axis is the pixel value to the right of the current pixel (X+1) minus the pixel value to its left (X-1); the change along the Y axis is the pixel value below the current pixel (Y+1) minus the pixel value above it (Y-1). Calculating these two components yields a two-dimensional vector, which is the image gradient at that pixel. The gradient of the image function f(x,y) is:

∇f(x,y) = [G_x, G_y]^T = [∂f/∂x, ∂f/∂y]^T

where G_x denotes the X-direction gradient component, G_y denotes the Y-direction gradient component, and T denotes transposition;

The magnitude function mag(∇f) is expressed as g(x,y):

g(x,y) = mag(∇f) = (G_x^2 + G_y^2)^(1/2)

The direction angle function φ(x,y):

φ(x,y) = arctan(G_y / G_x)

For a digital image, this amounts to taking the gradient of a two-dimensional discrete function, with differences used to approximate the derivatives:

G_x(x,y) = H(x+1,y) - H(x-1,y)
G_y(x,y) = H(x,y+1) - H(x,y-1)

where G_x(x,y) denotes the gradient value of a pixel in the X direction, G_y(x,y) denotes its gradient value in the Y direction, and H(x,y) denotes the grayscale value of the pixel;

Therefore, the gradient value G(x,y) and gradient direction α(x,y) at pixel (x,y) are:

G(x,y) = (G_x(x,y)^2 + G_y(x,y)^2)^(1/2)
α(x,y) = arctan(G_y(x,y) / G_x(x,y))

2. The method for online detection of underwater structure welds based on improved deep convolution according to claim 1, wherein in step S2 a hydrophone is used to collect the acoustic signal of the underwater welding; the acoustic signal is denoised by the VMD algorithm, which decomposes it into multiple intrinsic mode function (IMF) components, from which the effective acoustic-signal pulses are extracted. The VMD algorithm works by solving the constrained variational problem:

min over {u_k}, {ω_k} of  Σ_k || ∂_t[(φ(t) + j/(πt)) * u_k(t)] e^(-jω_k t) ||_2^2   subject to   Σ_k u_k = f(t)

where {u_k} := {u_1, ..., u_K} and {ω_k} := {ω_1, ..., ω_K} are the set of all modes and their center frequencies, K denotes the number of IMF decomposition levels, ∂_t denotes the partial derivative with respect to time, φ(t) denotes the unit impulse function, j is the imaginary unit of the Hilbert transform, t denotes time, * denotes convolution, and f(t) denotes the signal to be decomposed, a set of time-series data;

A penalty factor ρ and a Lagrange multiplier γ are introduced to turn the constrained problem into an unconstrained variational problem; the augmented Lagrangian is:

L({u_k}, {ω_k}, γ) = ρ Σ_k || ∂_t[(φ(t) + j/(πt)) * u_k(t)] e^(-jω_k t) ||_2^2 + || f(t) - Σ_k u_k(t) ||_2^2 + ⟨γ(t), f(t) - Σ_k u_k(t)⟩

The alternating direction method of multipliers is used to solve this problem iteratively; by repeatedly updating u_k, ω_k and γ, the optimal VMD solution is obtained from:

U_k^(n+1)(ω) = [F(ω) - Σ_{i≠k} U_i(ω) + γ(ω)/2] / [1 + 2ρ(ω - ω_k)^2]

ω_k^(n+1) = ∫_0^∞ ω |U_k(ω)|^2 dω / ∫_0^∞ |U_k(ω)|^2 dω

γ^(n+1)(ω) = γ^n(ω) + τ [F(ω) - Σ_k U_k^(n+1)(ω)]

where F(ω), γ(ω) and U_k(ω) are the Fourier transforms of f(t), γ(t) and u_k(t), respectively, n is the number of iterations, and τ is the noise tolerance coefficient.

3. The method for online detection of underwater structure welds based on improved deep convolution according to claim 1, wherein in step S5 the VGG16 network is trained by the backpropagation algorithm, and the network parameters are optimized by gradient descent. The VGG16 convolutional neural network structure includes 13 convolutional layers and 3 fully connected layers.
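The VMD decomposition described above can be sketched in NumPy as follows. This is a minimal illustration of the frequency-domain alternating update (Wiener-filter mode update, center-frequency first moment, dual ascent on the multiplier), not the patented implementation; the defaults for K, the penalty factor rho, and the noise tolerance tau are assumed values.

```python
import numpy as np

def vmd(f, K=3, rho=2000.0, tau=0.1, tol=1e-7, max_iter=500):
    """Minimal sketch of Variational Mode Decomposition. rho is the penalty
    factor and tau the noise tolerance coefficient named in the claim; the
    default values here are illustrative assumptions."""
    f = np.asarray(f, dtype=float)
    N = len(f)
    # mirror-extend the signal to soften boundary effects
    fm = np.concatenate([f[N // 2 - 1::-1], f, f[-1:N // 2 - 1:-1]])
    T = len(fm)
    omega_axis = np.arange(T) / T - 0.5            # centred normalized frequencies
    F_plus = np.fft.fftshift(np.fft.fft(fm))
    F_plus[: T // 2] = 0                           # keep only the positive half-spectrum
    U = np.zeros((K, T), dtype=complex)            # mode spectra U_k(omega)
    omega = np.linspace(0.0, 0.25, K)              # initial centre frequencies
    lam = np.zeros(T, dtype=complex)               # Lagrange multiplier gamma(omega)
    for _ in range(max_iter):
        U_prev = U.copy()
        for k in range(K):
            others = U.sum(axis=0) - U[k]
            # Wiener-filter update: U_k = (F - others + gamma/2) / (1 + 2*rho*(w - w_k)^2)
            U[k] = (F_plus - others + lam / 2) / (1 + 2 * rho * (omega_axis - omega[k]) ** 2)
            power = np.abs(U[k][T // 2:]) ** 2
            # centre-frequency update: first moment of the mode's power spectrum
            omega[k] = np.dot(omega_axis[T // 2:], power) / (power.sum() + 1e-12)
        # dual ascent: gamma <- gamma + tau * (F - sum_k U_k)
        lam = lam + tau * (F_plus - U.sum(axis=0))
        change = np.sum(np.abs(U - U_prev) ** 2) / (np.sum(np.abs(U_prev) ** 2) + 1e-12)
        if change < tol:
            break
    # back to the time domain; U_k holds only positive frequencies, so the
    # real part is doubled (assumes a zero-mean input signal)
    modes = 2 * np.real(np.fft.ifft(np.fft.ifftshift(U, axes=1), axis=1))
    return modes[:, N // 2 : N // 2 + N], omega
```

On a two-tone test signal, the recovered center frequencies settle near the true normalized frequencies and the sum of the modes reconstructs the signal, which is the property the denoising step relies on.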
The deeper network structure and the smaller convolution kernels and pooling sampling domains allow more abstract features to be learned. All convolutional layers use the same kernel size, stride and padding; by stacking several small kernels, a larger receptive field is simulated while the number of parameters is reduced. Each convolutional layer extracts features with its kernels, the feature maps are then downsampled by max-pooling layers, and finally the fully connected layers map and classify the features and output the predicted probability of each category.

The specific steps are as follows:

S51: After mean-subtraction preprocessing, the input image passes through two convolutional layers, each using 64 kernels of size 3×3 for feature extraction;

S52: The feature map is downsampled by a 2×2 max-pooling layer;

S53: Two convolutional blocks, each containing two convolutional layers, with each layer using 128 kernels of size 3×3 for feature extraction; each block is followed by a 2×2 max-pooling layer that further reduces the feature-map size;

S54: Three convolutional blocks, each containing three convolutional layers, using 256, 512 and 512 kernels of size 3×3, respectively, for feature extraction;

S55: The features are mapped and classified by three fully connected layers.
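The recurrence-matrix construction of steps S31–S33 and the gradient maps of step S4 can be sketched in NumPy as follows. The threshold choice eps = 0.1·max(S) is an assumption for illustration (the claim allows a fixed or variable critical distance), and treating the XY-direction map as the gradient magnitude g(x,y) follows the magnitude formula given in the claim.

```python
import numpy as np

def recurrence_plot(u, m=3, tau=2, eps=None):
    """Steps S31-S33: delay-embed x_i = [u_i, u_{i+tau}, ..., u_{i+(m-1)tau}],
    compute pairwise distances S_ij = ||x_i - x_j||, then threshold with the
    Heaviside function, R(i,j) = theta(eps - S_ij)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    rows = n - (m - 1) * tau
    X = np.stack([u[i : i + (m - 1) * tau + 1 : tau] for i in range(rows)])
    S = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    if eps is None:
        eps = 0.1 * S.max()   # assumed fixed critical distance for illustration
    return (eps - S >= 0).astype(np.uint8)

def gradient_maps(img):
    """Step S4: central differences G_x = H(x+1,y) - H(x-1,y) and
    G_y = H(x,y+1) - H(x,y-1), plus the magnitude map g = sqrt(Gx^2 + Gy^2)
    taken here as the XY-direction gradient map."""
    H = np.asarray(img, dtype=float)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]   # x runs along columns
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]   # y runs along rows
    return Gx, Gy, np.hypot(Gx, Gy)
```

The recurrence matrix is symmetric with an all-ones main diagonal (S_ii = 0), and border pixels of the gradient maps are left at zero since the central difference is undefined there.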
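For reference, the canonical 13-convolution/3-fully-connected VGG16 layout that steps S51–S55 summarize can be written as a configuration list and its feature-map sizes traced; the 224×224 input size is the conventional VGG16 assumption, not a value stated in the claims.

```python
# Canonical VGG16 layout (13 conv layers, 3 FC layers): numbers are the
# output-channel counts of 3x3 convolutions; 'M' marks a 2x2 max pool.
VGG16_CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
             512, 512, 512, 'M', 512, 512, 512, 'M']

def trace_shapes(cfg, h=224, w=224):
    """Trace (channels, height, width) after each conv layer: a 3x3 conv
    with stride 1 and padding 1 preserves spatial size, each 2x2 max pool
    halves it."""
    shapes = []
    for v in cfg:
        if v == 'M':
            h, w = h // 2, w // 2
        else:
            shapes.append((v, h, w))
    return shapes, (h, w)
```

Tracing the configuration confirms the claim's layer count and shows the 7×7 feature map that enters the three fully connected layers.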
CN202410045063.2A 2024-01-12 2024-01-12 An online detection method for underwater structure welds based on improved deep convolution Active CN118130608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410045063.2A CN118130608B (en) 2024-01-12 2024-01-12 An online detection method for underwater structure welds based on improved deep convolution

Publications (2)

Publication Number Publication Date
CN118130608A (en) 2024-06-04
CN118130608B (en) 2025-02-18

Family

ID=91233370

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111664365A (en) * 2020-06-07 2020-09-15 东北石油大学 Oil and gas pipeline leakage detection method based on improved VMD and 1DCNN
CN114255220A (en) * 2021-12-21 2022-03-29 徐州徐工挖掘机械有限公司 Weld quality detection method based on Transformer neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740387A (en) * 2023-05-20 2023-09-12 西北工业大学 Underwater noise identification method based on continuous wavelet transformation and improved residual neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant