CN112462346B - Ground penetrating radar subgrade disease target detection method based on convolutional neural network - Google Patents
- Publication number: CN112462346B (application CN202011357009.XA)
- Authority: CN (China)
- Prior art keywords: data, ground penetrating radar, scan, target
- Legal status: Active (an assumption from the register, not a legal conclusion)
Classifications

- G01S7/417 — Analysis of echo signal for target characterisation involving the use of neural networks
- G01S7/2923 — Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods
- G01S13/885 — Radar or analogous systems specially adapted for ground probing
- G01S13/89 — Radar or analogous systems specially adapted for mapping or imaging
- G06N3/045 — Neural network architectures; combinations of networks
- E01C23/01 — Apparatus for measuring, indicating, or recording the surface configuration of existing surfacing
- Y02A90/10 — Information and communication technologies supporting adaptation to climate change
Description
Technical Field

The present invention relates to the field of ground penetrating radar (GPR) signal processing, and in particular to a method for detecting subgrade disease targets in ground penetrating radar data based on a convolutional neural network.
Background Art

The subgrade is critical to highways and railways. Construction conditions, the geographical environment, climate, vehicle traffic and other factors give rise to many kinds of road diseases. Surface and shallow diseases of highways and railways are easy to observe and detect, but diseases at the subgrade are not; if they are not treated promptly and effectively, they impair the use of the highway or railway and seriously threaten the safety of drivers. Ground penetrating radar, a non-destructive detection technology with high accuracy, high efficiency and strong adaptability, has replaced earlier destructive and non-destructive inspection methods and is widely used in subgrade disease detection projects.
In a ground penetrating radar system, the transmitting antenna emits short-pulse electromagnetic waves. The waves travel through the surface and the underground media and are reflected at interfaces and targets with differing electrical properties; the receiving antenna records the reflected echo as an A-Scan signal. As the transmitting/receiving antenna pair moves along the highway or railway survey line at fixed intervals, the A-Scan signals received at the successive positions form the B-Scan image data.
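The A-Scan/B-Scan relationship described above can be sketched in a few lines of NumPy (an illustration only; the trace length and trace count are arbitrary):

```python
import numpy as np

def assemble_bscan(a_scans):
    """Stack N A-Scan traces (each with M time samples) into an M x N B-Scan matrix.

    a_scans: list of N 1-D arrays, one per antenna position along the survey line.
    Column j of the result is the trace recorded at the j-th position.
    """
    return np.stack(a_scans, axis=1)

# Illustration with synthetic traces (M = 256 samples, N = 4 antenna positions).
traces = [np.random.randn(256) for _ in range(4)]
bscan = assemble_bscan(traces)
```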
Early ground penetrating radar target detection methods were based on A-Scan signals. Using the distribution of different disease targets and stratum structures in the time profile and the frequency domain (chiefly the energy distribution, waveform characteristics, amplitude and phase of the different targets, and the cross-correlation between target signals), features of the different targets were extracted manually with computational tools such as the Fourier transform and the wavelet transform, and then analysed for identification and localisation. Although these methods can detect targets, they depend on manual analysis and identification: they require technicians with rich experience and prior knowledge who have mastered a large body of subgrade disease target structure characteristics, and they involve considerable subjectivity. They also consume a great deal of effort and time, so detection efficiency is low; and because the manual procedure yields few feature parameters and feature representations, the methods lack generalization ability, resulting in low detection accuracy and impairing the judgment of subgrade diseases.
With the development of machine learning in recent years, and drawing on how different targets appear in ground penetrating radar B-Scan images (for example, circular targets in different media present hyperbolic signatures of differing polarity, while square targets present hyperbolas on both sides with parallel lines in between), machine learning methods based on such shape and polarity characteristics have been used to detect subgrade disease targets automatically. Although these methods remove the need for manual feature extraction, limitations in algorithm design still prevent accurate detection in complex subgrade environments. Efficient and accurate identification and localisation of subgrade diseases in complex environments is therefore of great significance for highway and railway maintenance.
Summary of the Invention

The purpose of the present invention is to overcome the shortcomings of the above technology and to provide a ground penetrating radar subgrade disease target detection method based on a convolutional neural network that does not depend on human identification, achieves fast and accurate detection of different types of subgrade disease targets, adapts to different complex subgrade environments, and generalizes well.
To achieve the above object, the present invention adopts the following technical solution.

A ground penetrating radar subgrade disease target detection method based on a convolutional neural network is carried out in the following steps:
Step 1: Acquire raw ground penetrating radar image data

A ground penetrating radar system surveys an actual subgrade to collect real B-Scan image data, and the FDTD-based gprMax software performs forward simulation of the three common types of subgrade disease to generate simulated B-Scan images.
Step 2: Preprocess the ground penetrating radar data

The collected image data is normalized, de-biased, mean-filtered to remove the direct wave, and processed with automatic gain; the simulated image data is mean-filtered to remove the direct wave and processed with automatic gain. The corresponding preprocessed two-dimensional image data is obtained, and the preprocessed images together with the raw image data from step 1 are then scaled to a uniform pixel size.
Step 3: Label the targets in the ground penetrating radar images

The labelImg software is used to label the targets in the simulated and collected ground penetrating radar images, storing the target category, coordinates and other information in .xml files.
Step 4: Build a PASCAL VOC dataset

The ground penetrating radar image data in .jpg format and the label information in .xml format are organized into a PASCAL VOC dataset, which is divided into a training set, a validation set and a test set according to a fixed ratio.
Step 5: Set the anchor box parameters dynamically

The numbers of labelled target bounding boxes in the training set at different aspect ratios are counted, and the aspect ratios greater than a set threshold, together with their reciprocals, are selected as the initial values of the preset anchor box aspect ratio parameters for network training.
Step 6: Obtain the convolutional neural network model

The constructed Cascade R-CNN model is trained on the training set to obtain a network model that fits the data, and the network hyperparameters are fine-tuned on the validation set generated in step 4 to obtain the final convolutional neural network model.
Step 7: Evaluate the performance of the convolutional neural network model

The test set generated in step 4 is used to evaluate the model, with recall and average precision as the evaluation indicators.
Step 8: Detect subgrade disease targets in ground penetrating radar data

The ground penetrating radar B-Scan data is fed in .jpg format into the trained Cascade R-CNN model for detection, which outputs the category, confidence and detection box coordinates of any targets present.
The present invention is further characterized in that:

In step 1, the raw ground penetrating radar image data is acquired as follows:

(1) Acquiring collected ground penetrating radar image data

A ground penetrating radar system surveys actual subgrades at different locations and collects ground penetrating radar images, displayed in B-Scan form.

(2) Acquiring simulated ground penetrating radar image data

The FDTD-based gprMax software performs forward simulation of the three main types of subgrade disease to generate simulated B-Scan images. In gprMax, a subgrade model and three disease models are constructed: the subgrade consists of a three-layer structure (surface layer, base layer and cushion layer), and the three diseases are cavities, voids and faults.

The size, shape and burial depth of the different disease targets and the center frequency of the transmitting antenna are varied, and the transmitting/receiving antenna pair is moved along the survey line with a fixed step, simulating subgrade disease ground penetrating radar data images displayed in B-Scan form.
The present invention is further characterized in that:

In step 2, the ground penetrating radar data is preprocessed as follows:

The collected image data is normalized, de-biased, mean-filtered to remove the direct wave, and processed with automatic gain; the simulated image data is mean-filtered to remove the direct wave and processed with automatic gain.
(1) Normalization of collected ground penetrating radar data

The two-dimensional B-Scan image is normalized so that all sample values fall in the range [-1, 1]:

B′ij = 2(Bij − Bmin)/(Bmax − Bmin) − 1

where the two-dimensional B-Scan data B (M × N) consists of N A-Scan traces, M is the number of samples per trace, N is the total number of traces, Bmin and Bmax are the minimum and maximum values of the image matrix B, and B′ij is the normalized sample value.
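A minimal NumPy sketch of this step, assuming the standard min-max mapping to [-1, 1] described by the symbols above:

```python
import numpy as np

def normalize_bscan(B):
    """Min-max normalize an M x N B-Scan matrix to the range [-1, 1]:
    B'_ij = 2 * (B_ij - B_min) / (B_max - B_min) - 1
    """
    b_min, b_max = B.min(), B.max()
    return 2.0 * (B - b_min) / (b_max - b_min) - 1.0

# Tiny example matrix: minimum maps to -1, maximum to +1.
B = np.array([[0.0, 5.0], [10.0, 2.5]])
Bn = normalize_bscan(B)
```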
(2) Zero-offset removal of collected ground penetrating radar data

The zero offset (DC bias) is removed from the two-dimensional B-Scan image by subtracting each trace's mean from that trace:

x′ij = xij − (1/M) Σk=1..M xkj

where xij is the i-th sample of the j-th A-Scan trace Xj = [xj1, xj2, ..., xjM]T and x′ij is the sample value after zero-offset removal, yielding the de-biased ground penetrating radar data.
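The zero-offset removal amounts to subtracting the mean of each column (trace) of the B-Scan matrix, for example:

```python
import numpy as np

def remove_dc_offset(B):
    """Remove the zero offset (DC bias) of each A-Scan trace.

    B is M x N (M samples, N traces); the mean of each column (one trace)
    is subtracted from that column, so every trace becomes zero-mean.
    """
    return B - B.mean(axis=0, keepdims=True)

# Column means are 2.0 and 6.0; after removal each column averages to zero.
B = np.array([[1.0, 4.0], [3.0, 8.0]])
Bd = remove_dc_offset(B)
```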
(3) Mean-filter processing of ground penetrating radar data

The direct wave is removed from the two-dimensional B-Scan image by mean filtering, as follows: from each A-Scan trace of the B-Scan data, sample by sample, the mean of that sample over all traces is subtracted:

x′ij = xij − (1/N) Σk=1..N xik

where xij is the i-th sample of the j-th A-Scan trace Xj = [xj1, xj2, ..., xjM]T and x′ij is the sample value after direct-wave removal.
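This is the same mean subtraction as above but along the other axis: each row (time sample) of the B-Scan matrix is reduced by its mean across traces, which suppresses horizontally coherent energy such as the direct wave:

```python
import numpy as np

def remove_direct_wave(B):
    """Mean-filter removal of the direct wave.

    For each time sample i (row of the M x N matrix), subtract the mean of
    that sample across all N traces.
    """
    return B - B.mean(axis=1, keepdims=True)

# Row means are 2.0 and 2.0; constant (direct-wave-like) energy is removed.
B = np.array([[1.0, 3.0], [2.0, 2.0]])
Bf = remove_direct_wave(B)
```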
(4) Automatic gain processing of ground penetrating radar data

Automatic gain is applied to the two-dimensional B-Scan image to amplify the signal, as follows.

Each A-Scan trace is divided into T time windows with 50% overlap between adjacent windows. The gain value corresponding to the starting point of each window is computed from the average amplitude of the samples within that window, and the gains between adjacent window starts are obtained by linear interpolation. The average amplitude of the t-th window of the j-th trace is

Atj = (1/W) Σi∈window |xij|

where each window has size W ≈ 2M/(T+1), rounded down, and xij is the i-th sample of the j-th A-Scan trace Xj = [xj1, xj2, ..., xjM]T.

The gain value Gtj of each time window is computed from the window's average amplitude Atj.

The gain values of the samples within a window are obtained by linear interpolation:

Gsj = Gtj + ((s − t)/W)(Gt+W,j − Gtj)

where Gsj is the gain of each sample in the window [t, t+W] of the j-th A-Scan trace, Gtj is the gain of the t-th window, Gt+W,j is the gain of the (t+1)-th window, and s is the sample index within the window.
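The automatic-gain step above can be sketched as follows. Note the hedge: the patent does not show the exact formula for the per-window gain Gtj, so this sketch uses a common AGC choice, gain = target / mean|amplitude| per window, which is an assumption, not the inventors' formula:

```python
import numpy as np

def agc(trace, T=8, target=1.0, eps=1e-12):
    """Automatic gain control for one A-Scan trace (a sketch).

    The trace is split into T windows with 50% overlap; one gain value is
    assigned to each window start (here: target / mean absolute amplitude,
    an assumed but common AGC rule) and gains are linearly interpolated
    between consecutive window starts.
    """
    M = len(trace)
    W = max(2, 2 * M // (T + 1))   # window size giving ~T windows at 50% overlap
    step = W // 2
    starts = np.arange(0, M - W + 1, step)
    window_gain = np.array(
        [target / (np.abs(trace[s:s + W]).mean() + eps) for s in starts]
    )
    # Linear interpolation of gains from the window starts to every sample.
    gains = np.interp(np.arange(M), starts, window_gain)
    return trace * gains

# A decaying synthetic trace: late samples are boosted far more than early ones.
trace = np.exp(-np.arange(100) / 20.0)
out = agc(trace)
```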
This processing yields the preprocessed versions of the simulated and collected ground penetrating radar data, which are then scaled, together with the raw image data from step 1, to a uniform size of 375 × 500 pixels.
The present invention is further characterized in that:

In step 4, the PASCAL VOC dataset is built as follows:

The raw ground penetrating radar images acquired in step 1, the preprocessed images in .jpg format from step 2, and the target labels stored as .xml files in step 3 are organized in the standard PASCAL VOC dataset format and divided into a training set, a validation set and a test set in the ratio 8:1:1.
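The 8:1:1 split can be sketched as below (a minimal illustration; the PASCAL VOC directory layout and file naming are assumed to be handled elsewhere):

```python
import random

def split_dataset(image_ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle image IDs and split them into train/val/test in an 8:1:1 ratio."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)   # fixed seed for a reproducible split
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_dataset([f"{i:06d}" for i in range(100)])
```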
The present invention is further characterized in that:

In step 5, the anchor box parameters are set dynamically as follows:

The aspect ratios of the manually labelled target bounding boxes in the training set are tallied, the number of boxes at each aspect ratio is counted, and the aspect ratios greater than the threshold 0.65, together with their reciprocals, are selected as the initial values of the anchor box aspect ratio parameters for network training.
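A sketch of this selection rule follows. The boxes are (xmin, ymin, xmax, ymax) tuples as would be read from the .xml annotations; the rounding granularity used to group near-identical ratios is an assumption:

```python
import numpy as np

def anchor_aspect_ratios(boxes, threshold=0.65):
    """Select initial anchor aspect ratios from labelled bounding boxes.

    Ratios (width / height) are rounded so near-identical boxes are counted
    together; every observed ratio above the threshold is kept together with
    its reciprocal, per the dynamic-setting rule in the text.
    """
    boxes = np.asarray(boxes, dtype=float)
    ratios = (boxes[:, 2] - boxes[:, 0]) / (boxes[:, 3] - boxes[:, 1])
    values = np.unique(np.round(ratios, 1))
    selected = set()
    for r in values[values > threshold]:
        selected.add(round(float(r), 2))
        selected.add(round(1.0 / float(r), 2))
    return sorted(selected)

# Boxes with ratios 1.0, 2.0 and 0.5: ratios above 0.65 are 1.0 and 2.0,
# and the reciprocal of 2.0 brings 0.5 back in.
ratios_out = anchor_aspect_ratios([(0, 0, 10, 10), (0, 0, 20, 10), (0, 0, 10, 20)])
```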
The present invention is further characterized in that:

In step 6, the convolutional neural network model structure is obtained as follows:

The FPN multi-scale feature mapping module of the Cascade R-CNN model is improved. On top of the original P3-P6 fused feature mapping layers, a P2 fusion layer is added to detect small targets and a P7 fusion layer is added to detect larger targets, where P2-P7 denote the output layers of the fused feature maps of stages 2 to 7. After the output feature maps of the five FPN stages are reduced to a uniform 256 channels by 1×1 convolution, a further 1×1 convolution kernel and a ReLU activation function are added to each to strengthen the nonlinear expressive capacity of the network. The IoU thresholds of the three cascade stages are set to 0.5, 0.6 and 0.7 respectively.

When the Cascade R-CNN model is trained with the stochastic gradient descent algorithm, the total loss is the weighted sum of the classification loss and the regression loss:

L(x, g) = Lcls(h(x), y) + λLreg(f(x, b), g)

where Lcls(·) is the classification loss function (cross-entropy), Lreg(·) is the regression loss function (smooth L1), h(x) is the classifier, f(x, b) is the regressor, x is an input image region during training, y is the true class label, λ is the weighting coefficient, b is the predicted bounding box and g is the ground-truth bounding box.
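A single-sample numeric sketch of this loss, with cross-entropy and smooth L1 written out explicitly (real training averages over a batch and normalizes the regression term; those details are omitted here):

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss, summed over the box coordinates."""
    d = np.abs(pred - target)
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).sum()

def cross_entropy(probs, label):
    """Cross-entropy classification loss for one sample (probs sum to 1)."""
    return -np.log(probs[label])

def detection_loss(probs, label, pred_box, gt_box, lam=1.0):
    """Total loss L = L_cls + lambda * L_reg, as in the formula above."""
    return cross_entropy(probs, label) + lam * smooth_l1(pred_box, gt_box)

# Correct class predicted with probability 0.75 and a perfect box prediction:
# the regression term is zero and the loss reduces to -log(0.75).
probs = np.array([0.25, 0.75])
box = np.array([0.1, 0.2, 0.8, 0.9])
loss = detection_loss(probs, 1, box, box)
```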
The initial learning rate for training the network model is 0.0025 with a step learning-rate schedule; the maximum number of epochs is set to 50, and the learning rate is multiplied by 0.1 at the 38th and the 48th epoch. The network model generated after training is then fine-tuned on the validation set generated in step 4 to adjust the hyperparameters and produce a model that better fits the data.
The present invention is further characterized in that:

In step 7, the performance of the convolutional neural network model is evaluated as follows:

The test set generated in step 4 is used to evaluate the model, with recall and average precision (AP) as the evaluation indicators.

Recall is computed as

Recall = TP / (TP + FN)

where TP (true positives) is the number of samples predicted positive that are actually positive, and FN (false negatives) is the number of samples predicted negative that are actually positive.

Average precision is computed as

AP = (1/M) Σk P(k)

where the sum runs over the true positive detections of a class ranked by confidence, P(k) = TP(k)/(TP(k) + FP(k)) is the precision at rank k, FP (false positives) is the number of samples predicted positive that are actually negative, and M is the number of positive samples of the class.
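The two metrics can be sketched as follows. The AP variant shown (precision accumulated at each true positive in score order, averaged over the M ground-truth positives) is a common VOC-style formulation and is assumed, since the source does not show the exact formula:

```python
import numpy as np

def recall(tp, fn):
    """Recall = TP / (TP + FN)."""
    return tp / (tp + fn)

def average_precision(scores, is_tp, num_positives):
    """VOC-style average precision (a sketch).

    scores: detection confidences; is_tp: 1 if the detection matched a
    ground-truth box, else 0; num_positives: number M of ground-truth
    positives of the class.
    """
    order = np.argsort(-np.asarray(scores))        # rank by descending score
    tp_flags = np.asarray(is_tp)[order].astype(bool)
    tp_cum = np.cumsum(tp_flags)
    ranks = np.arange(1, len(tp_flags) + 1)
    precisions = tp_cum[tp_flags] / ranks[tp_flags]  # precision at each TP
    return precisions.sum() / num_positives

# 8 of 10 actual positives detected; AP over three ranked detections.
r = recall(8, 2)
ap = average_precision([0.9, 0.8, 0.7], [1, 0, 1], num_positives=2)
```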
Compared with the prior art, the present invention has the following beneficial technical effects:

1. The present invention uses the gprMax software to forward-simulate subgrade diseases and a ground penetrating radar system to survey actual subgrade conditions, obtaining B-Scan data images that satisfy the large-dataset requirement of convolutional neural network training and carry rich target feature information.

2. The present invention uses mean filtering to remove the direct wave quickly and effectively, with strong real-time performance; automatic gain processing effectively amplifies the target signal; and multiple preprocessing methods expand the dataset and enrich the target features.

3. The present invention uses a convolutional neural network to detect ground penetrating radar subgrade disease targets automatically, without depending on manual identification and processing, reducing labor costs and the data-processing burden and saving manpower and resources.

4. The present invention uses a deep convolutional neural network to detect ground penetrating radar subgrade disease targets with high efficiency, accurate detection and strong generalization ability, adapting to different complex subgrade environments.

5. On the basis of the FPN module structure of the original Cascade R-CNN model, the present invention adds a 1×1 convolution kernel and an activation function to increase the nonlinear expressive capacity of the network while limiting computation and complexity, and adds a P2 convolution layer to detect small targets, achieving accurate detection of targets at multiple scales.

6. The present invention sets the anchor box parameters dynamically, without manual tuning, adapting to a variety of target bounding box labelling situations.
Brief Description of the Drawings

FIG. 1 is a flow chart of the convolutional-neural-network-based ground penetrating radar subgrade disease target detection method;

FIG. 2 shows raw B-Scan images of some ground penetrating radar subgrade disease targets;

FIG. 3 shows the structure of the Cascade R-CNN network model;

FIG. 4 shows the structure of the FPN module of the Cascade R-CNN model in step 6;

FIG. 5 shows detection results for some ground penetrating radar subgrade disease targets.
Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings.

As shown in FIG. 1, the convolutional-neural-network-based ground penetrating radar subgrade disease target detection method of the present invention proceeds in the following steps:
Step 1: Acquire raw ground penetrating radar image data

A ground penetrating radar system surveys an actual subgrade to collect real B-Scan image data, and the FDTD-based gprMax software performs forward simulation of the three common types of subgrade disease to generate simulated B-Scan images.

Step 2: Preprocess the ground penetrating radar data

The collected image data is normalized, de-biased, mean-filtered to remove the direct wave, and processed with automatic gain; the simulated image data is mean-filtered to remove the direct wave and processed with automatic gain. The corresponding preprocessed two-dimensional image data is obtained, and the preprocessed images together with the raw image data from step 1 are then scaled to a uniform pixel size.

Step 3: Label the targets in the ground penetrating radar images

The labelImg software is used to label the targets in the simulated and collected ground penetrating radar images, storing the target category, coordinates and other information in .xml files.

Step 4: Build a PASCAL VOC dataset

The ground penetrating radar image data in .jpg format and the label information in .xml format are organized into a PASCAL VOC dataset, which is divided into a training set, a validation set and a test set according to a fixed ratio.

Step 5: Set the anchor box parameters dynamically

The numbers of labelled target bounding boxes in the training set at different aspect ratios are counted, and the aspect ratios greater than a set threshold, together with their reciprocals, are selected as the initial values of the preset anchor box aspect ratio parameters for network training.

Step 6: Obtain the convolutional neural network model

The constructed Cascade R-CNN model is trained on the training set to obtain a network model that fits the data, and the network hyperparameters are fine-tuned on the validation set generated in step 4 to obtain the final convolutional neural network model.

Step 7: Evaluate the performance of the convolutional neural network model

The test set generated in step 4 is used to evaluate the model, with recall and average precision as the evaluation indicators.

Step 8: Detect subgrade disease targets in ground penetrating radar data

The ground penetrating radar B-Scan data is fed in .jpg format into the trained Cascade R-CNN model for detection, which outputs the category, confidence and detection box coordinates of any targets present.
The acquisition of raw ground penetrating radar image data in step 1 is further divided into acquiring actually collected data and acquiring simulated data.

For the actually collected subgrade disease target data, a ground penetrating radar acquisition system surveys subgrades at different locations in the field and collects ground penetrating radar images, displayed in B-Scan form.
对于探地雷达路基病害目标仿真数据,采用基于FDTD的gprMax软件对路基中主要的3种病害类型进行正演模拟生成探地雷达B-Scan图像。gprMax软件分别构建道路模型和3种病害目标模型。仿真模型主体宽度为10m、高度为3m,由于公路、铁路由面层、基层和底基层三层结构构成,面层主要有沥青和混泥土等组成、基层主要有混合土组成、底基层主要有沙石等组成,因此厚度分别设置为20cm、30cm和2.5m,相对介电常数分别为4、9和12,电导率分别为0.05、0.05和0.1,3种病害包括空洞、脱空和断层,将其放置于底基层中的不同位置。For the simulation data of GPR roadbed disease targets, the FDTD-based gprMax software was used to perform forward simulation on the three main types of diseases in the roadbed to generate GPR B-Scan images. The gprMax software constructed the road model and the three disease target models respectively. The simulation model has a width of 10m and a height of 3m. Since the highway and railway are composed of three layers of surface layer, base layer and subbase layer, the surface layer is mainly composed of asphalt and concrete, the base layer is mainly composed of mixed soil, and the subbase layer is mainly composed of sand and gravel, so the thickness is set to 20cm, 30cm and 2.5m respectively, the relative dielectric constants are 4, 9 and 12 respectively, and the conductivity is 0.05, 0.05 and 0.1 respectively. The three diseases include voids, voids and faults, which are placed at different positions in the subbase layer.
调整不同类型病害目标的大小、形状和埋藏深度,目标数据每10个数据为一组,每组中目标所处位置不同,组间目标大小不同,并且设置发射天线的中心频率分别为300MHz、900MHz和2GHz,仿真出探地雷达路基病害目标数据图像,以B-Scan形式成像显示。The size, shape and burial depth of different types of disease targets are adjusted. The target data are grouped into groups of 10 data. The targets in each group are at different locations and have different sizes. The center frequencies of the transmitting antennas are set to 300 MHz, 900 MHz and 2 GHz, respectively. The ground penetrating radar roadbed disease target data images are simulated and displayed in the form of B-Scan.
In step 2, the ground penetrating radar data are preprocessed: the collected image data undergo normalization, zero-offset removal, mean-filter direct-wave removal and automatic gain, while the simulated image data undergo mean-filter direct-wave removal and automatic gain only.
(1) Normalization of collected ground penetrating radar data
The two-dimensional B-Scan image is normalized so that every sampling point takes a value in the range [-1, 1], which simplifies the subsequent preprocessing operations. The calculation is:
B′ij = 2 (Bij − Bmin) / (Bmax − Bmin) − 1
where the two-dimensional B-Scan data B (M×N) are composed of N traces of A-Scan signal data, M is the number of sampling points per trace, N is the total number of traces, Bmin and Bmax are the minimum and maximum values of the image matrix B, and B′ij is the normalized value of sampling point (i, j).
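As a minimal NumPy sketch of this min-max normalization (the function name and array layout are assumptions, not from the patent):

```python
import numpy as np

def normalize_bscan(B):
    """Map a B-Scan matrix (M sampling points x N traces) onto [-1, 1]."""
    B = np.asarray(B, dtype=float)
    Bmin, Bmax = B.min(), B.max()
    # 2*(B - Bmin)/(Bmax - Bmin) lies in [0, 2]; subtracting 1 gives [-1, 1]
    return 2.0 * (B - Bmin) / (Bmax - Bmin) - 1.0
```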
(2) Zero-offset removal for collected ground penetrating radar data
To give every A-Scan trace of the ground penetrating radar data a zero mean, ensuring the A-Scan waveform carries no DC offset, zero-offset removal is applied to the two-dimensional B-Scan image: the mean of each A-Scan trace is computed and subtracted from every sampling point of that trace:
x′ij = xij − (1/M) Σ(k=1..M) xkj
where xij is the i-th sampling point of the j-th A-Scan trace Xj = [x1j, x2j, ..., xMj]T and x′ij is the sampling point value after zero-offset removal, giving the zero-offset-corrected ground penetrating radar data.
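In NumPy, the per-trace mean removal above reduces to one broadcast subtraction (names assumed):

```python
import numpy as np

def remove_dc_offset(B):
    """Zero-offset removal: subtract each trace's (column's) mean so every
    A-Scan trace of the (M x N) B-Scan has zero mean."""
    B = np.asarray(B, dtype=float)
    return B - B.mean(axis=0, keepdims=True)  # mean over the M samples of each trace
```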
(3) Mean filtering of ground penetrating radar data
Because the direct wave is strong and nearly stationary, it masks the true target signals and appears as a horizontal line in the B-Scan image. It is therefore removed from the two-dimensional B-Scan image by mean filtering, as follows:
At each sampling point, the mean of that sampling point over all A-Scan traces is subtracted from every A-Scan trace of the B-Scan data:
x′ij = xij − (1/N) Σ(n=1..N) xin
where xij is the i-th sampling point of the j-th A-Scan trace Xj = [x1j, x2j, ..., xMj]T and x′ij is the sampling point value after direct-wave removal.
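The trace-averaged subtraction can likewise be sketched in NumPy (names assumed):

```python
import numpy as np

def remove_direct_wave(B):
    """Mean-filter direct-wave removal: at each sampling point (row) subtract
    the mean of that row over all N traces, cancelling components such as the
    direct wave that are (nearly) identical from trace to trace."""
    B = np.asarray(B, dtype=float)
    return B - B.mean(axis=1, keepdims=True)  # mean over the N traces at each sample
```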
(4) Automatic gain processing of ground penetrating radar data
Disease targets lie at different depths. When probing deeper targets, the longer two-way travel time weakens the signal, so the target cannot be observed by eye in the B-Scan image. Automatic gain is therefore applied to the two-dimensional B-Scan image to amplify and equalize the signal, as follows:
Each A-Scan trace is divided into T time windows with 50% overlap between adjacent windows. The gain anchored at the starting point of each window is computed from the average amplitude of the sampling points within that window; to keep the image free of distortion, the gain of every sampling point between adjacent window starts is assigned by linear interpolation.
Each time window has length W = ⌊2M/(T+1)⌋, where ⌊·⌋ denotes rounding down, and the average amplitude of the t-th window of the j-th trace is Atj, with xij denoting the i-th sampling point of the j-th A-Scan trace Xj = [x1j, x2j, ..., xMj]T.
The gain value Gtj anchored at the starting point of each time window is computed from Atj, such that a window with smaller average amplitude receives a larger gain.
The gain of each sampling point within a time window is obtained by linear interpolation:
Gsj = Gtj + (s − t)(Gt+W,j − Gtj) / W
where Gsj is the gain of sampling point s in the time window [t, t+W] of the j-th trace, Gtj is the gain of the t-th window, Gt+W,j is the gain of the (t+1)-th window, and s is the sampling point index within the window. Shallow targets have strong signals and correspondingly small gains; deep targets have weak signals and correspondingly large gains.
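The windowed gain with linear interpolation can be sketched as follows. The exact gain formula is not given in the text, so this sketch ASSUMES the gain is the reciprocal of the window's mean absolute amplitude, which is consistent with weak (deep) windows receiving larger gains:

```python
import numpy as np

def auto_gain_trace(x, T=8, eps=1e-9):
    """Automatic gain control for one A-Scan trace (sketch).

    The trace of M samples is split into T windows of length W = 2M // (T+1)
    whose starts are W/2 apart (50% overlap). The gain anchored at each window
    start is ASSUMED to be 1 / (mean absolute amplitude of the window); gains
    between anchors are linearly interpolated, as in the patent text.
    """
    x = np.asarray(x, dtype=float)
    M = len(x)
    W = (2 * M) // (T + 1)                  # window length with 50% overlap
    starts = [t * W // 2 for t in range(T)]
    anchors = []
    for t0 in starts:
        win = x[t0:t0 + W]
        A = np.mean(np.abs(win)) + eps      # average amplitude of the window
        anchors.append(1.0 / A)             # assumed: gain inversely prop. to A
    # linear interpolation of the anchored gains over all M samples
    G = np.interp(np.arange(M), starts, anchors)
    return x * G
```

Applied to a trace whose late (deep) half is much weaker than its early half, the output amplitudes are roughly equalized.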
The above four processing methods are applied as appropriate to the raw ground penetrating radar data simulated and collected in step 1, yielding the corresponding preprocessed image data, which are then scaled, together with the raw image data, to a uniform size of 375×500 pixels.
In step 3, the targets in the ground penetrating radar images obtained in step 2 are labeled with the labelImg software, and the different subgrade disease targets are all labeled as a single class. After labeling, the software automatically stores the image file name, path, labeled target class, coordinates and other information in a corresponding .xml file.
In step 4, the PASCAL VOC dataset is constructed: the raw ground penetrating radar images obtained in step 1, the preprocessed .jpg image data from step 2, and the .xml label files produced in step 3 are assembled into a standard dataset in the PASCAL VOC format and divided into training, validation and test sets at a ratio of 8:1:1. The PASCAL VOC dataset comprises three folders: Annotations, holding the .xml label files; ImageSets, holding the .txt files that list the paths and names of the training, validation and test images; and JPEGImages, holding the .jpg ground penetrating radar images.
In step 5, the anchor box parameters are set dynamically: the length-to-width aspect ratios of the labeled target bounding boxes in the training set are tallied and the number of labeled boxes at each aspect ratio is counted; the aspect ratios exceeding the threshold of 0.65, together with their reciprocals, are selected as the initial values of the preset anchor box aspect ratios for network training, while the anchor scales keep the network's default values.
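One way to tally the labeled boxes' aspect ratios and pick anchor ratios is sketched below; the rounding granularity and the value-above-0.65 reading of the threshold are assumptions drawn from the wording above:

```python
from collections import Counter

def candidate_anchor_ratios(boxes, threshold=0.65, decimals=1):
    """Count labeled boxes per (rounded) width/height aspect ratio and return
    each ratio exceeding `threshold` together with its reciprocal.

    boxes: iterable of (xmin, ymin, xmax, ymax) in pixels.
    """
    counts = Counter()
    for x1, y1, x2, y2 in boxes:
        counts[round((x2 - x1) / (y2 - y1), decimals)] += 1
    selected = set()
    for r in counts:
        if r > threshold:
            selected.add(r)
            selected.add(round(1.0 / r, decimals))  # reciprocal ratio as well
    return sorted(selected)
```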
In step 6, the convolutional neural network model is obtained by rebuilding the Cascade R-CNN model structure. Figure 3 is a schematic of the Cascade R-CNN network model, in which "Input" denotes the image input, "Conv" the convolutional layers of the backbone network, "Pool" region feature extraction, "Head" the head part of the network that predicts boxes and classifications from the image features, "B" the bounding box regression operation, "C" the classification operation, "B0" the initially generated bounding boxes, and "1", "2", "3" the three stages of the network model. As the figure shows, the Cascade R-CNN model comprises four stages in total: the first uses an RPN module to generate preliminary bounding boxes, and the other three use cascaded IoU thresholds of 0.5, 0.6 and 0.7 to obtain increasingly accurate boxes; by resampling and refining the boxes of the previous stage, each stage supplies the next with positive samples of higher IoU for training.
To detect disease targets of different scales, the FPN module of the Cascade R-CNN model is improved to obtain multi-scale feature maps: on top of the originally generated P3-P6 fused feature layers, a P2 fusion layer is added to detect small targets and a P7 fusion layer to detect larger targets, and after the output feature maps of the FPN module's five stages are reduced to a uniform 256 channels by 1×1 convolution, a further 1×1 convolution kernel and ReLU activation function are added to strengthen the network's nonlinear expressiveness while limiting computation and complexity. The improved FPN structure is shown in Figure 4, where Conv1-5 are the output feature maps of the backbone's five stages, Conv_R1-5 are the corresponding output convolution layers after unifying the channels to 256, and P2-7 are the fused output feature layers of the FPN module: P6 is generated from P5 by a 3×3 convolution with stride 2, P7 from P6 by a 3×3 convolution with stride 2 followed by ReLU, and P2 from Conv_R2 by a 3×3×256 convolution.
The purpose of network training is to find the network weights and biases at which the total loss function reaches its minimum, so the stochastic gradient descent algorithm is used to train the Cascade R-CNN model. The total loss is the weighted sum of the classification loss and the regression loss:
L(x,g) = Lcls(h(x),y) + λLreg(f(x,b),g)
where Lcls(·) is the classification loss function, for which the cross-entropy loss is used, Lreg(·) is the regression loss function, for which the smooth L1 loss is used, h(x) is the classifier function, f(x,b) is the regressor function, x is the input image patch during training, y is the ground-truth class label, λ is the weighting coefficient, b is the predicted bounding box and g is the ground-truth bounding box.
The Cascade R-CNN network model is trained with an initial learning rate of 0.0025 under a step learning rate schedule; the maximum number of epochs is set to 50, and the learning rate is multiplied by 0.1 at epochs 38 and 48. After training, the hyperparameters of the resulting network model are fine-tuned on the validation set generated in step 4 to produce a model that better fits the data.
In step 7, the performance of the convolutional neural network model is evaluated on the test set generated in step 4, with recall and average precision as the evaluation metrics.
Recall is calculated as
Recall = TP / (TP + FN)
where TP (true positives) is the number of samples predicted positive that are actually positive, and FN (false negatives) is the number of samples predicted negative that are actually positive.
Average precision is calculated as
AP = (1/M) Σ(k=1..M) P(k),  with precision P = TP / (TP + FP)
where FP is the number of samples predicted positive that are actually negative, M is the number of positive samples present in a class, and P(k) is the precision at the point where the k-th positive sample is retrieved.
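The two metrics can be computed from a confidence-ranked list of detections as follows; AP here is taken as the mean of the precisions at each retrieved positive, matching the definition of M above (this is one common variant, assumed rather than stated in the text):

```python
def recall_and_ap(ranked_hits, num_positives):
    """ranked_hits: detections sorted by descending confidence, True for a
    correct detection (TP), False for a false positive (FP).
    num_positives: M, the number of labeled positive samples of the class."""
    tp = fp = 0
    precisions = []
    for hit in ranked_hits:
        if hit:
            tp += 1
            precisions.append(tp / (tp + fp))  # precision when this positive is found
        else:
            fp += 1
    recall = tp / num_positives
    ap = sum(precisions) / num_positives       # mean precision over all M positives
    return recall, ap
```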
In step 8, the subgrade disease targets in the ground penetrating radar data are detected: the B-Scan data are input in .jpg format into the trained Cascade R-CNN model, which outputs the class, confidence and detection box coordinates of any targets present.
The experimental results of the ground penetrating radar subgrade disease target detection method of this scheme are illustrated below with a concrete example:
A ground penetrating radar system and the gprMax software were used to generate images of subgrade disease targets. Figure 2 shows the raw ground penetrating radar disease target images: Figure 2(a) is a B-Scan data image obtained by surveying an actual subgrade with a 2 GHz ground penetrating radar antenna, mainly containing fault, void and cavity targets at different depths; Figure 2(b) is a simulated B-Scan data image of subgrade disease targets, composed of the target echoes produced by transmitting antennas of different center frequencies for different target types at different positions. The simulated disease targets mainly comprise cavities, voids and faults; in the simulation the cavities are circular or square, the voids square or inverted-triangular, and the faults dip at two different angles. In this example, the labelImg software was used to annotate the target bounding boxes and classes in the generated ground penetrating radar images, choosing the smallest bounding box that encloses each target and uniformly labeling them as a single disease class, generating the corresponding .xml files. The raw disease target images, the preprocessed image data and the .xml label files were then divided into training, validation and test sets at a ratio of 8:1:1 and organized into a PASCAL VOC dataset.
This example was implemented with the PyTorch-based detection toolbox mmdetection, training and testing the Cascade R-CNN network model of step 6 with a ResNet-101 backbone. The network was trained for 50 epochs on one GPU (two images per GPU), with an initial learning rate of 0.0025 decreased by a factor of 10 at epochs 38 and 48 under a step schedule, momentum of 0.9 and weight decay of 0.0005. Images input to the network were uniformly 375×500 pixels, and the framework's flipping was used as the only online data augmentation technique.
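In mmdetection's Python config style, the schedule described above corresponds roughly to the following fragment; the key names follow mmdetection's conventions, and the fragment is an illustrative sketch, not the authors' actual configuration file:

```python
# illustrative mmdetection-style config fragment (assumed, not from the patent)
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0005)
lr_config = dict(policy='step', step=[38, 48])   # learning rate x0.1 at epochs 38 and 48
total_epochs = 50
data = dict(samples_per_gpu=2)                   # 2 images per GPU
model = dict(backbone=dict(type='ResNet', depth=101))
```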
The network model obtained after fine-tuning the hyperparameters on the validation set was evaluated on the test set, with recall and average precision (AP) as the metrics. Test samples are divided into positives and negatives according to whether a target is labeled; the network produces detections subject to NMS post-processing, and recall and AP are computed by comparing the detections with the labels to judge whether the network performs well and detects accurately. The IoU threshold of the NMS used in post-processing was set to 0.6, and a detected bounding box was output as a correct result only when its confidence exceeded 0.6. In this example the recall reached 94.5% and the average precision 90.1%, and the network detects the targets in one image in milliseconds, all of which shows that the network model has good detection performance and can detect subgrade disease targets accurately and efficiently. Figure 5 shows detection results for some ground penetrating radar subgrade disease targets; the subgrade disease targets in the images are all detected accurately.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011357009.XA CN112462346B (en) | 2020-11-26 | 2020-11-26 | Ground penetrating radar subgrade disease target detection method based on convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112462346A CN112462346A (en) | 2021-03-09 |
CN112462346B true CN112462346B (en) | 2023-04-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||