
CN117315221A - Rock slope crack identification method based on multi-source image data fusion - Google Patents

Rock slope crack identification method based on multi-source image data fusion

Info

Publication number
CN117315221A
Authority
CN
China
Prior art keywords
image
fusion
channel
visible light
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311102180.XA
Other languages
Chinese (zh)
Inventor
李腾跃
张文
尹韩
伟明
王佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202311102180.XA priority Critical patent/CN117315221A/en
Publication of CN117315221A publication Critical patent/CN117315221A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/34 - Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00 - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/23 - Dune restoration or creation; Cliff stabilisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a rock slope crack identification method based on multi-source image data fusion, comprising the following steps: S1: collecting visible light image data and infrared image data; S2: image registration; S3: image preprocessing; S4: decomposition and fusion of the multi-source images; S5: crack identification. Starting from a camera imaging principle model, the method acquires high-quality unmanned aerial vehicle image data, registers the visible light image with the thermal infrared image, preprocesses the visible light and thermal infrared images of the high and steep rock slope scene separately, fuses the preprocessed visible light and thermal infrared images, and identifies cracks in the rock slope scene with a quantitative crack pattern recognition method. This greatly improves the clarity and accuracy of image acquisition, and the crack identification results obtained after multi-source image data fusion in rock slope scenes are more stable, more reliable and more accurate than those of other methods.

Description

Rock slope crack identification method based on multi-source image data fusion

Technical Field

The invention relates to the field of rock mass fracture research, and in particular to a rock slope crack identification method based on multi-source image data fusion.

Background Art

With China's rapid economic development in recent years, large-scale infrastructure construction is in full swing, and geotechnical engineering problems involving complex geological environments keep emerging. With the continuous advancement of geotechnical technology, foundation pit engineering, tunnel engineering and similar works face deeper excavation depths, larger excavation areas and more complicated construction processes.

At present, there are two main data sources for automatic information extraction: (1) digital photography and (2) three-dimensional laser scanning. Digital images obtained by photography are processed in two main ways. One is to analyse the digital photographs directly with digital image processing, for example by using image processing algorithms to extract rock mass cracks and automatically deriving parameters describing the degree of crack development in the surrounding rock from the extraction results, or by performing statistical feature analysis directly on the preprocessed images. The other is based on close-range photogrammetry: three-dimensional point cloud data are computed from the mapping relationship of reference points in tunnel face photographs taken from different angles, and information such as crack length and rock formation occurrence is then analysed. There are also methods that combine these approaches; for example, patent publication CN 109187548 discloses a modelling and identification method for complex rock crack models, in which the three-dimensional solid model is built in Pro/E software, making it easy to locate cracks in the rock and to vary combinations of crack dip angle, rock bridge angle and so on, which greatly simplifies modelling in ANSYS or other software and removes the need for Boolean cutting operations on the model in ANSYS. In the above methods, however, the acquired images of rock cracks are not sharp enough. When the camera shutter opens, light enters the camera and is received by the photosensitive device, i.e. the camera collects the target optical signal over the exposure time. The Bayer sensor array in a digital camera contains green, red and blue matrix patterns, but each captured pixel carries only one colour channel. In reality a pixel should contain three colour channel values, so Bayer interpolation is used during digital camera imaging to fill in the missing channel values, and the interpolated result is gamma-corrected and lossily compressed to be saved as a colour image. This digital camera imaging model can be expressed as:
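
The formula referenced here was an image in the source and does not survive in this text. A plausible reconstruction from the symbol definitions given in the next paragraph (the exact nesting of the terms, the exposure interval [0, T] and the placement of the noise term are assumptions) is:

```latex
I \;=\; c\!\left( g\!\left( \mathrm{BI}\!\left( \int_{0}^{T} S(t)\,\mathrm{d}t \right) + N \right) \right) \tag{1}
```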

In formula (1), I denotes the colour image, c lossy compression, g the camera response function (CRF), BI Bayer interpolation, S(t) the sensor response at time t, and N the noise signal. Formula (1) shows that the accumulation of the sensor signal over time during digital camera imaging produces blur in the image, while the Bayer interpolation and lossy compression steps can introduce artifacts and reduce image resolution, which affects the accuracy of crack identification.

Summary of the Invention

To address the above technical problems and improve the accuracy of crack identification, the following solution is provided.

The invention proposes a rock slope crack identification method based on multi-source image data fusion, comprising: S1: visible light image data and infrared image data acquisition: a multi-source camera is carried by an unmanned aerial vehicle (UAV); terrain-following oblique photogrammetry is performed by the UAV to acquire image data and thereby obtain initial terrain information of the study area; spatial plane fitting is performed on the terrain units, a close-range flight plane parallel to the fitted plane is created, multi-angle structural-surface routes are constructed on the flight plane with the camera perpendicular to the structural surface, and visible light image and thermal infrared image data are acquired simultaneously;

S2: image registration: a scale-invariant feature transform method is used to align the content information of the visible light image and the thermal infrared image;

S3: image preprocessing: improved Laplacian sharpening and multi-scale Retinex enhancement with color restoration are applied to the visible light image and the thermal infrared image respectively; the improved Laplacian sharpening improves edge detail by taking second-order differentials of the image pixels, which can be expressed as:

where I(x, y) denotes the input visible light image with pixel coordinates (x, y), and γ and Γ are parameters that adjust the sharpening strength and the edge brightness, respectively;

Combining formulas (2) and (3), the Laplacian operator is obtained by second-order differentiation and is expressed as

where ∇² denotes the Laplacian operator and Λ denotes a weight parameter. The image I_LS(x, y) after Laplacian sharpening can then be expressed as
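
Formulas (2) to (5) were images in the source and are not reproduced here. For orientation only, the classical Laplacian and sharpening step that this section builds on are usually written as follows; the γ and Γ terms of the improved variant are not recoverable from this text, so this is a hedged reference form rather than the patent's exact expressions:

```latex
\nabla^{2} I(x,y) \;=\; \frac{\partial^{2} I}{\partial x^{2}} + \frac{\partial^{2} I}{\partial y^{2}},
\qquad
I_{LS}(x,y) \;=\; I(x,y) - \Lambda\, \nabla^{2} I(x,y)
```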

The multi-scale Retinex enhancement with color restoration is as follows: the multi-scale Retinex output MSR is obtained by taking the sum of differently weighted scales of the single-scale Retinex output SSR, and a color restoration function C(x, y) is then added to the MSR, as in formula (9); this function adjusts the percentages of the three color channels of the image:

R_i(x, y) denotes the i-th reflection component of the image I(x, y); G(x, y) denotes the surround function used to obtain the reflection component; N denotes the number of scales; w_n denotes the weights, which satisfy Σ_{n=1}^{N} w_n = 1; M denotes the total number of pixels of the input image; and R_MSRCR(x, y) denotes the enhanced output image;
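
Formulas (7) to (10) are likewise missing from this text. The standard SSR/MSR/MSRCR formulation, which matches the symbol list above but may differ in detail from the patent's variant, reads; α and β are gain/offset constants of the colour restoration function assumed from the MSRCR literature, not stated here:

```latex
R_{\mathrm{SSR}_i}(x,y) = \log I_i(x,y) - \log\big(G(x,y) * I_i(x,y)\big)
R_{\mathrm{MSR}_i}(x,y) = \sum_{n=1}^{N} w_n\, R_{\mathrm{SSR}_i}^{(n)}(x,y)
C_i(x,y) = \beta \log\frac{\alpha\, I_i(x,y)}{\sum_{j=1}^{3} I_j(x,y)}
R_{\mathrm{MSRCR}_i}(x,y) = C_i(x,y)\, R_{\mathrm{MSR}_i}(x,y)
```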

S4: decomposition and fusion of the multi-source images: the three-channel (R, G, B) images of the enhanced images I_LS and R_MSRCR obtained in S3 are extracted respectively, and a decomposition-and-fusion strategy is applied to them: the low-rank part and the salient part of each single channel are retained, the sparse noise part of each single channel is discarded, the low-rank parts of each channel are then fused with an averaging strategy, and the salient parts of each channel are fused with a summation strategy;

S5: crack identification: the fused image obtained in S4 is analysed with a Frangi vesselness filter, and linear targets are extracted from it.

Preferably, in S4, MSRCR and Laplacian sharpening are used to process the input visible light image respectively, giving the processed images I_LS and R_MSRCR. I_LS and R_MSRCR are then decomposed and their three-channel (R, G, B) images are extracted; the decomposition-and-fusion strategy is applied to the three-channel images of I_LS and R_MSRCR, the sparse noise part of each single channel is discarded from the decomposition results, and the low-rank parts and salient parts of the I_LS and R_MSRCR images are retained. This process can be expressed as

where i denotes one of the [R, G, B] channels, and the remaining terms denote the single-channel low-rank parts and the single-channel salient parts of R_MSRCR and I_LS; on this basis, an averaging strategy is used to fuse the low-rank part of each channel and a summation strategy is used to fuse the salient part of each channel, defined as

F_1^{i_Lrr} denotes the low-rank fusion result and F_1^{i_S} denotes the salient fusion result. Single-channel image fusion is then achieved as follows

I_1^i denotes the single-channel reconstructed image; finally, the reconstructed channels are fused to give the final decomposition-and-fusion output I_1 of the visible light image.

Preferably, in S4, Laplacian sharpening is used to preprocess the thermal infrared image V, and the preprocessed thermal infrared image V_LS and the enhanced visible light image I_1 are then decomposed and fused according to formulas (25) to (31).

Here the corresponding terms denote the single-channel low-rank parts of I_1 and V_LS, their single-channel salient parts, the single-channel low-rank fusion result, the single-channel salient fusion result, and the single-channel reconstructed image; finally, the reconstructed channels are fused to give the final decomposition-and-fusion output F of the visible light image and the thermal infrared image.

Preferably, the image to be processed F is first converted to a grayscale image and filtered with a Gaussian filter to reduce noise, and a binary image is obtained from the filtered image; to extract the skeleton of the rock cracks, morphological closing and directional closing strategies are used to generate a skeletonized binary image; finally, the distribution of cracks is displayed on the multi-source fused image data.

The rock slope crack identification method based on multi-source image data fusion proposed by the invention has the following beneficial effects: starting from the camera imaging principle model, it acquires high-quality UAV image data, registers the visible light and thermal infrared images, preprocesses the visible light and thermal infrared images of the high and steep rock slope scene separately, fuses the preprocessed visible light and thermal infrared images, and identifies cracks in the rock slope scene with a quantitative crack pattern recognition method. This greatly improves the clarity and accuracy of image acquisition, and the crack identification results obtained after multi-source image data fusion in rock slope scenes are more stable, more reliable and more accurate than those of other methods.

BRIEF DESCRIPTION OF THE DRAWINGS

To illustrate the technical solutions in the embodiments of the invention more clearly, the drawings needed for describing the embodiments are briefly introduced below.

Figure 1 is a block diagram of the overall technical method;

Figure 2 shows the DJI M300 RTK UAV equipped with the Zenmuse H20T multi-source camera;

Figure 3 shows image data acquired by the zoom camera (left) and by the thermal infrared camera (right);

Figure 4 is a schematic diagram of UAV multi-angle close-range route planning for terrain units;

Figure 5 is a schematic diagram of UAV multi-angle close-range route planning for structural surfaces;

Figure 6 shows the supplementary routes for blind spots in the field of view;

Figure 7 shows the 24-hour thermal infrared image monitoring results of a high and steep rock slope;

Figure 8 is a schematic diagram of the registration of a visible light image and a thermal infrared image;

Figure 9 shows the decomposition and fusion of visible light images;

Figure 10 shows the decomposition and fusion of multi-source images;

Figure 11 shows the visible light images used in the experiments and their corresponding thermal infrared images;

Figure 12 shows the multi-source decomposition and fusion of visible light and thermal infrared images;

Figure 13 shows the results of the global fusion experiment;

Figure 14 shows the global crack segmentation results;

Figure 15 shows the input visible light image;

Figure 16 shows the local crack segmentation results;

Figure 17 is a schematic diagram of skeleton segmentation.

DETAILED DESCRIPTION

The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings.

Embodiment

In this embodiment, data acquisition was carried out near the dam site of the Xulong Hydropower Station in Deqin County, Yunnan Province; high-quality visible light and thermal infrared images were collected at 7:30 in the morning.

As shown in Figure 1, the invention proposes a rock slope crack identification method based on multi-source image data fusion, comprising: S1: visible light image data and infrared image data acquisition; S2: image registration; S3: image preprocessing; S4: decomposition and fusion of the multi-source images; S5: crack identification.

Specifically, in S1 a multi-source camera is carried by a UAV; terrain-following oblique photogrammetry is performed by the UAV to acquire image data and thereby obtain the initial terrain information of the study area; spatial plane fitting is performed on the terrain units, a close-range flight plane parallel to the fitted plane is created, multi-angle structural-surface routes are constructed on the flight plane with the camera perpendicular to the structural surface, and visible light and thermal infrared image data are acquired simultaneously. In this embodiment a DJI M300 RTK UAV carrying a Zenmuse H20T multi-source camera is used (as shown in Figure 2); the zoom camera and thermal infrared camera image acquisition parameters are set according to the above digital camera imaging principle as listed in Table 1, and typical image data collected under these parameters are shown in Figure 3.

Table 1 Zoom camera and thermal infrared camera image data acquisition parameters

Table 2 Relevant working parameters during UAV operation

For the acquisition of visible light images and UAV thermal infrared images, the digital elevation model (DEM) is imported into the UAV ground controller to plan the flight route, so that the UAV can adjust its flight altitude according to the terrain variations in the study area and perform terrain-following oblique photogrammetry, acquiring clearer image data.

The image data are then imported into the UAV to build a terrain-following model of the study area and obtain its initial terrain information. Multi-angle close-range route planning for terrain units: according to the terrain feature information, the study area is divided into multiple terrain units and a spatial plane is fitted to each unit, as shown in Figure 4. At a set distance (between 5 and 100 m) from the fitted plane, a close-range flight plane parallel to it is created. Multi-angle close-range routes are set within the flight plane, and the camera angle is automatically set perpendicular to the fitted plane. The image overlap is set to at least 80% along track and at least 60% across track. Multi-angle close-range route planning for structural surfaces: according to the structural-surface grouping characteristics determined by the field investigation, multi-angle close-range routes targeting the structural surfaces are planned on the flight plane (as shown in Figure 5); a corresponding close-range flight is set for each group of structural surfaces, with the camera perpendicular to the structural surface, and the image overlap settings are the same as for the terrain-unit routes. Supplementary route planning for key areas: on the basis of the above routes, multi-angle routes are set for hidden structural surfaces and other locations prone to photographic blind spots, as well as for regions of interest (as shown in Figure 6), and panoramic close-range photography is carried out to eliminate the blind spots.
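
A minimal sketch of the spatial plane fitting for a terrain unit and the construction of the parallel close-range flight plane, in Python/NumPy; this is an illustration under stated assumptions, not the flight-planning software actually used, and the 30 m standoff is just one value within the 5 to 100 m range mentioned above.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an N x 3 array of terrain-unit points;
    returns the centroid and the unit normal of the fitted plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)   # smallest singular vector = plane normal
    normal = vt[-1] / np.linalg.norm(vt[-1])
    return centroid, normal

def flight_plane(centroid, normal, standoff=30.0):
    """Anchor point of the close-range flight plane, parallel to the fitted
    plane and offset from it along the outward normal by `standoff` metres."""
    if normal[2] < 0:            # keep the normal pointing away from the slope surface
        normal = -normal
    return centroid + standoff * normal, normal
```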

Thermal infrared imaging was used to collect thermal infrared image data of the high and steep rock slope scene. Building on the visible light acquisition strategy, 24-hour thermal infrared monitoring of the same area of the slope was carried out (as shown in Figure 7), with data collected every 1.5 hours.

For S2: image registration

After the multi-angle close-range flight missions are completed, visible light images and their corresponding thermal infrared images are obtained. Because of the parallax between the colour camera and the thermal infrared camera, the image data collected by the two do not correspond exactly. The invention uses the scale-invariant feature transform (SIFT) method to align the content information of the visible light image and the thermal infrared image, as shown in Figure 8.
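
A minimal sketch of SIFT-based registration of a thermal infrared image to its visible light counterpart with OpenCV; the Lowe ratio threshold and RANSAC reprojection error are illustrative values, and real visible/thermal pairs usually need additional preprocessing before cross-modal SIFT matching becomes reliable.

```python
import cv2
import numpy as np

def register_thermal_to_visible(visible_gray, thermal_gray):
    """Estimate a homography from the thermal image to the visible image
    using SIFT keypoints, then warp the thermal image into the visible frame."""
    sift = cv2.SIFT_create()
    kp_v, des_v = sift.detectAndCompute(visible_gray, None)
    kp_t, des_t = sift.detectAndCompute(thermal_gray, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_v, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_v[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = visible_gray.shape[:2]
    return cv2.warpPerspective(thermal_gray, H, (w, h))
```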

For S3: image preprocessing, Laplacian sharpening and Retinex enhancement are first applied to the visible light image and the thermal infrared image respectively; Laplacian sharpening enhances edge detail features, while Retinex enhancement improves image contrast and clarity. Since the cracks on the rock surface are precisely edge features, and some cracks are not obvious enough in the image, it is necessary to enhance the edge details. The invention improves the traditional Laplacian sharpening strategy to enhance the edges of images taken on high and steep rock slopes. Edge detail is improved by taking second-order differentials of the image pixels, which can be expressed as:

where I(x, y) denotes the input visible light image with pixel coordinates (x, y), and γ and Γ are parameters that adjust the sharpening strength and the edge brightness, respectively. Combining formulas (2) and (3), the Laplacian operator is obtained by second-order differentiation and is expressed as:

where ∇² denotes the Laplacian operator and Λ denotes a weight parameter. The image I_LS(x, y) after Laplacian sharpening can then be expressed as:
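
A minimal Python/OpenCV sketch of the Laplacian sharpening step; the single weight Λ is exposed as `lam`, while the γ and Γ terms of the improved variant are not reproduced because their exact formulation is not recoverable from this text.

```python
import cv2
import numpy as np

def laplacian_sharpen(image_bgr, lam=0.5):
    """Classic Laplacian sharpening: subtract a weighted Laplacian (second-order
    derivative) from the image so that edges such as crack boundaries stand out."""
    img = image_bgr.astype(np.float32)
    lap = cv2.Laplacian(img, ddepth=cv2.CV_32F, ksize=3)
    sharpened = img - lam * lap
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```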

The Retinex principle essentially treats the image, from the point of view of a multiplicative model, as the product of an illumination component and a reflection component, which can be expressed as:

I(x, y) = L(x, y) · R(x, y)   (6)

L(x, y) denotes the illumination component at pixel (x, y) and R(x, y) the reflection component at pixel (x, y). The reflection component corresponds to the high-frequency information in the image and does not depend on the illumination component. The Retinex strategy estimates the image brightness and obtains the reflection image by removing the brightness component of the original image. However, it cannot balance dynamic range, edge enhancement and colour constancy, whereas the enhancement of rock slope images must take all three into account. The invention therefore adopts the multi-scale Retinex with color restoration algorithm (MSRCR), a variant of Retinex, in the image preprocessing stage. MSRCR performs well in enhancing images taken in different visual scenes (such as underwater, low-light and foggy scenes). The steps of MSRCR are as follows. First, the multi-scale Retinex (MSR) is computed by taking the sum of differently weighted scales of the single-scale Retinex (SSR); it balances the dynamic range and colour fidelity of the image, so MSR can handle both tonal contrast and dynamic range while avoiding halo artifacts. Although MSR improves on the performance of SSR, it introduces colour distortion. A color restoration function C(x, y) is therefore added to MSR to solve the colour distortion problem, as shown in formula (9); this function adjusts the percentages of the three colour channels of the image.

R_i(x, y) denotes the i-th reflection component of the image I(x, y); G(x, y) denotes the surround function used to obtain the reflection component; N denotes the number of scales; w_n denotes the weights, which satisfy Σ_{n=1}^{N} w_n = 1; M denotes the total number of pixels of the input image; and R_MSRCR(x, y) denotes the enhanced output image.
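
A minimal Python/OpenCV sketch of the MSRCR enhancement described above; the scales, α and β are common literature values, and the equal weights and min-max stretch are simplifying assumptions rather than parameters taken from the patent.

```python
import cv2
import numpy as np

def msrcr(image_bgr, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """Standard multi-scale Retinex with color restoration (MSRCR)."""
    img = image_bgr.astype(np.float64) + 1.0             # avoid log(0)

    # Multi-scale Retinex: equal-weight sum of single-scale Retinex outputs.
    msr = np.zeros_like(img)
    for sigma in sigmas:
        surround = cv2.GaussianBlur(img, (0, 0), sigma)  # surround function G * I
        msr += np.log(img) - np.log(surround)
    msr /= len(sigmas)

    # Colour restoration term C(x, y), applied per channel.
    color_restore = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    msrcr_out = color_restore * msr

    # Per-channel min-max stretch back to the 8-bit range.
    out = np.empty_like(msrcr_out)
    for c in range(3):
        ch = msrcr_out[:, :, c]
        out[:, :, c] = 255.0 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-8)
    return out.astype(np.uint8)
```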

For S4: image decomposition and fusion

The invention achieves image decomposition and fusion by improving the latent low-rank representation algorithm. The algorithm has three steps: (1) decompose the input image into a low-rank part, a salient part and a sparse noise part; (2) fuse the low-rank parts and the salient parts of the images with two different fusion strategies; (3) reconstruct the image. The global structure and brightness information of the input image lie mainly in the low-rank part, while the local structure information and salient features lie in the salient part. Apart from the sparse noise part, the low-rank part and the salient part are the key information that the image needs to retain. The mathematical expression of LatLRR can be defined as:

X = XZ + LX + E   (11)

where X, Z, L and E denote the observed data matrix, the low-rank coefficients, the salient coefficients and the sparse noise, respectively, and XZ and LX denote the low-rank part and the salient part. Formula (11) is solved by the following optimization:
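
The optimization problem itself (formula (12)) was an image in the source. The standard LatLRR objective that matches the norms and the balance parameter named in the next paragraph is, as a hedged reconstruction:

```latex
\min_{Z,\,L,\,E}\; \|Z\|_{*} + \|L\|_{*} + \lambda \|E\|_{1}
\quad \text{s.t.} \quad X = XZ + LX + E \tag{12}
```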

where ||·||_* and ||·||_1 denote the nuclear norm and the L1 norm, respectively, and λ denotes a balance parameter (greater than 0). The low-rank part I_Lrr and the salient part I_S of the decomposed image are expressed as:

I_Lrr = XZ   (13)

I_S = LX   (14)

After the image has been decomposed and the sparse noise part removed, averaging and summation strategies are used to fuse the low-rank parts and the salient parts respectively, expressed as

F_Lrr(x, y) = w_1 I_1Lrr(x, y) + w_2 I_2Lrr(x, y)   (15)

F_S(x, y) = s_1 I_1S(x, y) + s_2 I_2S(x, y)   (16)

w_1 and w_2 are the weight parameters of the low-rank parts, I_1Lrr and I_2Lrr are the decomposed low-rank parts, and F_Lrr denotes the low-rank fusion result. s_1 and s_2 are the weight parameters of the salient parts in the averaging-and-summation strategy, I_1S and I_2S are the decomposed salient parts, and F_S denotes the salient fusion result. Once the low-rank fusion result and the salient fusion result have been obtained, the final reconstructed image F is obtained by fusing F_Lrr and F_S, defined as

F(x, y) = F_Lrr(x, y) + F_S(x, y)   (17)
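
A minimal Python sketch of the fusion-and-reconstruction step of formulas (15) to (17); it assumes the low-rank and salient parts have already been obtained from a LatLRR decomposition (not shown), and the default weights are illustrative assumptions rather than values stated in the patent.

```python
import numpy as np

def fuse_latlrr_parts(lrr1, sal1, lrr2, sal2, w=(0.5, 0.5), s=(1.0, 1.0)):
    """Fuse two decomposed images: average the low-rank parts (formula (15)),
    sum the salient parts (formula (16)), then reconstruct (formula (17))."""
    f_lrr = w[0] * lrr1 + w[1] * lrr2     # averaging strategy for the low-rank parts
    f_sal = s[0] * sal1 + s[1] * sal2     # summation strategy for the salient parts
    fused = f_lrr + f_sal                 # reconstructed fused image
    return np.clip(fused, 0.0, 255.0)
```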

Specifically, after the visible light image has been preprocessed, the invention performs visible light image enhancement with the improved latent low-rank representation algorithm (as shown in Figure 9). MSRCR and Laplacian sharpening are used to process the input visible light image respectively, giving the processed images I_LS and R_MSRCR. I_LS and R_MSRCR are then decomposed and their three-channel (R, G, B) images extracted. The decomposition-and-fusion strategy is applied to the three-channel images of I_LS and R_MSRCR; in the decomposition results the sparse noise part of each single channel is discarded, and the low-rank parts and salient parts of the I_LS and R_MSRCR images are retained. This process can be expressed as

where i denotes one of the [R, G, B] channels, and the remaining terms denote the single-channel low-rank parts and the single-channel salient parts of R_MSRCR and I_LS. On this basis, an averaging strategy is used to fuse the low-rank part of each channel and a summation strategy is used to fuse the salient part of each channel, defined as

F_1^{i_Lrr} denotes the low-rank fusion result and F_1^{i_S} denotes the salient fusion result. The invention achieves single-channel image fusion as follows

I_1^i denotes the single-channel reconstructed image; finally, the reconstructed channels are fused to give the final decomposition-and-fusion output I_1 of the visible light image.

Decomposition and fusion of the infrared data: for the fusion of the visible light image and the thermal infrared image (as shown in Figure 10), the image decomposition steps in S4 are repeated. The input images are the enhanced visible light image I_1 and the thermal infrared image V. Laplacian sharpening is first used to preprocess the thermal infrared image V, and the preprocessed thermal infrared image V_LS and the enhanced visible light image I_1 are then decomposed and fused according to formulas (25) to (31).

Here the corresponding terms denote the single-channel low-rank parts of I_1 and V_LS, their single-channel salient parts, the single-channel low-rank fusion result, the single-channel salient fusion result, and the single-channel reconstructed image; finally, the reconstructed channels are fused to give the final decomposition-and-fusion output F of the visible light image and the thermal infrared image.

Finally, crack identification is performed on the fused output. A new version of FracPaQ, released on the Zenodo platform, is used: the image is analysed with a Frangi vesselness filter and linear targets are extracted from it. The specific procedure is as follows: the image to be processed, F, is first converted to a grayscale image and filtered with a Gaussian filter to reduce noise. A binary image is obtained from the filtered image; to extract the skeleton of the rock cracks, morphological closing and directional closing strategies are used to generate a skeletonized binary image. Finally, the distribution of cracks is displayed on the multi-source fused image data.
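
A minimal Python sketch of this crack-identification step using scikit-image rather than FracPaQ itself (FracPaQ is a separate MATLAB-based tool); the vesselness threshold, kernel sizes and the use of a plain morphological closing in place of the directional closing are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.filters import frangi
from skimage.morphology import closing, disk, skeletonize

def extract_crack_skeleton(fused_bgr, vessel_thresh=0.15):
    """Grayscale conversion, Gaussian denoising, Frangi vesselness filtering to
    highlight line-like targets, binarisation, morphological closing and
    skeletonisation of the fused image F."""
    gray = cv2.cvtColor(fused_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 1.0)

    vesselness = frangi(denoised.astype(np.float64) / 255.0)  # line-like structures
    binary = vesselness > vessel_thresh

    closed = closing(binary, disk(3))   # bridge small gaps along the cracks
    return skeletonize(closed)          # one-pixel-wide crack skeleton
```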

Using the above method, after data acquisition was completed, 13 pairs of visible light images and their corresponding thermal infrared images (as shown in Figure 11) were selected to carry out the fusion experiments and the global and local fusion experiments.

Figure 12 shows (a) the input visible light image; (b) the input thermal infrared image; (c) the visible light image after decomposition and fusion; (d) the thermal infrared image after Laplacian sharpening; (e) the fusion of the decomposed-and-fused visible light image with the input thermal infrared image; (f) the multi-source decomposition and fusion result.

FracPaQ was used to perform crack identification on the above results; Table 3 lists the identification results. These results show that the crack identification achieved by the invention after multi-source image data fusion in rock slope scenes is better.

Table 3 Crack identification results of the fusion experiments (bold indicates the best result)

Global and local crack identification experiments were carried out with the 13 pairs of visible light and corresponding thermal infrared images from the fusion experiments. Figure 13 shows the global fusion results: (a) the input visible light image; (b) the input thermal infrared image; (c) the fusion result of the invention.

Based on the fusion results in Figure 13, FracPaQ was used to perform crack identification on these results.

Figure 14 shows the global crack identification results: (a) the input visible light image; (b) the input thermal infrared image; (c) the identification result for the input visible light image; (d) the identification result for the input thermal infrared image; (e) the identification result after fusion by the invention.

Table 4 gives the global crack identification counts. To verify the generalization ability of the invention, local crack identification experiments were carried out on rock slope regions extracted from the 13 image pairs.

Table 4 Global crack identification results (bold indicates the best result)

Figure 15 shows the local crack identification results after fusion: (a) the input visible light image; (b) the input thermal infrared image; (c) the fusion result of the invention.

Figure 16 shows the local crack identification (local crack segmentation) results: (a) the input visible light image; (b) the input thermal infrared image; (c) the identification result for the input visible light image; (d) the identification result for the input thermal infrared image; (e) the identification result after fusion by the invention.

Table 5 gives the coordinate correspondence between the global and local images, and Table 6 gives the local crack identification counts. The crack identification results show that, after multi-source image data fusion in rock slope scenes, the invention is more stable and reliable than other methods and achieves higher identification accuracy. On the basis of the crack identification results, morphological features are extracted to obtain the crack skeleton, as shown in Figure 17.

Table 5 Correspondence between global image and local image coordinates

No. | Global resolution | Local resolution | Upper-left corner | Upper-right corner | Lower-left corner | Lower-right corner
1# | 960×768 | 304×301 | (656, 374) | (960, 374) | (656, 675) | (960, 675)
2# | 960×768 | 310×418 | (650, 350) | (960, 350) | (650, 768) | (960, 768)
3# | 960×768 | 480×392 | (480, 376) | (960, 376) | (480, 768) | (960, 768)
4# | 960×768 | 551×407 | (252, 361) | (803, 361) | (252, 768) | (803, 768)
5# | 960×768 | 544×521 | (20, 142) | (564, 142) | (20, 663) | (564, 663)
6# | 960×768 | 356×462 | (604, 6) | (960, 6) | (604, 468) | (960, 468)
7# | 960×768 | 470×474 | (346, 1) | (816, 1) | (346, 475) | (816, 475)
8# | 960×768 | 484×698 | (0, 16) | (484, 16) | (0, 714) | (484, 714)
9# | 960×768 | 406×499 | (554, 269) | (960, 269) | (554, 768) | (960, 768)
10# | 960×768 | 614×412 | (346, 2) | (960, 2) | (346, 414) | (960, 414)
11# | 960×768 | 589×298 | (371, 286) | (960, 286) | (371, 584) | (960, 584)
12# | 960×768 | 315×245 | (356, 194) | (671, 194) | (356, 439) | (671, 439)
13# | 960×768 | 500×522 | (460, 1) | (960, 1) | (460, 523) | (960, 523)

Table 6 Local crack identification results (bold indicates the best result)

Claims (4)

1. A rock slope crack identification method based on multi-source image data fusion, characterized by comprising:

S1: visible light image data and infrared image data acquisition: a multi-source camera is carried by an unmanned aerial vehicle (UAV); terrain-following oblique photogrammetry is performed by the UAV to acquire image data and thereby obtain initial terrain information of the study area; spatial plane fitting is performed on the terrain units, a close-range flight plane parallel to the fitted plane is created, multi-angle structural-surface routes are constructed on the flight plane with the camera perpendicular to the structural surface, and visible light image and thermal infrared image data are acquired simultaneously;

S2: image registration: a scale-invariant feature transform method is used to align the content information of the visible light image and the thermal infrared image;

S3: image preprocessing: improved Laplacian sharpening and multi-scale Retinex enhancement with color restoration are applied to the visible light image and the thermal infrared image respectively; the improved Laplacian sharpening improves edge detail by taking second-order differentials of the image pixels, which can be expressed as:

where I(x, y) denotes the input visible light image with pixel coordinates (x, y), and γ and Γ are parameters that adjust the sharpening strength and the edge brightness, respectively;

combining formulas (2) and (3), the Laplacian operator is obtained by second-order differentiation and is expressed as

where ∇² denotes the Laplacian operator and Λ denotes a weight parameter; the image I_LS(x, y) after Laplacian sharpening can then be expressed as

the multi-scale Retinex enhancement with color restoration is as follows: the multi-scale Retinex output MSR is obtained by taking the sum of differently weighted scales of the single-scale Retinex output SSR, and a color restoration function C(x, y), which adjusts the percentages of the three color channels of the image, is then added to the MSR, as in formula (9):

R_i(x, y) denotes the i-th reflection component of the image I(x, y); G(x, y) denotes the surround function used to obtain the reflection component; N denotes the number of scales; w_n denotes the weights, which satisfy Σ_{n=1}^{N} w_n = 1; M denotes the total number of pixels of the input image; and R_MSRCR(x, y) denotes the enhanced output image;

S4: decomposition and fusion of the images: the three-channel (R, G, B) images of the enhanced images I_LS and R_MSRCR obtained in S3 are extracted respectively, and a decomposition-and-fusion strategy is applied to them: the low-rank part and the salient part of each single channel are retained, the sparse noise part of each single channel is discarded, the low-rank parts of each channel are then fused with an averaging strategy, and the salient parts of each channel are fused with a summation strategy;

S5: crack identification: the fused image obtained in S4 is analysed with a Frangi vesselness filter, and linear targets are extracted from it.

2. The rock slope crack identification method based on multi-source image data fusion according to claim 1, characterized in that, in S4, MSRCR and Laplacian sharpening are used to process the input visible light image respectively, giving the processed images I_LS and R_MSRCR; I_LS and R_MSRCR are then decomposed and their three-channel (R, G, B) images are extracted; the decomposition-and-fusion strategy is applied to the three-channel images of I_LS and R_MSRCR, the sparse noise part of each single channel is discarded from the decomposition results, and the low-rank parts and salient parts of the I_LS and R_MSRCR images are retained, a process that can be expressed as

where i denotes one of the [R, G, B] channels, and the remaining terms denote the single-channel low-rank parts and the single-channel salient parts of R_MSRCR and I_LS; on this basis, an averaging strategy is used to fuse the low-rank part of each channel and a summation strategy is used to fuse the salient part of each channel, defined as

F_1^{i_Lrr} denotes the low-rank fusion result and F_1^{i_S} denotes the salient fusion result; single-channel image fusion is then achieved as follows

where the single-channel reconstructed image is obtained; finally, the reconstructed channels are fused to give the final decomposition-and-fusion output I_1 of the visible light image.

3. The rock slope crack identification method based on multi-source image data fusion according to claim 1, characterized in that, in S4, Laplacian sharpening is used to preprocess the thermal infrared image V, and the preprocessed thermal infrared image V_LS and the enhanced visible light image I_1 are then decomposed and fused according to formulas (25) to (31);

where the corresponding terms denote the single-channel low-rank parts of I_1 and V_LS, their single-channel salient parts, the single-channel low-rank fusion result, the single-channel salient fusion result, and the single-channel reconstructed image; finally, the reconstructed channels are fused to give the final decomposition-and-fusion output F of the visible light image and the thermal infrared image.

4. The rock slope crack identification method based on multi-source image data fusion according to claim 3, characterized in that the image to be processed F is first converted to a grayscale image and filtered with a Gaussian filter to reduce noise, and a binary image is obtained from the filtered image; to extract the skeleton of the rock cracks, morphological closing and directional closing strategies are used to generate a skeletonized binary image; finally, the distribution of cracks is displayed on the multi-source fused image data.
CN202311102180.XA 2023-08-30 2023-08-30 Rock slope crack identification method based on multi-source image data fusion Pending CN117315221A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311102180.XA CN117315221A (en) 2023-08-30 2023-08-30 Rock slope crack identification method based on multi-source image data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311102180.XA CN117315221A (en) 2023-08-30 2023-08-30 Rock slope crack identification method based on multi-source image data fusion

Publications (1)

Publication Number Publication Date
CN117315221A true CN117315221A (en) 2023-12-29

Family

ID=89285588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311102180.XA Pending CN117315221A (en) 2023-08-30 2023-08-30 Rock slope crack identification method based on multi-source image data fusion

Country Status (1)

Country Link
CN (1) CN117315221A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803253A (en) * 2017-01-17 2017-06-06 四川大学 Rock 3-D view crack extract method
WO2021205735A1 (en) * 2020-04-08 2021-10-14 Mitsubishi Electric Corporation Systems and methods for blind multi- spectral image fusion
US20230114877A1 (en) * 2020-06-29 2023-04-13 Southwest Electronics Technology Research Institute ( China Electronics Technology Group Corporation Unsupervised Latent Low-Rank Projection Learning Method for Feature Extraction of Hyperspectral Images
CN113192049A (en) * 2021-05-17 2021-07-30 杭州电子科技大学 Visible light and infrared image fusion method based on LatLRR and Retinex enhancement
CN114511788A (en) * 2022-01-27 2022-05-17 湖南北斗微芯产业发展有限公司 Slope crack identification method, system, equipment and storage medium
CN115546113A (en) * 2022-09-15 2022-12-30 山东大学 Method and system for predicting face crack images and front three-dimensional structural parameters
CN115456924A (en) * 2022-09-22 2022-12-09 中国科学院长春光学精密机械与物理研究所 Adaptive infrared and visible fusion method based on multi-scale latent low-rank representation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HUI LI et al.: "Infrared and visible image fusion using Latent Low-Rank Representation", arXiv, 29 January 2022, pages 2-3 *
刘申澳 et al.: "Low-light image enhancement based on the MSRCR-Laplacian pyramid method" (基于MSRCR-拉普拉斯金字塔方法的低照度图像增强), Hans Publishers, 21 July 2022, page 2 *
卢允伟 et al.: "Research and implementation of an image sharpening algorithm based on the Laplacian operator" (基于拉普拉斯算法的图像锐化算法研究和实现), Computer Knowledge and Technology (电脑知识与技术), 28 February 2009, page 2 *
董文川 et al.: "Construction of real-scene models of high and steep rock slope surfaces and automatic interpretation of rock mass structural surfaces: methods and platform development" (高陡岩质斜坡坡表实景模型构建与岩体结构面自动解译方法及平台研发), Journal of Earth Sciences and Environment (地球科学与环境学报), 25 November 2022, pages 1066-1081 *

Similar Documents

Publication Publication Date Title
Chaiyasarn et al. Integrated pixel-level CNN-FCN crack detection via photogrammetric 3D texture mapping of concrete structures
CN111126184B (en) Post-earthquake building damage detection method based on unmanned aerial vehicle video
Gao et al. A general deep learning based framework for 3D reconstruction from multi-view stereo satellite images
CN109948471B (en) Traffic haze visibility detection method based on improved IncepotionV 4 network
CN116258817A (en) A method and system for constructing an autonomous driving digital twin scene based on multi-view 3D reconstruction
CN113160053B (en) An underwater video image restoration and stitching method based on pose information
WO2018053952A1 (en) Video image depth extraction method based on scene sample library
CN114792327B (en) Image processing method and system
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN108564620A (en) A Scene Depth Estimation Method for Light Field Array Camera
Ebrahimikia et al. True orthophoto generation based on unmanned aerial vehicle images using reconstructed edge points
CN115527082A (en) Deep learning small target detection method based on image multi-preprocessing
CN111683221B (en) Real-time video monitoring method and system of natural resources embedded with vector red line data
CN117350925A (en) Infrared and visible light image fusion method, device and equipment for inspection images
CN111915735A (en) Depth optimization method for three-dimensional structure contour in video
Nouduri et al. Deep realistic novel view generation for city-scale aerial images
CN118298338B (en) Road crack rapid identification and calculation method based on unmanned aerial vehicle low-altitude photography
Motayyeb et al. Enhancing contrast of images to improve geometric accuracy of a UAV photogrammetry project
CN117315221A (en) Rock slope crack identification method based on multi-source image data fusion
CN114972358B (en) Artificial intelligence-based urban surveying and mapping laser point cloud offset detection method
CN115115847B (en) Three-dimensional sparse reconstruction method and device and electronic device
Moon et al. Analysis of accuracy factor and pre-processing methodology of image compensation for 3D reconstruction using 2D image obtained from unmanned aerial vehicle (UAV)
Yu et al. Advanced approach for automatic reconstruction of 3d buildings from aerial images
CN115017968A (en) A classification and extraction method of complex buildings based on U-Net model
CN112598777A (en) Haze fusion method based on dark channel prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination