CN118111345A - Tunnel foundation pit surrounding rock displacement, crack and ponding monitoring system

Info
- Publication number: CN118111345A
- Application number: CN202410109532.2A
- Authority: CN (China)
- Prior art keywords: foundation pit, side wall, camera, data, model
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G01B11/00 — Measuring arrangements characterised by the use of optical techniques
- G01B11/02 — Measuring length, width or thickness by optical techniques
- G01B11/0608 — Height gauges
- G01B11/16 — Measuring the deformation in a solid by optical techniques, e.g. optical strain gauge
- G01S17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/88 — Lidar systems specially adapted for specific applications
- G01S7/497 — Means for monitoring or calibrating (details of lidar systems)
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06V10/766 — Image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
- G06V20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Abstract
The present invention belongs to the technical field of engineering monitoring, and in particular relates to a system for monitoring displacement, cracks and water accumulation changes in the surrounding rock of a tunnel foundation pit based on lidar and machine vision technology. The system comprises: a track laid along the top of the foundation pit; a mobile trolley arranged on the track, able to travel along the track for inspection; and a monitoring device mounted on the mobile trolley for acquiring information about the foundation pit side wall as the trolley patrols. The monitoring device scans the side wall of the foundation pit with a lidar and a structured light camera to obtain three-dimensional point cloud data, builds a three-dimensional point cloud model of the side wall in the lidar coordinate system, and, through region division and data fitting, accurately measures overall and local displacement and deformation of the side wall as well as side wall cracks and water seepage.
Description
Technical Field
The present invention belongs to the technical field of engineering monitoring, and in particular relates to a system for monitoring displacement, cracks and water accumulation changes in the surrounding rock of a tunnel foundation pit based on lidar and machine vision technology.
Background Art
With the development of society, issues such as environmental protection and production safety have received increasing attention. Construction consumes large amounts of electrical energy, reinforced concrete and other resources and has a considerable impact on the environment; at the same time, the complexity and hazards of construction sites make safe construction essential. The concept of green and safe construction is therefore increasingly valued by governments, project owners and contractors, and the corresponding national, local and corporate policies and standards are continually being issued and updated. Against this background, research on and application of real-time safety monitoring and alarm technology for construction projects, and real-time monitoring and management of resource, environmental and safety information during construction, are highly necessary.
As tunnel construction becomes ever more difficult, stability inspection of highway tunnel structures is particularly important. Traditional highway tunnel stability inspection relies on visual inspection, destructive testing or manual non-destructive testing. These methods are unstable and inefficient, so how to inspect tunnel stability quickly and accurately has become a research focus.
Summary of the Invention
The object of the present invention is to provide a system for monitoring displacement, cracks and water accumulation changes in the surrounding rock of a tunnel foundation pit based on lidar and machine vision technology, so as to solve the above technical problems.
To achieve the above object, the following technical solution is provided: a monitoring system for displacement, cracks and water accumulation in the surrounding rock of a tunnel foundation pit, comprising:
a track laid along the top of the foundation pit;
a mobile trolley arranged on the track, able to travel along the track for inspection;
and a monitoring device mounted on the mobile trolley for acquiring information about the foundation pit side wall as the trolley patrols;
wherein the monitoring device scans the side wall of the foundation pit with a lidar and a structured light camera to obtain three-dimensional point cloud data, builds a three-dimensional point cloud model of the side wall in the lidar coordinate system, and, through region division and data fitting, accurately measures overall and local displacement and deformation of the side wall as well as side wall cracks and water seepage.
In the above technical solution, further, the monitoring device comprises:
a stereo calibration module;
an image and point cloud stitching and fusion module, which converts images into three-dimensional point cloud data, stitches multiple frames of point clouds into one three-dimensional model, smooths and fuses the stitched point cloud, and finally processes the data to obtain the final three-dimensional model;
a model fitting module, which selects a linear or nonlinear model according to the data characteristics and the fitting target; computes the model parameters using least squares, maximum likelihood estimation or another algorithm; evaluates the prediction performance of the model on a validation data set and computes the prediction error; fine-tunes the model parameters to improve accuracy if the prediction performance does not meet requirements; and uses the fitted model to predict the values of unknown data;
a detection module, which first preprocesses the data, including denoising and standardization; then trains an artificial intelligence model with a machine learning algorithm and evaluates its performance; adjusts the model parameters to improve accuracy; and tests the model on an independent data set and evaluates its accuracy.
In any of the above technical solutions, further, the stereo calibration module comprises:
an image acquisition unit, which acquires images of the calibration surface with a structured light binocular camera;
a camera rectification unit, which rectifies the acquired images;
a point cloud construction unit, which uses the disparity map and the depth map to reconstruct the feature points on the calibration surface in three dimensions and build a point cloud;
a parameter calibration unit, which analyzes the point cloud data and computes the camera intrinsic and extrinsic parameters;
a camera calibration unit, which calibrates the camera with the intrinsic and extrinsic parameters computed by the parameter calibration unit, the intrinsic parameters being the camera matrix and the distortion coefficients, and the extrinsic parameters being the rotation matrix and the translation vector.
In any of the above technical solutions, further, a data evaluation unit is included, which evaluates the calibration result to ensure the accuracy of the calibration.
In any of the above technical solutions, further, in monitoring the displacement of the foundation pit side wall, side wall data are acquired by the lidar over multiple measurements; noise data are then filtered out according to the initial parameters of the foundation pit survey, and the side wall data are interpolated accordingly; the change of the side wall data at a common height across the multiple measurements is compared, with reference to the capture height, to monitor changes in the position of the side wall and compute the side wall displacement.
In any of the above technical solutions, further, in monitoring cracks in the foundation pit side wall, a camera is used to collect crack data from the side wall; the original image is first cut into k×k tiles, and the model is then trained on images of this size;
wherein the crack height localization algorithm uses SVR regression: a fitting function is constructed between features such as the foundation pit side wall parameters, the camera capture height and the pixel coordinates in the image, and the actual height corresponding to each pixel; the heights corresponding to the four corner points of each crack detection box in a test image are then computed, from which the height range of the crack is obtained.
In any of the above technical solutions, further, the monitoring device uses RGB images and depth images of a fixed reference target at the top of the foundation pit, captured repeatedly by the lidar and the structured light camera, to extract multiple groups of three-dimensional feature points from the images, computes the pose transformation matrix of the structured light camera in each measurement relative to the first capture, and, combined with the joint calibration algorithm of the structured light camera and the lidar, unifies the lidar coordinate systems of the repeated measurements into the lidar coordinate system of the first scan, thereby unifying the coordinate systems of the repeated measurements.
In any of the above technical solutions, further, the joint calibration algorithm uses a plane-constraint-based calibration method to jointly calibrate the lidar and the structured light camera.
In any of the above technical solutions, further, the camera computes the coordinates of the calibration board plane in the camera coordinate system by recognizing the tag on the calibration board; the beams emitted by the lidar strike the calibration board, and the extrinsic parameters are solved by constructing plane constraints from the coordinates of the laser points in the laser coordinate system and in the camera coordinate system,
where the plane constraint is computed as:
n_c^T (R P_l + t) + d_c = 0
where the vector n_c is the three-dimensional normal vector of the tag plane, d_c is the distance from the origin of the camera coordinate system O_c to the tag plane, and the coordinates of a point P are P_l in the laser coordinate system O_l and P_c in the camera coordinate system O_c; the transformation of point P from the laser coordinate system O_l to the camera coordinate system O_c is:
P_c = R^T (P_l − t)
Substituting P_l = (x, y, 0)^T leaves a constraint involving only the first two columns r_1, r_2 of the rotation matrix and the translation vector t, which are collected into a matrix H = [r_1, r_2, t];
the matrix H is treated as a new unknown and solved by nonlinear least squares to recover the rotation matrix and translation vector R_T′.
The beneficial effects of the present invention are as follows: the method uses stereo calibration, image stitching, point cloud stitching, image and point cloud fusion, linear and nonlinear fitting, and artificial intelligence detection techniques to build and compute three-dimensional models of the foundation pit side wall at different times, computes the displacement and deformation of the side wall from the changes in feature point positions between times, and uses artificial intelligence detection to identify cracks and water seepage in the side wall, thereby achieving real-time monitoring of the foundation pit side wall.
Brief Description of the Drawings
FIG. 1 is a flow chart of constructing the point cloud of the target region according to the present invention;
FIG. 2 is a schematic diagram of the joint calibration of the camera and the lidar according to the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application are described clearly below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
In the description of the present application, it should be noted that the terms used herein are only for describing specific embodiments and are not intended to limit the exemplary embodiments according to the present application. For ease of description, the dimensions of the parts shown in the drawings are not drawn to actual scale. Techniques, methods and devices known to a person of ordinary skill in the relevant art may not be discussed in detail, but where appropriate such techniques, methods and devices should be regarded as part of the granted specification. In all examples shown and discussed here, any specific value should be interpreted as merely exemplary rather than limiting, so other examples of the exemplary embodiments may have different values. It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be discussed further in subsequent drawings.
An intelligent visual displacement monitoring, analysis and early-warning system in the field of engineering monitoring mainly comprises adjustable targets, an intelligent visual recognition module, an intelligent acquisition and control module, a cloud platform and a client.
The adjustable targets are fixed at the top of the foundation pit opposite the intelligent visual recognition module; the intelligent visual recognition module is connected to the intelligent acquisition and control module by wires; the intelligent acquisition and control module is connected to the cloud platform via a network; the cloud platform parses and processes the received data, compares it with the configured allowable deformation thresholds, marks the different threshold intervals with corresponding colors so that the color of each measuring point indicates its safety state, and sends alarm information to designated recipients; finally, the client visualizes the deformation and warning information of each measuring point.
The existing method is based on visual monitoring alone. It places high demands on the site environment and its monitoring performance is easily degraded by dust from on-site construction and personnel activity. The scheme must also use targets as the measured points, which imposes strict requirements on point selection and installation and is strongly affected by construction; for example, construction machinery or vehicles blocking a target degrades the monitoring result, and the monitored items are limited. On this basis, the applicant now provides a system for monitoring displacement, cracks and water accumulation changes in the surrounding rock of a tunnel foundation pit based on lidar and machine vision technology.
The monitoring system of the present application is described in detail through the following embodiment.
Embodiment 1:
This embodiment provides a monitoring system for displacement, cracks and water accumulation in the surrounding rock of a tunnel foundation pit, comprising: a track laid along the top of the foundation pit; a mobile trolley arranged on the track, able to travel along the track for inspection; and a monitoring device mounted on the mobile trolley for acquiring information about the foundation pit side wall as the trolley patrols.
The monitoring device scans the side wall of the foundation pit with a lidar and a structured light camera to obtain three-dimensional point cloud data, builds a three-dimensional point cloud model of the side wall in the lidar coordinate system, and, through region division and data fitting, accurately measures overall and local displacement and deformation of the side wall as well as side wall cracks and water seepage.
Specifically, the monitoring device comprises:
a stereo calibration module, which comprises: an image acquisition unit that acquires images of the calibration surface with a structured light binocular camera; a camera rectification unit that rectifies the acquired images; a point cloud construction unit that uses the disparity map and the depth map to reconstruct the feature points on the calibration surface in three dimensions and build a point cloud; a parameter calibration unit that analyzes the point cloud data and computes the camera intrinsic and extrinsic parameters; a camera calibration unit that calibrates the camera with the intrinsic and extrinsic parameters computed by the parameter calibration unit, the intrinsic parameters being the camera matrix and the distortion coefficients and the extrinsic parameters being the rotation matrix and the translation vector; and a data evaluation unit that evaluates the calibration result to ensure its accuracy. The purpose of point cloud surface reconstruction is to reflect the geometric features of the object surface realistically, so that the physical and geometric information of the model can be extracted and analyzed;
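By way of non-limiting illustration, a minimal sketch of binocular calibration with OpenCV is given below; the chessboard target, board dimensions and image file paths are assumptions, since the present application does not prescribe a particular calibration target or toolkit.

```python
# Sketch of binocular stereo calibration: recover each camera's intrinsics
# (camera matrix, distortion coefficients) and the stereo extrinsics (R, T).
import glob
import cv2
import numpy as np

BOARD = (9, 6)      # inner chessboard corners per row/column (assumed)
SQUARE = 0.025      # square size in metres (assumed)

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, BOARD)
    okr, cr = cv2.findChessboardCorners(gr, BOARD)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

# Per-camera intrinsics.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)

# Stereo extrinsics with intrinsics held fixed.
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
print("reprojection error:", ret)
```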
an image and point cloud stitching and fusion module, which converts images into three-dimensional point cloud data, stitches multiple frames of point clouds into one three-dimensional model, smooths and fuses the stitched point cloud, and finally processes the data to obtain the final three-dimensional model;
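By way of non-limiting illustration, a minimal sketch of multi-frame point cloud stitching and fusion using the Open3D library is given below; the file names, voxel sizes and ICP settings are assumptions, and the present application does not fix a specific stitching implementation.

```python
# Sketch: register consecutive frames with ICP, chain the poses, merge and fuse.
import copy
import numpy as np
import open3d as o3d

def pairwise_icp(src, dst, voxel=0.05):
    """Estimate the transform mapping src onto dst with point-to-plane ICP."""
    s = src.voxel_down_sample(voxel)
    d = dst.voxel_down_sample(voxel)
    for pc in (s, d):
        pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        s, d, max_correspondence_distance=voxel * 1.5,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

frames = [o3d.io.read_point_cloud("frame_%03d.pcd" % i) for i in range(10)]  # assumed files
merged = frames[0]
pose = np.eye(4)
for prev, cur in zip(frames[:-1], frames[1:]):
    pose = pose @ pairwise_icp(cur, prev)          # accumulate frame-to-frame poses
    merged += copy.deepcopy(cur).transform(pose)   # bring the frame into the first frame

# Smooth and fuse the stitched cloud to obtain the final side wall model.
merged = merged.voxel_down_sample(0.02)
merged, _ = merged.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("sidewall_model.pcd", merged)
```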
a model fitting module, which selects a linear or nonlinear model according to the data characteristics and the fitting target; computes the model parameters using least squares, maximum likelihood estimation or another algorithm; evaluates the prediction performance of the model on a validation data set and computes the prediction error; fine-tunes the model parameters to improve accuracy if the prediction performance does not meet requirements; and uses the fitted model to predict the values of unknown data;
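By way of non-limiting illustration, a minimal sketch of this fit / validate / predict loop is given below; the exponential model and the synthetic data are assumptions used only to exercise the workflow.

```python
# Sketch: nonlinear least-squares fit, validation error, then prediction.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """Assumed nonlinear model: a * exp(-b * x) + c."""
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 80)
y = model(x, 2.5, 1.3, 0.5) + 0.05 * rng.standard_normal(x.size)

# Split into training and validation sets.
x_train, x_val = x[::2], x[1::2]
y_train, y_val = y[::2], y[1::2]

params, cov = curve_fit(model, x_train, y_train, p0=(1.0, 1.0, 0.0))
rmse = np.sqrt(np.mean((model(x_val, *params) - y_val) ** 2))
print("fitted parameters:", params, "validation RMSE:", rmse)

# If the validation error is too large, refine the fit (better initial guess,
# bounded parameters, or a different model) before predicting unseen data.
y_pred = model(np.array([5.0, 6.0]), *params)
```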
a detection module, which first preprocesses the data, including denoising and standardization; then trains an artificial intelligence model with a machine learning algorithm and evaluates its performance; adjusts the model parameters to improve accuracy; and tests the model on an independent data set and evaluates its accuracy.
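By way of non-limiting illustration, a minimal sketch of the detection workflow (preprocess, train, tune, independent test) is given below; the random forest classifier and the synthetic features and labels are assumptions, since the present application does not prescribe a specific learning algorithm.

```python
# Sketch: standardization + classifier pipeline with parameter search and held-out test.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 16))              # stand-in image/point-cloud features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in crack / no-crack labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),               # standardization step
                 ("clf", RandomForestClassifier(random_state=0))])
search = GridSearchCV(pipe, {"clf__n_estimators": [100, 300],
                             "clf__max_depth": [None, 8]}, cv=3)
search.fit(X_train, y_train)                                # train and tune parameters
print("held-out accuracy:", search.score(X_test, y_test))   # independent evaluation
```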
In this technical solution, the working principle by which the lidar collects point cloud information is as follows: first, the laser emitter sends a high-energy laser beam towards the observed target through the observation lens; the laser is reflected on reaching the object surface, returns along the same path and is received by the scanner receiver, yielding the distance between the scanner and the measured object; the angle measurement and control module then acquires the horizontal and vertical angles between the target object and the scanner, and the drive motor and steerable lens complete the acquisition of the target point information; finally, a microprocessor converts the acquired distance and angle information into coordinate information, and stores the coordinates, reflection intensity, texture features and other information in memory.
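By way of non-limiting illustration, a minimal sketch of converting the measured range and angles into Cartesian coordinates is given below; the units (metres, radians) and the azimuth/elevation naming are assumptions.

```python
# Sketch: lidar range + horizontal/vertical angles -> x, y, z coordinates.
import numpy as np

def polar_to_xyz(range_m: np.ndarray, azimuth: np.ndarray, elevation: np.ndarray) -> np.ndarray:
    """Convert (range, horizontal angle, vertical angle) to an N x 3 point array."""
    x = range_m * np.cos(elevation) * np.cos(azimuth)
    y = range_m * np.cos(elevation) * np.sin(azimuth)
    z = range_m * np.sin(elevation)
    return np.column_stack((x, y, z))

# Example: three returns at 10 m with different scan angles.
pts = polar_to_xyz(np.array([10.0, 10.0, 10.0]),
                   np.deg2rad([0.0, 15.0, 30.0]),
                   np.deg2rad([-5.0, 0.0, 5.0]))
print(pts)
```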
Owing to the instrument itself, the external environment, operator handling and other factors, the point cloud data of the foundation pit side wall collected by the lidar contain noise points that interfere with the geometric feature information of the object. Side wall point cloud data that have not been denoised reduce the accuracy of subsequent point cloud registration and of the three-dimensional solid model of the side wall, and also affect later dynamic deformation monitoring of the side wall. Denoising is therefore required.
Large-scale noise in point cloud data mainly refers to useless points that lie far from the main point cloud and are easy to distinguish, including isolated points near the main cloud and a small number of drift points far from it.
The point cloud data collected by the lidar also contain small-scale noise points, caused by the external environment, that are mixed with valid data points or lie close to the main cloud and are hard to distinguish. A model surface built from point cloud data containing small-scale noise is uneven, rough and of poor quality, so these points mixed into the point cloud data must be removed or filtered.
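By way of non-limiting illustration, a minimal sketch of removing both kinds of noise with the Open3D library is given below; the input file name and the filter thresholds are assumptions that would be tuned per survey.

```python
# Sketch: radius filter for large-scale (isolated/drift) noise, statistical
# filter for small-scale noise close to the main cloud.
import open3d as o3d

pcd = o3d.io.read_point_cloud("sidewall_raw.pcd")   # assumed input file

# Large-scale noise: keep points with enough neighbours inside a radius.
pcd, _ = pcd.remove_radius_outlier(nb_points=16, radius=0.2)

# Small-scale noise: drop points whose mean neighbour distance deviates too
# much from the cloud-wide average.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("sidewall_denoised.pcd", pcd)
```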
Therefore, preferably, in monitoring the displacement of the foundation pit side wall, side wall data are acquired by the lidar over multiple measurements; noise data are then filtered out according to the initial parameters of the foundation pit survey (such as the initial distance from the instrument to the side wall and the height of the foundation pit), and the side wall data are interpolated accordingly to make them denser; the change of the side wall data at a common height across the multiple measurements is compared, with reference to the capture height, to monitor changes in the position of the side wall and compute the side wall displacement.
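By way of non-limiting illustration, a minimal sketch of this comparison is given below: two survey epochs are filtered, resampled onto a common grid of heights and differenced. The array layout and the range threshold are assumptions.

```python
# Sketch: wall displacement from two epochs of (height, wall distance) samples.
import numpy as np

def wall_displacement(epoch0: np.ndarray, epoch1: np.ndarray,
                      max_range: float = 30.0) -> np.ndarray:
    """Each epoch is an (N, 2) array of [height_m, wall_distance_m]."""
    # Filter obvious noise using an assumed initial survey parameter (max range).
    e0 = epoch0[epoch0[:, 1] < max_range]
    e1 = epoch1[epoch1[:, 1] < max_range]
    # Sort by height; np.interp needs increasing sample positions.
    e0 = e0[np.argsort(e0[:, 0])]
    e1 = e1[np.argsort(e1[:, 0])]

    # Interpolate both epochs onto a uniform height grid so they can be compared.
    grid = np.arange(max(e0[0, 0], e1[0, 0]), min(e0[-1, 0], e1[-1, 0]), 0.05)
    d0 = np.interp(grid, e0[:, 0], e0[:, 1])
    d1 = np.interp(grid, e1[:, 0], e1[:, 1])
    return np.column_stack((grid, d1 - d0))   # signed displacement at each height
```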
In monitoring cracks in the foundation pit side wall, the collected raw images are too large to be used directly as input to the network model, so a camera is used to collect crack data from the side wall; the original image is first cut into k×k tiles, and the model is then trained on images of this size.
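By way of non-limiting illustration, a minimal sketch of the k×k tiling is given below; the value of k and the cropping of any remainder are assumptions, as the present application only states that the image is split into k×k blocks.

```python
# Sketch: cut a large side-wall image into k x k tiles before model training.
import numpy as np

def split_into_tiles(image: np.ndarray, k: int) -> list:
    """Split an H x W (x C) image into k*k tiles, cropping any remainder."""
    h, w = image.shape[:2]
    th, tw = h // k, w // k
    tiles = []
    for i in range(k):
        for j in range(k):
            tiles.append(image[i * th:(i + 1) * th, j * tw:(j + 1) * tw])
    return tiles

# Example: a 4096 x 4096 capture becomes 64 tiles of 512 x 512 when k = 8.
tiles = split_into_tiles(np.zeros((4096, 4096, 3), dtype=np.uint8), k=8)
print(len(tiles), tiles[0].shape)
```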
Since the foundation pit crack detection model only gives the position of a crack in the planar image, it cannot give the crack's specific position on the foundation pit side wall, which hinders subsequent repair work. After a crack is identified in the planar image, its height must therefore be determined. The crack height localization algorithm uses SVR regression: a fitting function is constructed between features such as the foundation pit side wall parameter (height), the camera capture height and the pixel coordinates in the image, and the actual height corresponding to each pixel; the heights corresponding to the four corner points of each crack detection box in a test image are then computed, from which the height range of the crack is obtained.
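By way of non-limiting illustration, a minimal sketch of the SVR-based height localization is given below; the feature layout (wall height, camera capture height, pixel column, pixel row) and the synthetic training data are assumptions used only to show the fit/predict flow.

```python
# Sketch: SVR fit from image/geometry features to actual height, then the
# heights of a detection box's four corners give the crack's height range.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 500
wall_h = np.full(n, 12.0)                    # side wall height, metres (assumed)
cam_h = rng.uniform(1.0, 10.0, n)            # camera capture height, metres
u = rng.uniform(0, 1920, n)                  # pixel column
v = rng.uniform(0, 1080, n)                  # pixel row
# Stand-in ground truth: actual height driven by camera height and pixel row.
actual_h = cam_h + (540 - v) * 0.002 + rng.normal(0, 0.02, n)

X = np.column_stack((wall_h, cam_h, u, v))
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
reg.fit(X, actual_h)

# Four corner points of one detected crack box (assumed values).
corners = np.array([[12.0, 5.0, 800, 300], [12.0, 5.0, 1100, 300],
                    [12.0, 5.0, 800, 520], [12.0, 5.0, 1100, 520]])
heights = reg.predict(corners)
print("crack height range:", heights.min(), "to", heights.max())
```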
In monitoring water seepage through cracks in the foundation pit side wall, the same image recognition as above is used, and the seepage position and height are located by a seepage recognition algorithm.
In this embodiment, preferably, the monitoring device uses RGB images and depth images of a fixed reference target at the top of the foundation pit, captured repeatedly by the lidar and the structured light camera, to extract multiple groups of three-dimensional feature points from the images, computes the pose transformation matrix of the structured light camera in each measurement relative to the first capture, and, combined with the joint calibration algorithm of the structured light camera and the lidar, unifies the lidar coordinate systems of the repeated measurements into the lidar coordinate system of the first scan, thereby unifying the coordinate systems of the repeated measurements.
Since an RGB image is a projection of a three-dimensional scene onto a two-dimensional plane and records no depth information, the depth image must be aligned to the RGB image and a point cloud constructed in order to determine the position and pose of the target in three-dimensional space; the spatial pose of the target is then determined by point cloud processing algorithms. The point cloud construction flow for the target region is shown in FIG. 1.
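By way of non-limiting illustration, a minimal sketch of back-projecting an aligned depth image into a point cloud with the pinhole intrinsics is given below; the intrinsic values and the depth scale are assumptions.

```python
# Sketch: depth image (aligned to the RGB image) -> N x 3 point cloud in metres.
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float, depth_scale: float = 0.001) -> np.ndarray:
    """depth: H x W array of raw depth units; returns an N x 3 array of points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) * depth_scale
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.column_stack((x, y, z[valid]))

# Example with assumed intrinsics for a 640 x 480 structured light camera.
depth = np.full((480, 640), 2000, dtype=np.uint16)   # 2 m everywhere
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)
```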
The method for extracting image feature points and computing the rotation matrix is as follows.
The SIFT feature is a local image feature extraction method that is invariant to scale and rotation and partially invariant to illumination. The specific steps of the algorithm are as follows.
1) Scale-space extremum detection
A Gaussian pyramid of the image is created and difference-of-Gaussian (DoG) images are computed at different scales. Local extrema detected in the DoG images are potential keypoints. The Gaussian-smoothed image is:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where G(x, y, σ) is the Gaussian kernel function G(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)).
2) Keypoint localization
In the DoG algorithm a poor keypoint has a large principal curvature in the direction parallel to an edge and a small curvature in the direction perpendicular to it; if the ratio of the two exceeds a threshold, the keypoint is considered to lie on an edge and is discarded.
3) Keypoint orientation assignment
Each keypoint is assigned an orientation to achieve rotation invariance. For any keypoint, the gradient features (magnitude and angle) of all pixels within a region of radius r in its Gaussian pyramid image are collected.
4) Keypoint description
The image region around the keypoint is divided into blocks, gradient histograms are computed within each block, and a feature vector is generated that abstracts the image information.
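By way of non-limiting illustration, a minimal sketch of SIFT extraction and matching with OpenCV, corresponding to the four steps above, is given below; the image file names are assumptions, and the matched points would feed the pose computation described next.

```python
# Sketch: SIFT keypoints + descriptors on two captures of the reference target,
# matched with a ratio test.
import cv2
import numpy as np

img1 = cv2.imread("target_epoch0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("target_epoch1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
print(len(good), "matched feature points")
```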
From the image processing, several pairs of three-dimensional feature points of the target region at different times are extracted: P = {p_1, …, p_n}, P′ = {p′_1, …, p′_n}, related by a Euclidean transformation R, t such that p_i = R p′_i + t. This problem can be solved with the iterative closest point (ICP) method, specifically with a linear-algebra (SVD) solution: an error term is defined and a least-squares problem is constructed to find the R, t that minimize the sum of squared errors. The ICP solution can be divided into the following three steps (a non-limiting sketch follows the steps):
Step 1: compute the centroids p, p′ of the two point sets, then compute the decentred coordinates of each point, q_i = p_i − p and q′_i = p′_i − p′;
Step 2: define the optimization problem and compute the rotation matrix
R* = argmin_R (1/2) Σ_{i=1}^{n} ‖q_i − R q′_i‖²;
Step 3: compute the translation
t = p − R p′.
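By way of non-limiting illustration, a minimal sketch of the three-step SVD solution is given below; the synthetic point pairs and the reflection guard (det < 0) are conventional additions rather than part of the claimed subject matter.

```python
# Sketch: given matched 3-D point pairs P and P', recover R, t with p_i ≈ R p'_i + t.
import numpy as np

def rigid_transform(P: np.ndarray, P_prime: np.ndarray):
    """P, P_prime: N x 3 matched points. Returns (R, t)."""
    p = P.mean(axis=0)                 # step 1: centroids
    p_prime = P_prime.mean(axis=0)
    Q = P - p                          # decentred coordinates q_i
    Q_prime = P_prime - p_prime        # decentred coordinates q'_i

    # Step 2: R* = argmin sum ||q_i - R q'_i||^2, via SVD of the cross-covariance.
    H = Q_prime.T @ Q
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid an improper rotation (reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = p - R @ p_prime                # step 3: translation
    return R, t

# Self-check with a known transform.
rng = np.random.default_rng(0)
P_prime = rng.random((50, 3))
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.3, -0.1, 0.5])
P = P_prime @ R_true.T + t_true
R_est, t_est = rigid_transform(P, P_prime)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```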
Regarding the joint calibration of the structured light camera and the lidar: specifically, the joint calibration algorithm uses a plane-constraint-based calibration method to jointly calibrate the lidar and the structured light camera.
As shown in FIG. 2, the camera computes the coordinates of the calibration board plane in the camera coordinate system by recognizing the tag on the calibration board; the beams emitted by the lidar strike the calibration board, and the extrinsic parameters are solved by constructing plane constraints from the coordinates of the laser points in the laser coordinate system and in the camera coordinate system,
where the plane constraint is computed as:
n_c^T (R P_l + t) + d_c = 0
where the vector n_c is the three-dimensional normal vector of the tag plane, d_c is the distance from the origin of the camera coordinate system O_c to the tag plane, and the coordinates of a point P are P_l in the laser coordinate system O_l and P_c in the camera coordinate system O_c; the transformation of point P from the laser coordinate system O_l to the camera coordinate system O_c is:
P_c = R^T (P_l − t)
Substituting P_l = (x, y, 0)^T leaves a constraint involving only the first two columns r_1, r_2 of the rotation matrix and the translation vector t, which are collected into a matrix H = [r_1, r_2, t];
the matrix H is treated as a new unknown and solved by nonlinear least squares to recover the rotation matrix and translation vector R_T′.
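By way of non-limiting illustration, a minimal sketch of solving the camera-lidar extrinsics from the plane constraint n_c^T (R P_l + t) + d_c = 0 with nonlinear least squares is given below; for simplicity it parameterizes the unknowns directly by a rotation vector and translation rather than the intermediate matrix H used above, and the plane normals, offsets and laser points are synthetic assumptions used only to exercise the solver.

```python
# Sketch: extrinsic calibration from planar tag constraints via least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, planes, laser_pts):
    """params = [rx, ry, rz, tx, ty, tz]; planes = list of (n_c, d_c)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for (n_c, d_c), pts in zip(planes, laser_pts):
        # Plane constraint evaluated for every laser point on that board pose.
        res.extend(n_c @ (R @ pts.T + t[:, None]) + d_c)
    return np.asarray(res)

# Synthetic setup: a known extrinsic transform and six board poses.
rng = np.random.default_rng(0)
R_true = Rotation.from_rotvec([0.05, -0.1, 0.2]).as_matrix()
t_true = np.array([0.1, -0.05, 0.3])
planes, laser_pts = [], []
for _ in range(6):
    n_c = rng.normal(size=3); n_c /= np.linalg.norm(n_c)
    pts_c = rng.uniform(-0.4, 0.4, (30, 3))
    pts_c -= np.outer(pts_c @ n_c, n_c)   # project onto the plane through the origin
    pts_c += 2.0 * n_c                    # shift the board 2 m along its normal
    d_c = -2.0                            # so that n_c^T x + d_c = 0 on the board
    planes.append((n_c, d_c))
    laser_pts.append((R_true.T @ (pts_c - t_true).T).T)   # express in the laser frame

sol = least_squares(residuals, x0=np.zeros(6), args=(planes, laser_pts))
R_est = Rotation.from_rotvec(sol.x[:3]).as_matrix()
print("rotation error:", np.linalg.norm(R_est - R_true),
      "translation error:", np.linalg.norm(sol.x[3:] - t_true))
```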
The embodiments of the present application have been described above with reference to the drawings. Where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive; under the teaching of the present application, a person of ordinary skill in the art may devise many further forms without departing from the purpose of the present application and the scope protected by the claims, all of which fall within the protection of the present application.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410109532.2A | 2024-01-26 | 2024-01-26 | Tunnel foundation pit surrounding rock displacement, crack and ponding monitoring system |

Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118111345A | 2024-05-31 |

Family
- ID=91217262
- 2024-01-26: CN CN202410109532.2A patent/CN118111345A/en active Pending

Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119207114A | 2024-11-13 | 2024-12-27 | 杭州目博科技有限公司 | A berth detection system and method based on dTOF laser radar |
| CN119207114B | 2024-11-13 | 2025-03-11 | 杭州目博科技有限公司 | A berth detection system and method based on dTOF laser radar |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination