
CN115439621A - A 3D map reconstruction and target detection method for a coal mine inspection robot - Google Patents


Info

Publication number
CN115439621A
CN115439621A (application CN202210954381.1A)
Authority
CN
China
Prior art keywords
point cloud
target
point
local
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210954381.1A
Other languages
Chinese (zh)
Inventor
高珺
巨朝晖
刘再斌
李旭
刘晨光
王炳
薛宇强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Research Institute Co Ltd of CCTEG
Original Assignee
Xian Research Institute Co Ltd of CCTEG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Research Institute Co Ltd of CCTEG filed Critical Xian Research Institute Co Ltd of CCTEG
Priority to CN202210954381.1A priority Critical patent/CN115439621A/en
Publication of CN115439621A publication Critical patent/CN115439621A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot, which comprises the following steps: step 1, acquiring a point cloud data set containing the selected scene information; step 2, performing point cloud registration on the point cloud data with the Map-to-Map method to obtain the constructed three-dimensional map; step 3, locating the inspection route of the inspection robot in real time on the three-dimensional map; step 4, performing target detection with a fusion algorithm while the inspection robot travels; and step 5, optimally matching the target detection results obtained in step 4, classifying the detection results, and outputting the best matches as obstacle targets. The invention can build effective high-precision maps even when the environment structure is complex; it also removes part of the redundant data, reducing the time complexity of the algorithm and improving mapping efficiency. The fusion algorithm takes time-domain information into account and fuses multi-frame detection results, which effectively reduces the false detection rate and missed detection rate caused by occlusion.

Description

A three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot

Technical Field

The invention belongs to the technical field of inspection robots, and relates to a three-dimensional map reconstruction and target detection method for an inspection robot in underground coal mines.

Background Art

Since 2016, annual procurement of inspection robots has grown almost exponentially. Most enterprises have recognized the necessity of replacing manual inspection with intelligent inspection robots, and as industry segments and application scenarios become better defined and customer requirements grow more diverse, demand will inevitably keep rising year after year. At present, coal mines operate a wide variety of manually driven fuel or electric vehicles for tasks such as material transportation, patrol inspection, and personnel shuttling, with problems of low operating efficiency, heavy consumption of human resources, and high operating costs. Traditional manual inspection is likewise labor-intensive, inefficient, and of low detection quality; personal safety is hard to guarantee, the inspection workload is heavy, and harsh environments impose further restrictions, so the quality and quantity of inspection work cannot be assured. Unmanned inspection robots are therefore the best replacement for traditional manual inspection.

The core problem for an unmanned inspection robot is localization; inaccurate localization causes major errors in decision-making and control. The goal of path planning for an inspection robot is to find a collision-free optimal path from a start point to an end point with a suitable planning method. In a controllable structured environment, the inspection robot has a certain degree of autonomy: it can perceive the surrounding environment, detect obstacles, and plan a collision-free path to its destination. Reference [1] describes the SLAM (Simultaneous Localization and Mapping) path planning problem for inspection robots. Reference [2] compares two SLAM schemes and derives the appropriate choice for different scenarios. Reference [3] studies the use of probabilistic methods to improve map localization accuracy and reduce computational complexity. Reference [4] uses sparse pose adjustment to solve the direct matrix solution problem in nonlinear optimization. Reference [5] concludes that the Gauss-Newton method can solve the scan matching problem in map localization, but places high demands on the sensors. Finally, reference [6] analyzes the Cartographer algorithm, which reduces accumulated error by applying loop closure detection to both local submaps and the global map.

Meanwhile, obstacle detection in traditional unmanned inspection robots cannot meet the demands of environment perception, so multi-sensor fusion methods have been proposed to improve the accuracy, real-time performance, and robustness of obstacle detection. Lu Feng et al. trained a Faster R-CNN model on data collected by an actual robot, matched the set of center points on the lower edges of the bounding boxes from image detection against the point set from radar target detection, and used an improved iterative closest point (ICP) algorithm to remove spurious point pairs, achieving data fusion of images and 3D point clouds and effectively improving the accuracy and efficiency of the algorithm. Li Yanfang et al. used a lidar to emit laser beams and receive target echoes, extracting the nearest echo points for cluster analysis to judge whether they belonged to obstacles; on the image side, a YOLO network trained on the data set generated target boxes, and the detection boxes from lidar and camera detection were fused, with the percentage of overlapping bounding-box area as the obstacle criterion, realizing decision-level fusion of lidar and camera and raising detection accuracy for vehicles and pedestrians by about 5%.

In summary, how to build high-precision maps and inspect obstacles is a hot research topic in current unmanned inspection. The main problems of the prior art above are as follows: the construction of high-precision maps is limited by sensors and algorithms, and mapping and localization are difficult in environments with complex structures; traditional obstacle detection acquires too much data, most of it redundant, which leads to imprecise estimation of obstacle positions in the real world.

The cited references are as follows:

[1] Chen Zhuo, Su Weihua, An Weining, et al. Implementation of mobile robot SLAM and path planning under the ROS framework [J]. Medical and Health Equipment, 2017, 38(02): 109-113.

[2] Luan Jianing, Zhang Wei, Sun Wei, et al. High-precision positioning algorithm based on fusion of QR-code vision and lidar [J]. Journal of Computer Applications, 2021, 41(05): 1484-1491.

[3] Liu Liwei, Zhu Xukang, Li Xiuhua, et al. Research on map evaluation of 2D SLAM algorithms for low-cost mobile robots [J]. Computer Simulation, 2021, 38(04): 291-295+320.

[4] Zhao Ruoyu, Guan Zhiwei, Tong Minyong, et al. Functional optimization design of a SLAM system based on single-line lidar [J]. China Automotive, 2021(02): 4-9+43.

[5] Han Wenhua. Research on dynamic environment navigation for mobile robots based on single-line lidar [D]. Harbin Institute of Technology, 2019.

[6] Ji Xingliang. Research on simultaneous localization and mapping algorithms based on 3D laser point cloud segmentation and matching [D]. University of Electronic Science and Technology of China, 2020.

Summary of the Invention

The purpose of the present invention is to provide a three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot, to solve the technical problems of how to build a high-precision map in an environment with a complex structure and how to accurately estimate obstacles within that map.

To achieve the above purpose, the technical solution adopted by the present invention is:

A three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot, comprising the following steps:

Step 1: select the scene for map construction, collect and process point cloud data of the selected scene, and obtain a point cloud data set containing the selected scene information;

Step 2: perform point cloud registration on the point cloud data obtained in step 1 using the Map-to-Map method to obtain the constructed three-dimensional map, specifically comprising the following sub-steps:

Step 21: take n consecutive frames of point cloud data from the point cloud data set obtained in step 1, build a local submap (Submap) from these n frames, and treat it as the current latest local submap;

Step 22: take another n consecutive frames of point cloud data from the remaining data in the point cloud data set and build a local submap from them;

Step 23: use the global mode of the Cartographer algorithm to perform loop closure detection in the back end; if a loop closure occurs, execute step 24; otherwise discard the local submap built in step 22 and return to step 22;

Step 24: match the local submap built in step 22 against the current latest local submap to obtain a matched local submap, and update the matched local submap as the current latest local submap, where matching means superimposing the two local submaps;

Step 25: return to step 22 until all point cloud data in the data set obtained in step 1 have been consumed; the current latest local submap obtained at that point is the required three-dimensional map;

Step 3: locate the inspection route of the inspection robot in real time on the three-dimensional map generated in step 2;

Step 4: while the inspection robot travels, perform target detection with a fusion algorithm;

Step 5: optimally match the target detection results obtained in step 4, classify the detection results, and output the best matches as obstacle targets.

Further, in step 1, a 16-line lidar is used to collect the laser point cloud via a reserved command, generating a bag file containing the selected scene information; the bag file is then denoised and filtered, and the processed bag file is the point cloud data set containing the selected scene information.

Further, in step 23, the back end refers to the mapping optimization part of the Cartographer algorithm; a loop closure occurs when the similarity between the local submap built in step 22 and the current latest local submap exceeds a certain weight.

Further, in step 24, superimposing the two local submaps means discarding the redundant point cloud frame data shared by the two local submaps and splicing the two submaps together after the redundant data have been discarded, yielding a new local submap.

Further, step 3 includes the following sub-steps:

Step 31: collect pose information with an IMU inertial sensor and export it as a csv file via a command;

Step 32: select the coordinate system of the IMU inertial sensor as the coordinate system for the localization operation;

Step 33: use the csv file generated in step 31 and the three-dimensional map generated in step 2, together with the localization part of the Cartographer-SLAM source code, to locate the travel route of the inspection robot.

Further, step 4 includes the following sub-steps:

Step 41: denoise the point cloud data collected in step 1 by statistical filtering and output the denoised point cloud information;

Step 42: use a Realsense D435i camera to capture image information of the scene selected in step 1, then feed the images into the CenterNet network of the fusion algorithm for target detection, outputting the target center point position and the target category C, where the categories are known from the KITTI data set;

Step 43: feed the point cloud data output in step 41 into the CenterPoint network of the fusion algorithm for target detection, outputting the target 3D detection box, distance, and category C, and extract the center point of the 3D detection box.

Further, step 5 includes the following sub-steps:

Step 51: pair each target center point position obtained in step 42 with each target 3D detection box center point position obtained in step 43 to form point pairs as the input of the improved KNN algorithm, giving n×n point pairs in total; the target center point positions obtained in step 42 are denoted $P_{x_i}$ $(i = 1, \dots, n)$, the target 3D detection box center point positions obtained in step 43 are denoted $P_{y_i}$ $(i = 1, \dots, n)$, and n is the number of selected center points;

Step 52: compute the Euclidean distance between the two center points in each point pair selected in step 51;

Step 53: among the Euclidean distances of all point pairs obtained in step 52, select for each $P_{y_i}$ the pair with the smallest Euclidean distance among its n corresponding pairs as the optimal pair, giving n optimal pairs in total; fuse each optimal pair as a detected obstacle target, giving n obstacle targets in total.

The beneficial effects of the present invention are:

(1) Mapping: based on the Cartographer algorithm, a Map-to-Map method is proposed to match local submap information, perform loop closure detection, and improve the matching degree, while removing redundant information between data frames, improving map accuracy and mapping efficiency. This makes the algorithm better suited to mobile robot mapping. After the improvement, effective high-precision maps can be built even when the environment structure is complex; at the same time, part of the redundant data is removed, reducing the time complexity of the algorithm and improving mapping efficiency;

(2) Target detection: based on multi-sensor fusion, target detection and tracking are performed separately on the raw image and point cloud data; the improved KNN algorithm optimally matches the detection results, and the projection of the point cloud onto the image is used to further match unmatched image detection center points, with the best fusion results output at the end. Sensor fusion improves data reliability and target detection accuracy; at the same time, the fusion algorithm takes time-domain information into account and fuses multi-frame detection results, effectively reducing the false detection rate and missed detection rate caused by occlusion.

Description of the Drawings

Figure 1 shows how, in the prior art, a front-end matching error leads to a back-end loop closure detection error;

Figure 2 compares the mapping effect (point cloud registration) of the Cartographer algorithm before and after the improvement;

Figure 3 compares the mapping effect before and after the improvement;

Figure 4 compares CPU usage before and after the improvement on the same server;

Figure 5 shows the back-end loop closure detection and localization effect in RVIZ;

Figure 6 is a schematic diagram of the inspection robot;

Figure 7 is the framework of the fusion algorithm;

Figure 8 is a block diagram of the target center point matching structure;

Figure 9 shows the connections between the center point set and the image detection center points;

Figure 10 shows the image detection test results, where (a) is a scene with target occlusion, (b) a scene with uneven illumination, and (c) a scene with distant targets;

Figure 11 shows point cloud target detection, where (a) is a scene with target occlusion, (b) a scene with uneven illumination, and (c) a scene with distant targets;

Figure 12 shows the lidar detection results in different scenes (regressed 3D detection boxes), where (a) is a scene with target occlusion, (b) a scene with uneven illumination, and (c) a scene with distant targets;

Figure 13 shows the fused detection results of point cloud and image detection, where (a) is the image detection result and (b) is the fusion algorithm detection result.

The present invention is further explained below with reference to the accompanying drawings and specific embodiments.

Detailed Description

1. SLAM Mapping and Localization

1.1 Graph Optimization Algorithm

The Cartographer algorithm is a graph-optimization SLAM algorithm. Graph-optimization SLAM adjusts the front-end poses through back-end loop closure detection to finally obtain the robot position and attitude closest to the true values. The graph-optimization SLAM problem decomposes into two tasks (a toy sketch of the optimization follows the list):

(1) Build the graph: robot poses are the vertices and the relations between poses are the edges (front end: accumulation of sensor information);

(2) Optimize the graph: adjust the pose vertices to satisfy the edge constraints (back end).
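As a minimal illustration of these two tasks, consider the following toy example in Python (a hypothetical sketch, not the patent's implementation): three 2D poses are the vertices, relative-pose measurements, including one loop closure, are the edges, and the back end adjusts the vertices by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

def relative_pose(a, b):
    """Pose of b expressed in the frame of a (2D pose: x, y, theta)."""
    c, s = np.cos(a[2]), np.sin(a[2])
    dx, dy = b[0] - a[0], b[1] - a[1]
    return np.array([c * dx + s * dy, -s * dx + c * dy, b[2] - a[2]])

# Edges (i, j, measurement): odometry 0->1 and 1->2, plus a loop closure 2->0.
edges = [(0, 1, np.array([1.0, 0.0, np.pi / 2])),
         (1, 2, np.array([1.0, 0.0, np.pi / 2])),
         (2, 0, np.array([1.0, 1.0, -np.pi]))]

pose0 = np.zeros(3)  # first pose is fixed to remove the gauge freedom

def residuals(flat):
    poses = np.vstack([pose0, flat.reshape(-1, 3)])
    return np.concatenate([relative_pose(poses[i], poses[j]) - z
                           for i, j, z in edges])

# Drifting front-end estimate of poses 1 and 2; the back end corrects it.
x0 = np.array([[1.2, 0.1, 1.4], [1.3, 0.9, 3.0]]).ravel()
solution = least_squares(residuals, x0)
print(solution.x.reshape(-1, 3))  # close to (1, 0, pi/2) and (1, 1, pi)
```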

1.1.1 The Cartographer Algorithm

Cartographer is an open-source algorithm released by Google in 2016. It proposed a new loop closure detection method based on lidar point cloud data that reduces the computational load, meets the mapping needs of large spaces, and optimizes large-scale data in real time. The algorithm has two main parts: Local SLAM (front-end matching) and Global SLAM (back-end loop closure). The present invention improves the Local front end and, combined with the back-end loop closure stage of the Cartographer algorithm, improves the accuracy of the reconstructed map.

In front-end point cloud registration, the original algorithm uses the Scan-to-Map registration mode of SLAM point cloud registration: every time a frame of laser scan (radar frame) data is acquired, it is matched against the most recently built Map and inserted into it. The drawback of this approach is that when the environment structure is similar, the collected point cloud frames contain similar, repeated content; when these are used for back-end loop closure detection, the loop cannot be closed and mapping fails.

Figure 1 shows the back-end detection failing to close the loop because of a front-end matching error.

1.1.2 Point Cloud Registration

Therefore, on the basis of the Cartographer algorithm, a new lidar point cloud registration method, Map-to-Map, is proposed. After several frames of point cloud data are acquired, a Submap (local submap) is built; each newly built submap is matched against the most recently built Submap, avoiding the matching of point cloud frames with similar environment structures that causes back-end loop closure detection errors. The specific process is shown in Figure 2.

2. Target Detection

Before target detection, joint calibration of the lidar and the depth camera is required. Joint calibration is the basis of data fusion: it yields the transformation matrix between the camera coordinate system and the lidar coordinate system, so that radar and camera data can be fused into more complete and accurate information. Figure 6 shows the joint calibration.
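A minimal sketch of how the calibration result is used (the intrinsic matrix K and the extrinsic matrix T_cam_lidar below are illustrative placeholders, not the patent's calibration values): lidar points are transformed into the camera frame by the joint extrinsic matrix and projected with the pinhole model.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project Nx3 lidar points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic matrix from the joint calibration
    K:           3x3 camera intrinsic matrix
    Returns pixel coordinates and depths for points in front of the camera.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # N x 4 homogeneous
    cam = (T_cam_lidar @ homo.T).T[:, :3]               # N x 3 in camera frame
    cam = cam[cam[:, 2] > 0]                            # keep points with z > 0
    uvw = (K @ cam.T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]                   # perspective divide
    return pixels, cam[:, 2]

# Placeholder calibration values for illustration only.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
T[:3, 3] = [0.0, -0.08, 0.0]   # camera mounted just below the lidar
points = np.random.rand(100, 3) * [10.0, 4.0, 2.0] + [1.0, -2.0, -1.0]
pixels, depths = project_lidar_to_image(points, T, K)
```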

The three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot of the present invention includes the following steps:

Step 1: select the scene for map construction, collect and process point cloud data of the selected scene, and obtain a point cloud data set containing the selected scene information. A 16-line lidar (model RoboSense) is used with the reserved command (rosbag record + topic name) to collect the laser point cloud, generating a bag file containing the selected scene information; the bag file is then denoised and filtered with a statistical filtering algorithm to obtain the processed bag file;
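The statistical filtering mentioned above can be sketched with Open3D's statistical outlier removal (the file name and the parameters nb_neighbors and std_ratio are illustrative assumptions, not values given in the patent):

```python
import open3d as o3d

# Load a cloud exported from the recorded bag (file name is illustrative).
pcd = o3d.io.read_point_cloud("scene_scan.pcd")

# Statistical outlier removal: for each point, the mean distance to its k
# nearest neighbours is compared with the global distribution; points more
# than std_ratio standard deviations away are treated as noise and dropped.
filtered, kept_indices = pcd.remove_statistical_outlier(nb_neighbors=20,
                                                        std_ratio=2.0)
o3d.io.write_point_cloud("scene_scan_denoised.pcd", filtered)
```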

Step 2: perform point cloud registration on the point cloud data of the scene obtained in step 1 using the Map-to-Map method to obtain the constructed three-dimensional map; as shown in Figure 2, the specific operations are as follows:

Step 21: take n consecutive frames of point cloud data from the point cloud data set obtained in step 1, build a local submap (Submap) from these n frames, and treat it as the current latest local submap;

Step 22: take another n consecutive frames of point cloud data from the remaining data in the point cloud data set and build a local submap from them;

Step 23: use the global mode of the Cartographer algorithm to perform loop closure detection in the back end; if a loop closure occurs, execute step 24; otherwise discard the local submap built in step 22 and return to step 22;

Here the back end refers to the mapping optimization part of the Cartographer algorithm, and a loop closure occurs when the similarity between the local submap built in step 22 and the current latest local submap exceeds a certain weight, which can be set by the user;

Step 24: match the local submap built in step 22 against the current latest local submap to obtain a matched local submap, and update the matched local submap as the current latest local submap; matching means superimposing the two local submaps (i.e., discarding the redundant point cloud frame data shared by the two submaps and splicing the remaining parts into a new local submap);

During the matching of two local submaps, the pose transformation applied when the point cloud data of the submap built in step 22 are inserted into the current latest local submap is:

$$T_{\xi}\,h_{k} \;=\; \begin{pmatrix} \cos\xi_{\theta} & -\sin\xi_{\theta} \\ \sin\xi_{\theta} & \cos\xi_{\theta} \end{pmatrix} h_{k} \;+\; \begin{pmatrix} \xi_{x} \\ \xi_{y} \end{pmatrix}$$

where $\xi = (\xi_{x}, \xi_{y}, \xi_{\theta})$ denotes the pose of each point cloud in the data set containing the scene information, with $(\xi_{x}, \xi_{y})$ the translation of the point cloud in the x and y directions and $\xi_{\theta}$ the rotation in the plane; $h_{k}$ is the k-th point of the frame being inserted; $T_{\xi}\,h_{k}$ is the pose of the point cloud in the current local submap; and p is the probability value of obstacle existence used when the transformed points are written into the submap, which can be set by the user.
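In code, $T_{\xi}$ is simply a planar rotation by $\xi_{\theta}$ followed by a translation by $(\xi_{x}, \xi_{y})$; a direct numpy transcription of the formula above (a sketch, not the patent's source code):

```python
import numpy as np

def transform_points(points_xy, xi):
    """Apply T_xi to an (N, 2) array of points: rotate, then translate."""
    xi_x, xi_y, xi_theta = xi
    c, s = np.cos(xi_theta), np.sin(xi_theta)
    R = np.array([[c, -s], [s, c]])
    return points_xy @ R.T + np.array([xi_x, xi_y])

frame = np.array([[1.0, 0.0], [0.0, 1.0]])
print(transform_points(frame, xi=(0.5, 0.2, np.pi / 2)))
```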

Step 25: return to step 22 until all point cloud data in the data set obtained in step 1 have been consumed; the current latest local submap obtained at that point is the required three-dimensional map.
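The loop of steps 21 to 25 can be summarized schematically as follows (a sketch only: build_submap, similarity, and merge stand in for the Cartographer-based components described above, and threshold is the user-settable weight of step 23):

```python
def map_to_map_registration(frames, n, build_submap, similarity, merge,
                            threshold=0.8):
    """Schematic Map-to-Map pipeline (steps 21 to 25).

    frames:       point cloud frames from the recorded bag
    build_submap: builds a local submap from n consecutive frames
    similarity:   loop closure score between two submaps (back-end global
                  matching, in Cartographer terms)
    merge:        drops the redundant frames shared by two submaps and
                  splices them into one new submap
    """
    frames = list(frames)
    chunks = [frames[i:i + n] for i in range(0, len(frames), n)]
    current = build_submap(chunks[0])                    # step 21
    for chunk in chunks[1:]:                             # step 22
        candidate = build_submap(chunk)
        if similarity(candidate, current) > threshold:   # step 23: loop found
            current = merge(candidate, current)          # step 24
        # otherwise the candidate submap is discarded and the next n
        # frames are taken (return to step 22)
    return current                                       # step 25: final map
```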

Performing point cloud registration with the Map-to-Map method of step 2 to build the three-dimensional map avoids the problem of the existing approach, in which every acquired frame of scene data is inserted directly into the submap under construction, so that point cloud frames from similar environment structures are matched against each other and back-end loop closure detection fails.

Step 3: locate the inspection route of the inspection robot in real time on the three-dimensional map generated in step 2; the specific operations are as follows:

Step 31: connect an IMU inertial sensor to a computer and collect pose information; export it as a csv file via a command;

Step 32: select the coordinate system of the IMU inertial sensor as the coordinate system for the localization operation;

Step 33: use the csv file generated in step 31 and the three-dimensional map generated in step 2, together with the localization part of the Cartographer-SLAM source code (enter the absolute path of the file in the source code and modify the configuration file parameters), to locate the travel route of the inspection robot. The result is shown in Figure 5 (the solid line in the figure is the trajectory of the localization operation).

Step 4: while the inspection robot travels, use the fusion algorithm for target detection; the specific operations are as follows:

Step 41: denoise the point cloud data collected in step 1 by statistical filtering and output the denoised point cloud information.

Step 42: use a Realsense D435i camera to capture image information of the scene selected in step 1, then feed the images into the CenterNet network of the fusion algorithm for target detection, outputting the target center point position and the target category C, where the categories are known from the KITTI data set;

Step 43: feed the point cloud data output in step 41 into the CenterPoint network of the fusion algorithm for target detection, outputting the target 3D detection box, distance, and category C, and extract the center point of the 3D detection box.

Specifically, as shown in Figure 6, the inspection robot is 95 cm tall with a 25 cm tall chassis. The lidar and the camera are both mounted on the inspection robot, with the camera directly below the lidar; the IMU inertial sensor is mounted on the chassis of the inspection robot.

Step 5: step 4 outputs the image target center point positions and the point cloud target center point positions. Among these center points there are several detection situations, including correctly detected and tracked targets, falsely detected targets, and so on. The improved KNN algorithm is used to optimally match the target detection results obtained in step 4, classify the detection results, and output the best matches as obstacle targets; the specific operations are as follows:

Step 51: pair each target center point position obtained in step 42 with each target 3D detection box center point position obtained in step 43 to form point pairs as the input of the improved KNN algorithm, giving n×n point pairs in total. The target center point positions obtained in step 42 are denoted $P_{x_i}$ $(i = 1, \dots, n)$, the target 3D detection box center point positions obtained in step 43 are denoted $P_{y_i}$ $(i = 1, \dots, n)$, and n is the number of selected center points;

Step 52: compute the Euclidean distance between the two center points in each point pair selected in step 51;

Step 53: among the Euclidean distances of all point pairs obtained in step 52, select for each $P_{y_i}$ the pair with the smallest Euclidean distance among its n corresponding pairs as the optimal pair, giving n optimal pairs in total; fuse each optimal pair, where fusion means merging the two points of the pair into one point (either one may be taken as the pair's center point position), as a detected obstacle target, giving n obstacle targets in total. A sketch of this matching follows.
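A sketch of the matching in steps 51 to 53 (numpy/scipy; the coordinates are made-up examples): all n×n pairs are scored by Euclidean distance, and for each point cloud box center the nearest image center forms the optimal pair.

```python
import numpy as np
from scipy.spatial.distance import cdist

def fuse_centers(img_centers, pc_centers):
    """Optimal pairing of detection centers (steps 51 to 53).

    img_centers: (n, d) target centers from the image branch (CenterNet)
    pc_centers:  (n, d) 3D box centers from the point cloud branch (CenterPoint)
    Returns the fused obstacle centers and the chosen pairing.
    """
    dists = cdist(pc_centers, img_centers)  # step 52: all n x n distances
    nearest = dists.argmin(axis=1)          # step 53: best pair per P_yi
    # The patent lets either member of the optimal pair represent the fused
    # target; here the point cloud center is kept as the obstacle position.
    return pc_centers.copy(), nearest

img = np.array([[2.1, 0.9], [5.0, 3.2], [8.4, 1.1]])
pc = np.array([[2.0, 1.0], [5.1, 3.0], [8.5, 1.0]])
obstacles, pairing = fuse_centers(img, pc)
print(pairing)  # index of the matched image center for each box center
```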

During center point fusion, the target points of the current image frame are fused with the center points of the target detection boxes in the point cloud: the black points are the set of target detection center points from the point cloud data and the white points are the center points of the image detection results (see Figure 9); the two are matched by the nearest-distance rule of step 53 and merged into the same target center point, with the connections shown in Figure 8.

To verify the feasibility and effectiveness of the present invention, the following tests were carried out:

1. SLAM Mapping and Localization

To demonstrate that the improved Cartographer algorithm maps better when environment structures are similar, the experiment site was a simulated underground coal mine. Laser point cloud data were collected, and maps were built with the algorithm before and after the improvement. Comparing mapping time efficiency, mapping effect, and mapping accuracy (mapping was performed on the collected and processed bag files; the mapping efficiency is shown in Figure 4) shows that the improved algorithm outperforms the original.

(1) Mapping effect comparison:

Figure 3 compares the mapping effect before and after the improvement:

Before the improvement: with similar environment structures, loop closure detection errors occur and the constructed map has low accuracy;

After the improvement: even with complex and similar environment structures, loop closure detection still succeeds and the loop closes.

(2) Mapping efficiency comparison

By comparing CPU usage during mapping (on the same server) before and after the algorithm improvement, the mapping efficiency before and after the improvement is obtained.

This experiment used a quad-core processor for mapping. As Figure 4 shows, CPU usage before the improvement is higher than after it; the improved version's CPU usage is lower and levels off over time. Under the same experimental conditions, the improved Cartographer algorithm is therefore better suited to mobile robot mapping than the original.

(3) Back-end loop closure localization

With the proposed new lidar point cloud registration method, Map-to-Map, a Submap (local submap) is built after several frames of point cloud data are acquired, point cloud registration is performed, and redundant data are removed at the same time; the result is passed to the back end for loop closure detection and localization. Figure 5 shows the back-end loop closure detection and localization effect in RVIZ.

Because the steps above improve point cloud matching, the resulting three-dimensional map is better than before, so using it as the input to the localization operation meets the accuracy requirements; as the trajectory line depicted in the RVIZ software shows, the effect is good.

2. Target Detection

① After training and processing on the KITTI data set, three sets of data from different vehicle driving scenes were selected for testing, with categories and target positions marked; the image detection test results are shown in Figure 10;

② CenterNet image target detection, which depends on the camera, has major limitations: in Figure 10, occluded vehicles are clearly missed in the target occlusion scene and the unevenly illuminated scene. Camera detection results are affected by illumination conditions, which greatly reduces measurement accuracy and causes many targets to be missed. The test results show that camera-based target detection can hardly meet the requirements of open-pit autonomous mining trucks.

③ Figures 11(a), (b), and (c) show the lidar detection results in different scenes. The detection box in Figure 11(a) shows that image detection missed the target while the lidar detected it. Figure 11(b) shows that the target position detected by the lidar deviates from the actual target position: because point cloud data are discretely distributed, the positions and spacing of the data points are irregular in three-dimensional space, making points of the same type hard to select and the target position imprecise. As shown in Figure 11(b), because of the irregularity of the point cloud data itself, the positions of the red and green detection boxes deviate in different scenes. To obtain more accurate target position information, the image detection results are fused with the point cloud detection results to solve this problem.

④ In the different test scenes, image detection shows clear missed detections, while point cloud target detection can effectively locate and capture the position boxes of distant and blurred targets. As shown in Figure 12(b), compared with the single image detection method, the fusion algorithm detects 7 more occluded vehicles; clearly, under occlusion, uneven illumination, and similar conditions, the fusion algorithm effectively reduces the missed detection rate.

Conclusions:

(1) Mapping and localization: this work studies the graph-optimization (Graph-SLAM) mapping algorithm Cartographer and analyzes its point cloud registration method, Scan-to-Map (frame to local submap). When environment structures are similar, loop closure detection errors occur and the constructed map has low accuracy; in addition, redundant data between frames makes mapping inefficient. To address these problems, a new lidar point cloud registration method, Map-to-Map (local submap to local submap) registration, is proposed on the basis of the algorithm: point cloud data are acquired to build local submaps, and registration is performed between local submaps to carry out loop closure detection. This improves the matching degree, removes redundant information between data frames, and improves map accuracy and mapping efficiency, making the method better suited to mobile robot mapping.

(2) Obstacle detection: the invention proposes a decision-level fusion method for radar and camera. Point cloud data are acquired with a 16-line lidar and a Realsense D435 depth camera; through joint calibration of the lidar and camera, the point cloud data and image data are projected into the same coordinate system. Combined with the target detection algorithms, obstacles in the image are regressed to target center points (with depth, size, and category) and obstacles in the point cloud are regressed to a set of target center points. The KNN (K-Nearest Neighbor) method fuses the target center points of the image and point cloud data to optimally match the detection results, and finally outputs highly reliable fusion results.

Claims (7)

1. A three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot, characterized by comprising the following steps:

Step 1: select the scene for map construction, collect and process point cloud data of the selected scene, and obtain a point cloud data set containing the selected scene information;

Step 2: perform point cloud registration on the point cloud data containing the scene information obtained in step 1 using the Map-to-Map method to obtain the constructed three-dimensional map, specifically comprising the following sub-steps:

Step 21: take n consecutive frames of point cloud data from the point cloud data set obtained in step 1, build a local submap (Submap) from these n frames, and treat it as the current latest local submap;

Step 22: take another n consecutive frames of point cloud data from the remaining data in the point cloud data set and build a local submap from them;

Step 23: use the global mode of the Cartographer algorithm to perform loop closure detection in the back end; if a loop closure occurs, execute step 24; otherwise discard the local submap built in step 22 and return to step 22;

Step 24: match the local submap built in step 22 against the current latest local submap to obtain a matched local submap, and update the matched local submap as the current latest local submap, wherein matching means superimposing the two local submaps;

Step 25: return to step 22 until all point cloud data in the data set obtained in step 1 have been consumed; the current latest local submap obtained at that point is the required three-dimensional map;

Step 3: locate the inspection route of the inspection robot in real time on the three-dimensional map generated in step 2;

Step 4: while the inspection robot travels, perform target detection with a fusion algorithm;

Step 5: optimally match the target detection results obtained in step 4, classify the detection results, and output the best matches as obstacle targets.

2. The three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot according to claim 1, characterized in that in step 1, a 16-line lidar is used to collect the laser point cloud via a reserved command, generating a bag file containing the selected scene information; the bag file is then denoised and filtered, and the processed bag file is the point cloud data set containing the selected scene information.

3. The three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot according to claim 1, characterized in that in step 23, the back end refers to the mapping optimization part of the Cartographer algorithm, and a loop closure occurs when the similarity between the local submap built in step 22 and the current latest local submap exceeds a certain weight.

4. The three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot according to claim 1, characterized in that in step 24, superimposing the two local submaps means discarding the redundant point cloud frame data shared by the two local submaps and splicing the two submaps after the redundant data have been discarded into a new local submap.

5. The three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot according to claim 1, characterized in that step 3 comprises the following sub-steps:

Step 31: collect pose information with an IMU inertial sensor and export it as a csv file via a command;

Step 32: select the coordinate system of the IMU inertial sensor as the coordinate system for the localization operation;

Step 33: use the csv file generated in step 31 and the three-dimensional map generated in step 2, together with the localization part of the Cartographer-SLAM source code, to locate the travel route of the inspection robot.

6. The three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot according to claim 1, characterized in that step 4 comprises the following sub-steps:

Step 41: denoise the point cloud data collected in step 1 by statistical filtering and output the denoised point cloud information;

Step 42: use a Realsense D435i camera to capture image information of the scene selected in step 1, then feed the images into the CenterNet network of the fusion algorithm for target detection, outputting the target center point position and the target category C, where the categories are known from the KITTI data set;

Step 43: feed the point cloud data output in step 41 into the CenterPoint network of the fusion algorithm for target detection, outputting the target 3D detection box, distance, and category C, and extract the center point of the 3D detection box.

7. The three-dimensional map reconstruction and target detection method for a coal mine underground inspection robot according to claim 1, characterized in that step 5 comprises the following sub-steps:

Step 51: pair each target center point position obtained in step 42 with each target 3D detection box center point position obtained in step 43 to form point pairs as the input of the improved KNN algorithm, giving n×n point pairs in total, where the target center point positions obtained in step 42 are denoted $P_{x_i}$, the target 3D detection box center point positions obtained in step 43 are denoted $P_{y_i}$, and n is the number of selected center points;

Step 52: compute the Euclidean distance between the two center points in each point pair selected in step 51;

Step 53: among the Euclidean distances of all point pairs obtained in step 52, select, for each $P_{y_i}$, the pair with the smallest Euclidean distance among its n corresponding pairs as the optimal pair, giving n optimal pairs in total; fuse each optimal pair as a detected obstacle target, giving n obstacle targets in total.
CN202210954381.1A 2022-08-10 2022-08-10 A 3D map reconstruction and target detection method for a coal mine inspection robot Pending CN115439621A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210954381.1A CN115439621A (en) 2022-08-10 2022-08-10 A 3D map reconstruction and target detection method for a coal mine inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210954381.1A CN115439621A (en) 2022-08-10 2022-08-10 A 3D map reconstruction and target detection method for a coal mine inspection robot

Publications (1)

Publication Number Publication Date
CN115439621A true CN115439621A (en) 2022-12-06

Family

ID=84243534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210954381.1A Pending CN115439621A (en) 2022-08-10 2022-08-10 A 3D map reconstruction and target detection method for a coal mine inspection robot

Country Status (1)

Country Link
CN (1) CN115439621A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115877855A (en) * 2023-03-03 2023-03-31 天津滨电电力工程有限公司 Intelligent power inspection robot and inspection method for adaptive environment path planning
CN116448115A (en) * 2023-04-07 2023-07-18 连云港杰瑞科创园管理有限公司 Unmanned ship probability distance map construction method based on navigation radar and photoelectricity
CN116448115B (en) * 2023-04-07 2024-03-19 连云港杰瑞科创园管理有限公司 Unmanned ship probability distance map construction method based on navigation radar and photoelectricity
CN119206246A (en) * 2024-08-19 2024-12-27 北京建筑大学 Point cloud feature extraction method, device and system based on inspection humanoid robot
CN119206246B (en) * 2024-08-19 2025-04-25 北京建筑大学 Point cloud feature extraction method, device and system based on inspection humanoid robot
CN120088410A (en) * 2025-04-30 2025-06-03 国网江苏省电力有限公司苏州供电分公司 Underground cable and pipe ditch mapping and recording method based on artificial intelligence
CN120088410B (en) * 2025-04-30 2025-07-15 国网江苏省电力有限公司苏州供电分公司 Surveying and recording method of underground cables and trenches based on artificial intelligence

Similar Documents

Publication Publication Date Title
Kim et al. Deep learning based vehicle position and orientation estimation via inverse perspective mapping image
CN108983781B (en) An environment detection method in an unmanned vehicle target search system
Zhe et al. Inter-vehicle distance estimation method based on monocular vision using 3D detection
CN115439621A (en) A 3D map reconstruction and target detection method for a coal mine inspection robot
CN103413313B (en) The binocular vision navigation system of electrically-based robot and method
CN114862901A (en) Road-end multi-source sensor fusion target sensing method and system for surface mine
CN108955702A (en) Based on the lane of three-dimensional laser and GPS inertial navigation system grade map creation system
CN111060924A (en) SLAM and target tracking method
CN114494618A (en) Map generation method and device, electronic equipment and storage medium
CN111652072A (en) Track acquisition method, track acquisition device, storage medium and electronic equipment
CN109917359B (en) Robust vehicle distance estimation method based on vehicle-mounted monocular vision
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
Ruf et al. Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
WO2024216523A1 (en) Method and system for sensing foreign matter within urban rail train travellng clearance, and apparatus and medium
CN116573017A (en) Method, system, device and medium for sensing foreign objects in urban rail train running boundary
CN115564865A (en) Construction method and system of crowdsourcing high-precision map, electronic equipment and vehicle
CN117011388A (en) 3D target detection method and device based on fusion of laser radar and binocular camera
Ma et al. Semantic geometric fusion multi-object tracking and lidar odometry in dynamic environment
CN117152210A (en) Image dynamic tracking method and related device based on dynamic observation field angle
CN116189150A (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
Hu et al. Semantic navigation for automated robotic inspection and indoor environment quality monitoring
CN116508067A (en) Processing images to extract information about known objects
Huang et al. A coarse-to-fine LiDAR-based SLAM with dynamic object removal in dense urban areas
US20230230317A1 (en) Method for generating at least one ground truth from a bird's eye view
CN118196205A (en) On-line self-calibration method and system for external parameters of vehicle-mounted camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination