CN111985322A - A Lidar-based Road Environment Element Perception Method - Google Patents

Info

Publication number: CN111985322A (application CN202010674121.XA); granted publication CN111985322B
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 宁小娟, 马婷, 张金磊, 王映辉, 金海燕
Current and original assignee: Xian University of Technology
Application filed by Xian University of Technology; priority to CN202010674121.XA
Legal status: Granted; Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a lidar-based road environment element perception method. The steps include: Step 1, downsample the raw lidar point cloud, extract the region of interest to obtain a reduced point cloud, and segment out the ground data; Step 2, apply a voxelization-based DBSCAN clustering algorithm to segment obstacles from the non-ground data, then classify and identify the obstacles based on multiple features; Step 3, design a road boundary extraction method suited to vehicle-mounted lidar data based on elevation jumps of the data points, screen the extracted boundary points with prior knowledge, fit the road boundary lines by least squares, and mark obstacle-free cells in a grid to detect the passable area in the lidar road scene. The method of the invention has a simple algorithm and small data error, and it is safe and reliable.

Description

A Lidar-based Road Environment Element Perception Method

Technical Field

The invention belongs to the technical field of computer vision and artificial intelligence, and relates to a lidar-based road environment element perception method.

Background

Road environment perception is a key enabling technology for autonomous driving, one of the core research topics in computer vision and artificial intelligence, and it has broad application prospects.

In the field of computer vision, because the road environment is complex and changes in real time, multiple datasets usually have to be collected from different viewpoints and at different moments, each containing a large amount of data and real-time features. Analyzing and processing them to perceive real-time road conditions enables important tasks such as real-time vehicle control, path planning, and obstacle avoidance. At present, the scene data used in much road environment perception research is relatively simple, containing only a few obstacles; perception methods built for this setting often give good results but cannot cope with complex scenes containing many obstacle elements. In addition, the perception task addressed is usually single-purpose: one kind of scene data suits only one class of perception task, and a single kind of scene data cannot serve the perception operations of multiple tasks.

Therefore, point cloud data acquired by vehicle-mounted lidar is taken as the research object, and several key technologies of road environment perception for unmanned vehicles are studied: ground segmentation, obstacle segmentation and recognition, road boundary detection, and passable area detection. Realizing these key technologies together completes the road environment perception task, remedying the shortcomings of current environment perception research, namely data containing few obstacles and a single perception task. This research on lidar environment perception accomplishes the detection and perception of an unmanned vehicle's surroundings, which is of great significance for subsequently analyzing real-time road conditions and realizing vehicle control, path planning, and other operations.

Summary of the Invention

The purpose of the present invention is to provide a lidar-based road environment element perception method that overcomes the inability of the prior art to handle multiple perception tasks on one kind of scene data, and that perceives road environment elements efficiently in real time.

The technical solution adopted by the present invention is a lidar-based road environment element perception method implemented according to the following steps:

Step 1, downsample the raw lidar point cloud, extract the region of interest to obtain a reduced point cloud, and segment out the ground data. The specific process is:

1.1) Apply a voxel-grid-based downsampling algorithm to reduce the data;

1.2) Eliminate discrete points and distant useless points to obtain the region of interest for subsequent processing;

1.3) Segment the ground data out of the reduced data obtained in steps 1.1) and 1.2). Building on the RANSAC plane detection algorithm, calibrate the data segment by segment before segmenting the ground, so that ground extraction generalizes to complex scenes;

Step 2, perform obstacle segmentation and obstacle recognition:

2.1) Apply a voxelization-based DBSCAN clustering algorithm to segment obstacles from the non-ground data;

2.2) Classify and identify the obstacles based on multiple features;

Step 3, perform road boundary detection and passable area analysis:

3.1) Design a road boundary extraction method suited to vehicle-mounted lidar data based on elevation jumps of the data points; after extracting the road boundary points, screen them with prior knowledge and fit the road boundary lines by least squares;

3.2) Mark obstacle-free cells in a grid to detect the passable area in the lidar road scene.

The beneficial effects of the invention are that the algorithm is simple, the data error is small, and the method is safe and reliable; it enriches the methodology of artificial intelligence and computer vision, supports the development of machine environment perception, and offers an option for road environment perception, a foundational technology of autonomous driving.

Brief Description of the Drawings

Figure 1a shows the raw input data of step 1 of the method embodiment of the present invention;

Figure 1b shows the data of step 1 after downsampling and region-of-interest extraction;

Figure 1c shows the segmented scene ground data obtained in step 1;

Figure 2a shows the raw non-ground point cloud input to step 2;

Figure 2b shows the data after obstacle segmentation in step 2;

Figure 2c shows the data after obstacle classification and recognition in step 2;

Figure 3a shows the road boundary detection result obtained in step 3;

Figure 3b shows the passable area analysis result obtained in step 3.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The lidar-based road environment element perception method of the present invention is implemented according to the following steps:

Step 1: as shown in Figure 1a, downsample the raw lidar point cloud and extract the region of interest to obtain the reduced point cloud shown in Figure 1b, then segment the ground data to obtain the result shown in Figure 1c. The specific process is:

1.1) Apply a voxel-grid-based downsampling algorithm to reduce the data.

The point cloud acquired by lidar is a set of scattered, unordered points. Voxel-grid downsampling needs no complex topology, is simple to implement, and is algorithmically efficient. Choose the three edge lengths of the grid cell according to the density of the point cloud, split the cloud into small voxel cells, collect all points contained in each cell, and compute their centroid as the representative of all points in that cell; the centroids of all cells together form the new data, completing the downsampling.

1.1.1) Read the raw lidar point cloud (as shown in Figure 1a);

1.1.2) Compute the maximum and minimum points of the cloud, i.e., the upper-right point maxPoint and lower-left point minPoint of the cloud's bounding box, and from them the box's length Lx, width Ly, and height Lz;

1.1.3) Set the voxel cell's length Vx, width Vy, and height Vz; partition the cloud into a voxel grid of m×n×s cells, where m=[Lx/Vx], n=[Ly/Vy], s=[Lz/Vz], and the symbol "[]" denotes rounding, ensuring integer cell counts;

1.1.4) Pick an arbitrary point p; the grid cell corresponding to p is given by

I_index_x = [(p.x - minPoint.x) / Vx]
I_index_y = [(p.y - minPoint.y) / Vy]
I_index_z = [(p.z - minPoint.z) / Vz]    (1)

The one-dimensional cell number Index to which point p belongs is:

Index = I_index_x * 1 + I_index_y * (m+1) + I_index_z * (m+1) * (n+1)    (2)

1.1.5) Traverse all points of the original cloud and insert the points whose cell number Index is the same into the same Index_vector;

1.1.6) Compute the centroid G(x, y, z) of the L points in each cell as the representative point to complete the downsampling; the calculation is:

G = ( (1/L) * Σ x_i, (1/L) * Σ y_i, (1/L) * Σ z_i ),  i = 1, ..., L    (3)
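To make the procedure concrete, the following is a minimal NumPy sketch of steps 1.1.1) through 1.1.6), using the same cell indices (1), cell numbering (2), and centroid formula (3). The function name and the (N, 3) array layout are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, vx: float, vy: float, vz: float) -> np.ndarray:
    """Replace the points of every voxel cell with their centroid (steps 1.1.1-1.1.6)."""
    min_pt = points.min(axis=0)                       # minPoint of the bounding box
    # Per-axis cell indices I_index_x/y/z, equation (1)
    idx = np.floor((points - min_pt) / np.array([vx, vy, vz])).astype(np.int64)
    m, n, _ = idx.max(axis=0) + 1
    # One-dimensional cell number, equation (2)
    index_1d = idx[:, 0] + idx[:, 1] * (m + 1) + idx[:, 2] * (m + 1) * (n + 1)
    order = np.argsort(index_1d)
    sorted_idx = index_1d[order]
    # Group boundaries between runs of equal cell numbers (the Index_vector grouping)
    starts = np.r_[0, np.flatnonzero(np.diff(sorted_idx)) + 1]
    counts = np.diff(np.r_[starts, len(sorted_idx)])
    # Per-cell centroids, equation (3)
    sums = np.add.reduceat(points[order], starts, axis=0)
    return sums / counts[:, None]
```

Choosing vx, vy, vz larger than the typical point spacing reduces the cloud roughly in proportion to the cell volume, which is the trade-off behind the density-based choice of edge lengths.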

1.2) Eliminate discrete points and distant useless points to obtain the region of interest for subsequent processing:

After analyzing the point cloud acquired by the vehicle-mounted lidar, and in order to eliminate discrete points and distant useless points and so obtain the region of interest for subsequent processing, this step proposes a region-of-interest extraction method based on point density statistics. Figure 1a is one frame of vehicle-mounted lidar point cloud data; the blank, roughly circular area at the center is where the lidar device sits. In the data, the vehicle's forward direction is the positive X axis, its right side the positive Y axis, and upward the positive Z axis. By counting how densely points fall on both sides of the forward direction, i.e., along the Y axis, and comparing against a preset threshold, each part is classified as belonging to the region of interest or not. The specific process is:

1.2.1) Project the downsampled point cloud onto the yoz plane;

1.2.2) Take the projected data from the minimum Y_min to the maximum Y_max along the Y axis and divide this range into M segments (M = 20 for the size of the test data);

1.2.3) Count the point density D_i of each segment and plot the density curve of the whole point cloud;

1.2.4) Compute each segment's share r of the total density by formula (4) and compare it with the preset threshold r0 (from the data and the corresponding density curves, r0 is preferably 2000). Judge from the leftmost segment: if r < r0, the segment is considered sparse and low in information and is deleted; continue comparing the next segment, and stop this pass once r >= r0 occurs. Then judge from the rightmost segment: if r < r0, delete the segment, ending the judgment when r >= r0,

r = D_i / Σ D_j,  j = 1, ..., M    (4)

1.2.5) For the vehicle's direction of travel, take 30 m ahead of and behind the lidar device as the region of interest along the X axis and divide the region with a pass-through filter, obtaining the downsampled region of interest as the final reduced scene data, with the result shown in Figure 1b.
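A sketch of the density-based region-of-interest extraction of steps 1.2.1) through 1.2.5) could look as follows. Since the patent gives its threshold only as a preferred value read off the density curves, the min_ratio parameter here is an assumed stand-in for that threshold, and the function name is illustrative.

```python
import numpy as np

def roi_by_density(points: np.ndarray, m_segments: int = 20,
                   min_ratio: float = 0.01, x_range: float = 30.0) -> np.ndarray:
    """Trim sparse Y-side segments, then keep +/- x_range meters along X."""
    y = points[:, 1]
    edges = np.linspace(y.min(), y.max(), m_segments + 1)
    counts, _ = np.histogram(y, bins=edges)
    ratios = counts / counts.sum()                 # r of equation (4)
    keep = np.ones(m_segments, dtype=bool)
    for i in range(m_segments):                    # judge from the leftmost segment
        if ratios[i] >= min_ratio:
            break
        keep[i] = False
    for i in range(m_segments - 1, -1, -1):        # then from the rightmost segment
        if ratios[i] >= min_ratio:
            break
        keep[i] = False
    seg = np.clip(np.digitize(y, edges) - 1, 0, m_segments - 1)
    # Pass-through filter along X for the forward/backward 30 m window
    mask = keep[seg] & (np.abs(points[:, 0]) <= x_range)
    return points[mask]
```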

1.3) Segment the ground data out of the reduced data obtained in steps 1.1) and 1.2) (as shown in Figure 1b). Building on the RANSAC plane detection algorithm, calibrate the data segment by segment before segmenting the ground, so that ground extraction generalizes to complex scenes. The specific process is:

1.3.1) Compute the minimum min_x and maximum max_x along the X axis (the direction of travel) and divide the reduced point cloud along X into N sub-cloud segments of 1 m length, where N = ROUNDUP(max_x - min_x) and ROUNDUP(x) is the round-up function;

1.3.2) For each sub-cloud segment, detect its ground part with the RANSAC plane detection algorithm; the general equation of the fitted plane is:

Ax + By + Cz + D = 0, where A, B, C, D are known constants and A, B, C are not all zero; this equation gives the fitted plane's normal vector V = (A, B, C). In addition, when lidar data is collected the Z axis of the radar coordinate system stays vertical, so the vertical vector is Q = (0, 0, 1). The calibration matrix between the fitted ground plane's normal vector V and the vertical vector Q, called the rotation matrix T, is computed, and the obtained rotation matrix T is used to calibrate this sub-cloud segment. The specific calibration is as follows:

Given that the vector before calibration is V and the vector after calibration is Q:

V · Q = |V||Q| cos θ    (5)

Then the angle between V and Q is:

θ = arccos( (V · Q) / (|V| |Q|) )    (6)

Suppose the vector V before calibration is (a1, a2, a3) and the vector Q after calibration is (b1, b2, b3) (this instantiates the formula: one plays the formal parameter, the other the actual parameter); the rotation axis C(c1, c2, c3) is obtained from the definition of the vector cross product:

c1 = a2*b3 - a3*b2,  c2 = a3*b1 - a1*b3,  c3 = a1*b2 - a2*b1    (7)

Let the rotation axis C correspond to a unit vector k(kx, ky, kz). Using the rotation angle θ and the unit vector k, by Rodrigues' rotation formula an arbitrary vector u in space, rotated by angle θ about the unit vector k, becomes the vector w:

w = u cos θ + (k · u) k (1 - cos θ) + (k × u) sin θ    (8)

from which the corresponding rotation matrix T is obtained as:

T = I cos θ + (1 - cos θ) k kᵀ + sin θ [k]×    (9)

where I is the 3×3 identity matrix, k kᵀ is the outer product of k with itself, and [k]× is the skew-symmetric cross-product matrix of k.
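Equations (5) through (9) amount to a few lines of linear algebra; a minimal sketch is given below. The anti-parallel degenerate case is omitted, and the function name is an assumption.

```python
import numpy as np

def rotation_to_vertical(v, q=(0.0, 0.0, 1.0)) -> np.ndarray:
    """Rotation matrix T aligning the plane normal V with the vertical Q, per (5)-(9)."""
    v = np.asarray(v, dtype=float)
    q = np.asarray(q, dtype=float)
    v = v / np.linalg.norm(v)
    q = q / np.linalg.norm(q)
    cos_t = float(np.clip(np.dot(v, q), -1.0, 1.0))   # cos(theta), equation (6)
    axis = np.cross(v, q)                             # rotation axis, equation (7)
    s = np.linalg.norm(axis)                          # |v x q| = sin(theta)
    if s < 1e-9:
        return np.eye(3)   # normal already vertical; anti-parallel case omitted here
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])                # skew-symmetric matrix [k]x
    # Rodrigues' formula in matrix form, equation (9)
    return np.eye(3) + s * K + (1.0 - cos_t) * (K @ K)
```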

1.3.3) Ground segmentation of the sub-cloud segment based on the RANSAC plane detection algorithm; the specific steps are:

1.3.3.1) Input the calibrated sub-cloud CloudPoint, give the plane model, and set the distance threshold t (in this step t is set to 0.1 m from empirical data);

1.3.3.2) Randomly sample three points of the cloud that do not lie on one straight line and solve the plane model equation determined by the three random sample points;

1.3.3.3) Compute the distance dis from the cloud's other points to this plane and compare it with the set distance threshold t, obtaining the inlier set IncloudSet of this plane model;

1.3.3.4) Count the inliers of the plane model's IncloudSet, compare the count with the current maximum, and keep the larger of the two;

1.3.3.5) Re-estimate a new plane model from the retained largest inlier set and output the inliers as the ground points of this computation;

1.3.3.6) Resample and repeat the steps above; when the iteration count reaches the maximum number of iterations, exit the loop and select the optimal plane model of the run together with the output ground points of the largest inlier count; the ground points obtained at that moment are the ground data detected in this cloud segment.

1.3.4) Combine the ground data detected in all sub-cloud segments to obtain the final ground data, as shown in Figure 1c, realizing the segmentation of ground and non-ground data.
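The segment-wise ground extraction of steps 1.3.1) through 1.3.4) can be sketched as follows. The per-segment calibration by the rotation matrix T of step 1.3.2) is omitted here for brevity, and the fixed iteration count and function names are assumptions.

```python
import numpy as np

def ransac_plane(points: np.ndarray, t: float = 0.1, iters: int = 100) -> np.ndarray:
    """Boolean inlier mask of the dominant plane (steps 1.3.3.1-1.3.3.6)."""
    best_mask = np.zeros(len(points), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        nn = np.linalg.norm(normal)
        if nn < 1e-9:                       # the three samples are collinear; resample
            continue
        normal /= nn
        dis = np.abs((points - p1) @ normal)    # point-to-plane distances
        mask = dis < t
        if mask.sum() > best_mask.sum():        # keep the model with the most inliers
            best_mask = mask
    return best_mask

def segment_ground(cloud: np.ndarray, seg_len: float = 1.0) -> np.ndarray:
    """Split the cloud into 1 m slices along X and run RANSAC per slice (step 1.3.1)."""
    x = cloud[:, 0]
    ground = []
    for x0 in np.arange(x.min(), x.max(), seg_len):
        sub = cloud[(x >= x0) & (x < x0 + seg_len)]
        if len(sub) >= 3:
            ground.append(sub[ransac_plane(sub)])
    return np.vstack(ground) if ground else np.empty((0, 3))
```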

Step 2, perform obstacle segmentation and obstacle recognition.

2.1) Apply a voxelization-based DBSCAN clustering algorithm to segment obstacles from the non-ground data;

2.1.1) As shown in Figure 2a, take the non-ground lidar point cloud scene as input and partition it into voxel grid cells, setting the cell edge length d to 15 cm. Compute the maxima and minima of the cloud along the X, Y, and Z axes and, with the set edge length d, compute the number of layers m, n, s along each axis of the partition (these m, n, s need not be distinguished from the letters used earlier, since this is the same operation applied to a different data subject). Taking the X axis as an example, with min_X the minimum X coordinate and max_X the maximum X coordinate (likewise not to be distinguished from the earlier letters):

m = [(max_X - min_X) / d]    (10)

The Y and Z axes are handled the same way as the X axis above; after these steps the scene data is partitioned into voxel grid cells of edge length d;

2.1.2) Count the points contained in every cell and record all non-empty cells, labeling them sensibly so that the overall structure of the cells matches the position information of the original data and the cells in the neighborhood around any given cell match the original data;

According to the preset parameter threshold MinPts (set to 20, based on the point cloud data used and experimental experience), judge each non-empty cell: if the number of points contained in the cell is greater than MinPts, set the cell as a core cell and its points as core points; if the number of points in the cell is less than MinPts, a secondary judgment is needed;

Run an ε-neighborhood search over the points of such a cell and judge whether the neighborhood contains a neighbor lying in a core cell. If such a neighbor exists, this cell is also classed as a core cell and joins the subsequent computation, and its other points need no further judgment. If no neighbor exists, select the cell's other points and judge them in turn; if after all points are judged no neighbor lies in another core cell, directly judge the points of this cell to be noise points;

2.1.3) Compute, for each non-noise cell of 2.1.2) containing R points, its centroid Q:

Q = ( (1/R) * Σ x_i, (1/R) * Σ y_i, (1/R) * Σ z_i ),  i = 1, ..., R    (11)

2.1.4) Define the set of computed centroid points as the core point set S and randomly pick any core point P from the set. If at least MinPts points lie within radius distance ε of P, then P is a core point; if not, P is a noise point. Compute in turn the other centroid points contained in its ε-neighborhood and add them to the candidate set E. Open a cluster and take P and the other centroid points of its neighborhood as members of the same cluster; the cells they represent, and the points those cells contain, then also belong to that cluster, and all points of the cluster are given one label to distinguish it from other clusters. Then take a centroid point from the candidate set E, likewise compute the centroid points contained in its ε-neighborhood and assign them to the cluster, until E contains no element and the cluster is provisionally complete. Afterwards remove the already-determined core point objects from S, pick another point from it arbitrarily, and continue the operation above until S is empty;

2.1.5) For the ε-neighborhood point searches of step 2.1.4), build a KD-Tree to speed up the nearest-neighbor search:

2.1.5.1) Judge whether the input dataset is empty; if it is, return an empty KD-Tree, otherwise perform the following operations;

2.1.5.2) Node generation, in two main parts: determining the split field and determining Node-data. First compute the variance of all descriptor data (feature vectors) along each dimension and take the dimension with the largest value as the value of the split field; then sort the dataset by the determined split field and take the point in the exact middle as Node-data;

2.1.5.3) Partition the left subspace and the right subspace according to the determined Node-data;

2.1.5.4) Build the KD-Tree recursively;

2.1.6) Select the smaller clusters F of the cluster aggregation (clusters with fewer than 100 points count as smaller clusters) and run the ε-neighborhood search for every point in the cluster, judging whether the neighborhood contains points marked with another cluster's label. If such points exist, merge this cluster with the detected cluster; if none exist for any point, keep it as a cluster on its own, until all smaller clusters have been judged; then store the points marked with each cluster's label. Finally, traverse all points, output each cluster by its label, and configure different colors to tell the clusters apart;

At this point the segmentation result of the non-ground obstacles is obtained, shown as independent clusters in different colors in Figure 2b.
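The following condensed sketch shows the voxelized DBSCAN of steps 2.1.1) through 2.1.4), clustering cell centroids instead of raw points and using a KD-tree (SciPy's cKDTree, in the spirit of step 2.1.5) for the ε-neighborhood queries. The secondary judgment for sparse cells and the small-cluster merging of step 2.1.6) are omitted; d = 15 cm and MinPts = 20 follow the text, while eps is an assumed value.

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_dbscan(points: np.ndarray, d: float = 0.15, eps: float = 0.3,
                 min_pts: int = 20) -> np.ndarray:
    """Cluster voxel centroids with DBSCAN; return one label per point (-1 = noise)."""
    cell = np.floor((points - points.min(axis=0)) / d).astype(np.int64)
    _, inv, counts = np.unique(cell, axis=0, return_inverse=True, return_counts=True)
    n_cells = len(counts)
    centroids = np.zeros((n_cells, 3))                 # per-cell centroids, equation (11)
    np.add.at(centroids, inv, points)
    centroids /= counts[:, None]
    tree = cKDTree(centroids)
    labels = np.full(n_cells, -1)
    core = counts >= min_pts                           # core cells by occupancy
    cluster = 0
    for i in np.flatnonzero(core):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        frontier = [i]
        while frontier:                                # grow through eps-neighborhoods
            j = frontier.pop()
            for nb in tree.query_ball_point(centroids[j], eps):
                if labels[nb] == -1:
                    labels[nb] = cluster
                    if core[nb]:
                        frontier.append(nb)
        cluster += 1
    return labels[inv]                                 # map cell labels back to points
```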

2.2) Classify and identify the obstacles based on multiple features.

2.2.1) Extract the following six types of features from the clusters obtained by the cluster segmentation of step 2.1):

Feature f1 is the length, width, and height information of the bounding box;

Feature f2 is the number of points the data contains;

Feature f3 is the eigenvalue information of the three-dimensional covariance matrix describing the segmented cluster data. For a three-dimensional point cloud, the covariance matrix H is expressed as:

H = | cov(x,x)  cov(x,y)  cov(x,z) |
    | cov(y,x)  cov(y,y)  cov(y,z) |    (12)
    | cov(z,x)  cov(z,y)  cov(z,z) |

In formula (12), taking cov(x, y) as an example:

cov(x, y) = E{[x - E(x)][y - E(y)]}    (13)

In formula (13), taking E(x) as an example, E(x) is the expectation or mean of x, expressed as E(x) = (1/j) * Σ x_i, i = 1, ..., j.

Assume the segmented cluster data contains j points (x1, y1, z1) ... (xj, yj, zj); then any point (xi, yi, zi) is expressed as follows:

(xi - E(x), yi - E(y), zi - E(z))    (14)

Then all eigenvalues λi of the covariance matrix are obtained according to formula (15):

|H - λE| = 0    (15)

Feature f4 describes the distance information between the segmented cluster data and the unmanned vehicle's lidar device. Assuming the segmented cluster data contains T points, formula (16) gives the mean coordinates of the points within the cluster, which serve as the position coordinates of the obstacle cluster;

x̄ = (1/T) * Σ x_i,  ȳ = (1/T) * Σ y_i,  z̄ = (1/T) * Σ z_i,  i = 1, ..., T    (16)

Feature f5 describes the dominant direction trend of the segmented cluster data. Different kinds of obstacles have differently distributed point clouds. From the maximum and minimum coordinate values of the cluster along each axis, and the elevation, width, and length obtained with the AABB bounding box, combinations such as the cluster's footprint area, length-to-height ratio, and width-to-height ratio determine the obstacle's dominant direction trend;

Feature f6 is the two-view shape contour information describing the segmented cluster data. By analyzing the shape contour information of the cluster's left view and front view, the kind of obstacle the cluster's point cloud represents can be judged roughly.
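Features f1 through f5 reduce to a handful of array operations; the sketch below computes them for one cluster (the two-view contour of f6 is omitted). The dictionary keys and the ratio choices under f5 are illustrative assumptions.

```python
import numpy as np

def cluster_features(pts: np.ndarray) -> dict:
    """Compute the f1-f5 descriptors of one cluster (f6's two-view contour omitted)."""
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    size = maxs - mins                              # f1: AABB length, width, height
    H = np.cov(pts.T)                               # f3: 3x3 covariance, equation (12)
    eigvals = np.linalg.eigvalsh(H)                 # its eigenvalues, equation (15)
    centroid = pts.mean(axis=0)                     # mean position, equation (16)
    return {
        "f1_bbox": size,
        "f2_count": len(pts),                       # f2: number of points
        "f3_eigvals": eigvals,
        "f4_distance": np.linalg.norm(centroid),    # f4: distance to the lidar origin
        "f5_ratios": (size[0] * size[1],            # footprint area
                      size[2] / max(size[0], 1e-9), # height-to-length ratio
                      size[2] / max(size[1], 1e-9)),  # height-to-width ratio
    }
```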

2.2.2) Obstacle recognition based on the JointBoost classification method.

2.2.2.1) Of the six feature types extracted in step 2.2.1), take the five types other than feature f4 as the features required by the JointBoost classifier, and extract these five types;

2.2.2.2) Select a suitable sample dataset, mark each sample with the label of its class, and train the JointBoost classifier; then classify and recognize the test data with the trained classifier to obtain the initial classification result.

JointBoost realizes multi-class classification much as multi-class SVM does, by converting the problem into a combination of several binary classifications. During training, m pairs of training point cloud feature vectors and corresponding class labels (zi, ci) are input. In each training iteration, one weak classifier hm(zi, ci) is trained for each binary classification, the shared features of multiple classes are found among the binary classifications through the minimum loss function, and the weak classifiers are combined according to the shared features into the final strong classifier Hm(zi, ci). During testing, the feature vector of the point cloud cluster to be recognized is input; after classification by the strong classifier, the probability values of all possible classes of the cluster are output, and the class with the largest probability value is the class determined by the feature vector input for that point cloud cluster.

2.2.2.3) Because the classifier's result may contain errors, after the class is initially determined, other feature information contained in the road scene, such as spatial feature information, is combined to optimize the classification result obtained by the classifier and arrive at the cluster's final recognition result.

The specific optimization process: first collect the classification probability result set U that the JointBoost classifier gives the cluster and sort the probabilities in descending order as U1, U2, U3, ..., Un. The class represented by U1 can with high probability determine the class the cluster belongs to, but the possibility of error remains, so a secondary discrimination is needed. Then, according to the spatial feature information of obstacles in the actual road scene, feature f4, the distance information between the cluster and the unmanned vehicle's lidar device, is used to verify the classification result.

When the unmanned vehicle drives on the road, some basic information about the road, such as the road width, is obtained in advance according to the actual situation. The road width, combined with the distance information from the cluster to the radar device, provides the secondary discrimination of the cluster classifier's recognition result.

The mean distances of the cluster from the origin along the three coordinate directions are obtained by formula (16). From the experimental data, the X axis is the direction the road extends and the Y axis is the road width direction. Let the road width be W; then an obstacle cluster's distance from the origin along the road width direction is the mean value ȳ of the y coordinates of all the cluster's points. Comparing the two serves as the decision condition for determining a piece of class information. The decision rule is: if ȳ > W, the obstacle is already far from the road area and the probability that it is an obstacle of the vehicle class is low; if ȳ ≤ W, the probability that the obstacle cluster belongs to other obstacle classes such as buildings is low;

According to the probability set obtained from the JointBoost classification and the secondary discrimination rule, the final classification and recognition result is determined as follows:

If ȳ ≤ W: if the class represented by the largest probability value is the vehicle class, the point cloud cluster is a vehicle-class obstacle; if it is the signpost, tree, or pedestrian class, the cluster is a signpost, tree, or pedestrian belonging to the road; if it is the building class, this recognition may be wrong, so classify again, and if the result is the same, take it as the recognition result;

If ȳ > W: if the class represented by the largest probability value is the building class, the cluster is a building-class obstacle; if it is the signpost, tree, or pedestrian class, the cluster is a signpost, tree, or pedestrian belonging to the road; if it is the vehicle class, this recognition may be wrong, so classify again, and if the result is the same, take it as the recognition result.

Steps 2.2.1) and 2.2.2) above yield the obstacle classification and recognition result, as shown in Figure 2c.
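The secondary discrimination of step 2.2.2.3) is essentially a lookup on the classifier's top class combined with the ȳ-versus-W comparison; a minimal sketch follows. The class names and the "recheck" marker are illustrative assumptions, not the patent's labels.

```python
def refine_label(probs: dict, y_mean: float, road_width: float) -> str:
    """Second-pass check of the JointBoost result using the f4 distance cue.

    probs: class name -> probability from the classifier; y_mean: mean |y| of
    the cluster (from equation 16); road_width: the known width W.
    """
    best = max(probs, key=probs.get)          # class of the largest probability U1
    on_road = y_mean <= road_width
    if on_road and best == "building":
        return "recheck"    # a building inside the road region is suspect
    if not on_road and best == "vehicle":
        return "recheck"    # a vehicle far outside the road region is suspect
    return best
```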

Step 3, perform road boundary detection and passable area analysis.

3.1) Design a road boundary extraction method suited to vehicle-mounted lidar data based on elevation jumps of the data points; after extracting the road boundary points, screen them with prior knowledge and fit the road boundary lines by least squares.

3.1.1) Collect the Z coordinate values of all points in the lidar point cloud and determine the height threshold G of the road boundary from the road environment. Then, using the sliding-window idea from images, find within each window the data points whose Z coordinates jump (i.e., whose change in Z exceeds G) and mark them; these marked jump points are the candidate points of the road boundary. As for determining the height threshold G: in practice the height differences among curbs, road vehicles, green belts, buildings, and so on are large, so G must be judged per road scene; from experimental experience, G is about 1 m for straight roads and about 0.5 m for curved road scene regions. From the elevation distribution and the determined threshold G, the sliding-window idea locates the elevation-jump regions in the data and hence the jump points.

First cross-cut the lidar point cloud along the direction the road extends (the X-axis direction), cutting the data into several sub-clouds 0.5 m wide along X; then take out one of them and perform the following operations:

Operation 1: Rasterize the sub-cloud segment, dividing the point cloud data along the Y axis into grid cells 0.5 m in both length and width, recorded in the cell sequence F;

Operation 2: Take cells Fi from the cell sequence F in order, compute the maximum elevation H of each cell, and compare it with the determined height threshold G: if H < G, the cell is judged to be near-ground data and no road boundary candidates are extracted from it; if H > G, the cell may contain boundary candidates, so record the cell indices in order, forming a new cell sequence K from which candidate points must be extracted;

Operation 3: Use the sliding-window idea to extract the road boundary candidates contained in each cell of the sequence K, processing the cells from the left and from the right simultaneously and traversing all points within each cell. For a cell entered from the left, the point with the smallest Y coordinate, on the premise of the smallest Z coordinate, is taken as the candidate; for a cell entered from the right, the point with the largest Y coordinate, on the premise of the smallest Z coordinate, is taken as the candidate (see the sketch after these operations);

Operation 4: Select another sub-cloud segment and perform the operations above to find its road boundary candidates, until all sub-clouds have been processed and all road boundary candidates are obtained.
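Operations 1 through 4 can be sketched as below. This simplified version emulates both sweep directions by emitting the leftmost and rightmost lowest points of every above-threshold cell; the slice and cell sizes follow the text, and the function name is an assumption.

```python
import numpy as np

def boundary_candidates(cloud: np.ndarray, slice_len: float = 0.5,
                        cell: float = 0.5, g: float = 1.0) -> np.ndarray:
    """Collect road-boundary candidate points from elevation jumps.

    g is the height threshold G (about 1 m for straight roads in the text).
    """
    cands = []
    x = cloud[:, 0]
    for x0 in np.arange(x.min(), x.max(), slice_len):      # 0.5 m cross-cuts along X
        sub = cloud[(x >= x0) & (x < x0 + slice_len)]
        if len(sub) == 0:
            continue
        y = sub[:, 1]
        for y0 in np.arange(y.min(), y.max(), cell):       # 0.5 m cells along Y
            cell_pts = sub[(y >= y0) & (y < y0 + cell)]
            if len(cell_pts) == 0 or cell_pts[:, 2].max() < g:
                continue                                   # near-ground cell, skip
            low = cell_pts[cell_pts[:, 2] == cell_pts[:, 2].min()]
            cands.append(low[np.argmin(low[:, 1])])        # left-sweep candidate
            cands.append(low[np.argmax(low[:, 1])])        # right-sweep candidate
    return np.array(cands)
```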

3.1.2) Screen the road boundary candidates, removing candidates from non-road-edge regions.

Use the ground data obtained in step 1 to extract effective prior knowledge: treat the extracted ground data as the road surface and take the mean Y coordinate of the points in the edge regions on the two sides of the road surface, along the direction the road extends, as a constraint, giving two constraint lines. Keep candidates within a 0.5 m distance on either side of the two lines and remove the other candidates falling outside the constraint-line regions, finally obtaining two point sequences of road boundary points; with the points of each sequence as the final road boundary points, fit the road boundary lines.

For curved road scenes, the constraint lines are obtained with the segmented approach of the ground segmentation of step 1.3): after step 1 extracts the preliminary ground data of each segment, the approximate width of that segment of data is recorded as its constraint condition and used to screen the jump points obtained for each sub-cloud segment in the candidate extraction stage, giving the correct road boundary points of that sub-cloud segment; finally, the correct boundary points of all sub-cloud segments are combined into the complete boundary points of the scene;

3.1.3) Fit the road boundary by least squares: fit the target data by determining a function model, and compute the model parameters that minimize the sum of squared errors against the target data, thereby obtaining the parameter values of the function model.

Suppose the n target data to fit are (x1, y1), (x2, y2), ..., (xn, yn); the equation of the fitted curve model is:

y = A*x^2 + B*x + C    (17)

where A, B, and C are arbitrary real numbers; the expression for solving the minimum sum of squared errors is:

S = Σ (y_i - (A*x_i^2 + B*x_i + C))^2,  i = 1, ..., n    (18)

When the error is smallest, A, B, and C are the best estimated parameters, so taking partial derivatives with respect to the three parameters A, B, and C gives:

∂S/∂A = -2 Σ x_i^2 (y_i - A*x_i^2 - B*x_i - C) = 0
∂S/∂B = -2 Σ x_i (y_i - A*x_i^2 - B*x_i - C) = 0    (19)
∂S/∂C = -2 Σ (y_i - A*x_i^2 - B*x_i - C) = 0

which yields the system of equations in the parameters A, B, and C, expressed in matrix form as:

| Σ x_i^4  Σ x_i^3  Σ x_i^2 |   | A |   | Σ x_i^2 y_i |
| Σ x_i^3  Σ x_i^2  Σ x_i   | * | B | = | Σ x_i y_i   |    (20)
| Σ x_i^2  Σ x_i    n       |   | C |   | Σ y_i       |

Solving this matrix equation gives the parameter values of A, B, and C; the function model is then determined, and the curve the model represents is the curve fitted to the target data. To perform the fitting, the X and Y coordinate values of the screened candidate points are used as input to fit the road boundary lines, giving the road boundary detection result shown in Figure 3a.
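Since equation (20) is exactly the normal-equation system of a quadratic polynomial fit, NumPy's polyfit can stand in for the explicit matrix solve; a short sketch:

```python
import numpy as np

def fit_boundary(candidates: np.ndarray) -> np.ndarray:
    """Fit y = A*x^2 + B*x + C (equation 17) to the screened boundary points.

    candidates: array whose first two columns are the X and Y coordinates
    of the screened candidate points.
    """
    x, y = candidates[:, 0], candidates[:, 1]
    return np.polyfit(x, y, deg=2)   # returns [A, B, C], highest power first
```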

3.2) Mark obstacle-free cells in a grid to detect the passable area in the lidar road scene;

3.2.1) Take the lidar point cloud scene with the ground data removed as input. Suppose the two boundary lines obtained by road boundary detection are l1 and l2; the region between the two curves is then the road region. Since the lidar collects real road-condition information in real time, the other vehicles, pedestrians, billboards, and so on on the road are obstacles relative to the radar vehicle; the unmanned vehicle must avoid these obstacles so that no accident occurs, so the regions where obstacles sit are regions the radar vehicle cannot drive through. Cut away the non-road regions outside the two road boundary curves l1 and l2 and consider only the road region between the boundary lines. Rasterize the road region data: without considering elevation, divide the three-dimensional point cloud scene into a series of equally sized grid blocks, choosing regular quadrilaterals of 0.5 m extent in both the X and Y directions for the division;

3.2.2) After the road data is rasterized, judge each cell to determine whether it is a cell containing obstacle data points, classifying the type of the cell by counting whether point cloud data exists within it: a cell containing point cloud data is judged an obstacle cell; a cell containing no point cloud data is a passable-area cell;

3.2.3) Use the eight-neighborhood labeling method for the judgment, expanding the labels of the cells determined to be passable area.

First let the sequence of all cells be GridSeq and the sequence of cells determined to be passable area be PassSeq. Select any cell from GridSeq as a seed cell and, in a loop, examine the seed's eight adjacent cells: upper, lower, left, right, upper-left, upper-right, lower-left, and lower-right. If an examined cell meets the conditions of a passable-area cell, mark it with the passable-area label "1" and add the cell to the passable-area cell sequence PassSeq; if it does not, mark it "0" and remove the cell from the sequence GridSeq. Process in a loop until GridSeq is empty, at which point the passable-area cells of the data are preliminarily determined;
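A sketch of the rasterization of steps 3.2.1) through 3.2.3) follows, with the eight-neighborhood expansion written as a breadth-first flood fill. Unlike the text, which loops until GridSeq is empty, this sketch grows a single connected free region from one seed; the function name and the (min, max) range arguments are assumptions.

```python
import numpy as np
from collections import deque

def passable_grid(obstacles: np.ndarray, x_range, y_range,
                  cell: float = 0.5) -> np.ndarray:
    """Mark 0.5 m road cells as passable (True) via an 8-neighbor flood fill.

    obstacles: non-ground points inside the road region; x_range/y_range:
    (min, max) extents of the road area between the boundary lines l1 and l2.
    """
    nx = int(np.ceil((x_range[1] - x_range[0]) / cell))
    ny = int(np.ceil((y_range[1] - y_range[0]) / cell))
    occupied = np.zeros((nx, ny), dtype=bool)
    ix = ((obstacles[:, 0] - x_range[0]) / cell).astype(int).clip(0, nx - 1)
    iy = ((obstacles[:, 1] - y_range[0]) / cell).astype(int).clip(0, ny - 1)
    occupied[ix, iy] = True                    # any point in a cell -> obstacle cell
    passable = np.zeros_like(occupied)
    free = np.argwhere(~occupied)
    if len(free) == 0:
        return passable
    seed = tuple(free[0])                      # grow from one free seed cell
    passable[seed] = True
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):                  # eight-neighborhood expansion
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < nx and 0 <= nj < ny \
                        and not occupied[ni, nj] and not passable[ni, nj]:
                    passable[ni, nj] = True
                    queue.append((ni, nj))
    return passable
```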

3.2.4) According to the obstacle segmentation realized in step 2, find the obstacles on the two boundary lines, project them along the direction the road extends, and determine the contour shape of the distribution of the obstacle's point cloud data points.

From actual real-life scenes, the main contour shapes of the billboards beside the road and of extending branches are mostly of the two types "┌" or "┐". If the distribution shape of an obstacle's points on the road boundary matches these two types, the obstacle is determined to be a suspended obstacle under which the unmanned vehicle can pass, so the cells corresponding to obstacles determined to be passable are re-marked "green", indicating that these cells are passable-area cells;

3.2.5) Using features such as the obstacle position information and structure information obtained from obstacle segmentation and recognition, combined with the determined passable road area, complete the prediction and evaluation of the unmanned vehicle's reachable paths in this road scene.

The width of the unmanned vehicle is set to 1.7 m; to guarantee smooth passage without touching other vehicles or similar obstacles, 0.3 m of reserved space is added on each side of the unmanned vehicle, so the overall passable width must be greater than the distance L = 2.3 m. With the obstacle segmentation complete, determine the positions of the obstacles on the road surface, compute the volumes of these obstacles from their respective AABB bounding boxes, and, using the two detected boundary lines, obtain the widths of the passable regions between obstacles, or between an obstacle and a boundary line; comparing these with the determined minimum safe passing distance L shows whether the width can guarantee the unmanned vehicle's safe passage.
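The width check of step 3.2.5) is a one-line comparison against L = 1.7 m + 2 * 0.3 m = 2.3 m; a trivial sketch, with the function name as an assumption:

```python
def corridor_passable(gap_width: float, car_width: float = 1.7,
                      margin: float = 0.3) -> bool:
    """True if a free corridor exceeds the minimum safe width L = 1.7 + 2*0.3 = 2.3 m."""
    return gap_width > car_width + 2.0 * margin
```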

Claims (8)

1.一种基于激光雷达的道路环境要素感知方法,其特征在于,按照以下步骤实施:1. a road environment element perception method based on lidar, is characterized in that, is implemented according to the following steps: 步骤1,对原始的激光雷达的点云数据进行下采样、感兴趣区域的获取,获得精简点云数据以及对地面数据进行分割,得到结果,具体过程是:Step 1, down-sampling the original lidar point cloud data, acquiring the region of interest, obtaining the simplified point cloud data and segmenting the ground data, and obtaining the result. The specific process is: 1.1)实现基于体素栅格的下采样算法来进行数据的精简操作;1.1) Implement the downsampling algorithm based on the voxel grid to perform data reduction operations; 1.2)消除离散点和远距离无用点,获取后续处理的感兴趣区域;1.2) Eliminate discrete points and long-distance useless points, and obtain areas of interest for subsequent processing; 1.3)将经过步骤1.1)和步骤1.2)获得的精简数据分割出地面数据,在RANSAC平面检测算法基础上,对数据进行分段校准处理后再实现地面分割,以达到对复杂场景提取地面的普适性;1.3) Divide the simplified data obtained in steps 1.1) and 1.2) into ground data. Based on the RANSAC plane detection algorithm, the data is segmented and calibrated and then the ground is segmented, so as to achieve common ground extraction for complex scenes. suitability; 步骤2,实施障碍物分割与障碍物识别,Step 2, implement obstacle segmentation and obstacle recognition, 2.1)采用基于体素化的DBSCAN聚类算法,实现非地面数据的障碍物分割;2.1) Using voxel-based DBSCAN clustering algorithm to achieve obstacle segmentation of non-ground data; 2.2)基于多特征的障碍物分类识别;2.2) Multi-feature-based obstacle classification and recognition; 步骤3,进行道路边界检测与可通行区域分析,Step 3, carry out road boundary detection and passable area analysis, 3.1)利用数据点高程突变设计一种适合车载激光雷达数据的道路边界提取方法,提取出道路边界点后使用先验知识进行边界点的筛选,并采用最小二乘法进行道路边界线的拟合;3.1) Design a road boundary extraction method suitable for vehicle-mounted lidar data by using the elevation mutation of data points. After extracting road boundary points, use prior knowledge to screen the boundary points, and use the least squares method to fit the road boundary line; 3.2)利用栅格标记无障碍区域,实现激光雷达道路场景中可通行区域的检测,即成。3.2) Use the grid to mark the barrier-free area to realize the detection of the passable area in the lidar road scene, and that's it. 2.根据权利要求1所述的基于激光雷达的道路环境要素感知方法,其特征在于,所述的步骤1.1)中,具体过程是:2. The road environment element perception method based on lidar according to claim 1, is characterized in that, in described step 1.1), the concrete process is: 根据点云数据分布的稠密程度选择合适栅格单元的三个边长,将点云数据拆分成一个个的小体素栅格,统计包含在单元中的所有点,计算它们的质心作为整个栅格单元内全部点的代表,将所有栅格单元质心点共同组成新的数据,完成下采样,According to the density of the point cloud data distribution, select the three side lengths of the appropriate grid cell, split the point cloud data into small voxel grids, count all the points contained in the cell, and calculate their centroids as the whole The representative of all points in the grid unit, the centroid points of all grid units are combined to form new data, and the downsampling is completed. 
1.1.1) read the original lidar point cloud data;

1.1.2) compute the maximum and minimum points of the point cloud, i.e. the upper-right point maxPoint and the lower-left point minPoint of the point cloud bounding box, and from them the length Lx, width Ly and height Lz of the bounding box;

1.1.3) set the length Vx, width Vy and height Vz of the voxel grid cell; partition the point cloud into a voxel grid of m×n×s cells, where m=[Lx/Vx], n=[Ly/Vy], s=[Lz/Vz], and the symbol "[ ]" is the rounding operator, which keeps the number of grid cells an integer;

1.1.4) pick an arbitrary point p; the grid cell containing p is then:
Iindex_x = [(px − minPoint.x)/Vx], Iindex_y = [(py − minPoint.y)/Vy], Iindex_z = [(pz − minPoint.z)/Vz] (1)
The one-dimensional grid cell number Index of point p is:

Index = Iindex_x*1 + Iindex_y*(m+1) + Iindex_z*(m+1)*(n+1) (2)

1.1.5) traverse all points of the original cloud and insert points sharing the same grid cell number Index into the same Index_vector;

1.1.6) compute the centroid G(x, y, z) of the L points in each grid cell as the representative point, completing the down-sampling; the formula is:
G(x, y, z) = (1/L)(Σxi, Σyi, Σzi), the sums running over the L points of the cell (3)
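As a non-normative illustration of the down-sampling of steps 1.1.1)–1.1.6), a minimal Python sketch follows; the 0.1 m cell size, the NumPy data layout and all function names are assumptions rather than part of the claim:

    import numpy as np

    def voxel_downsample(points, vx=0.1, vy=0.1, vz=0.1):
        """points: (N, 3) array; returns one centroid per occupied voxel cell."""
        min_pt = points.min(axis=0)                    # lower-left point minPoint
        idx = np.floor((points - min_pt) / [vx, vy, vz]).astype(np.int64)
        m, n, _ = idx.max(axis=0) + 1
        # one-dimensional cell number, analogous to formula (2)
        flat = idx[:, 0] + idx[:, 1] * (m + 1) + idx[:, 2] * (m + 1) * (n + 1)
        order = np.argsort(flat)
        flat, points = flat[order], points[order]
        starts = np.r_[0, np.flatnonzero(np.diff(flat)) + 1]
        counts = np.diff(np.r_[starts, len(points)])
        # per-cell centroid as the representative point, as in formula (3)
        return np.add.reduceat(points, starts, axis=0) / counts[:, None]

    cloud = np.random.default_rng(0).random((10000, 3)) * 10.0
    print(voxel_downsample(cloud).shape)               # far fewer rows than 10000

The one-dimensional index mirrors formula (2); any consistent flattening of the voxel indices would serve equally well.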
3. The lidar-based road environment element perception method according to claim 2, characterized in that in step 1.2):

in the point cloud data obtained by the vehicle-mounted lidar, the blank, roughly circular region at the center is where the lidar device is located; the data takes the vehicle's forward direction as the positive X axis, the vehicle's right-hand direction as the positive Y axis, and the upward direction as the positive Z axis; by counting how densely points are acquired on the two sides of the forward direction, i.e. along the Y axis, and comparing the result with a set threshold, each part of the data is judged to belong to the region of interest or not; the specific process is:

1.2.1) project the down-sampled point cloud data onto the yoz plane;

1.2.2) take the projected data from the minimum value Ymin to the maximum value Ymax along the Y axis and divide the data in this range into M segments;

1.2.3) count the point density Di of each segment and draw the density curve of the whole point cloud;

1.2.4) compute by formula (4) the ratio r of each segment's density to the total density and compare it with the set threshold r0; judging from the leftmost segment: if r < r0, the segment is considered sparse and low in information and is deleted; continue with the next segment, and stop this pass as soon as r ≥ r0; then judge from the rightmost segment: while r < r0 the segment is deleted, and the judgment ends when r ≥ r0, where

r = Di / (D1 + D2 + … + DM) (4)
1.2.5) for the vehicle's forward direction, select the 30 m in front of and behind the lidar device as the region of interest along the X axis and crop it with pass-through filtering; the down-sampled region of interest is taken as the final simplified scene data.
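A compact, hypothetical sketch of the region-of-interest filtering of steps 1.2.1)–1.2.5): a histogram along Y plays the role of the projected density curve, and the threshold r0, the number of segments and the function name are assumed values:

    import numpy as np

    def roi_filter(points, m_segments=40, r0=0.01, x_range=30.0):
        """Trim sparse Y-side segments (steps 1.2.1-1.2.4), then pass-through
        filter 30 m ahead of and behind the sensor along X (step 1.2.5)."""
        y = points[:, 1]
        edges = np.linspace(y.min(), y.max(), m_segments + 1)
        ratios = np.histogram(y, bins=edges)[0] / len(points)   # formula (4)
        dense = np.flatnonzero(ratios >= r0)                    # segments to keep
        lo, hi = edges[dense.min()], edges[dense.max() + 1]
        mask = (y >= lo) & (y <= hi) & (np.abs(points[:, 0]) <= x_range)
        return points[mask]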
4. The lidar-based road environment element perception method according to claim 3, characterized in that the specific process of step 1.3) is:

1.3.1) compute the minimum min_x and the maximum max_x along the X axis, the direction of vehicle travel, and divide the simplified point cloud data along the X axis into N segments of sub-point-clouds of 1 m length, where N = ROUNDUP(max_x − min_x) and ROUNDUP(x) is the round-up function;

1.3.2) for each segment of sub-point-cloud data, detect the ground part with the RANSAC plane detection algorithm; the general equation of the fitted plane is

Ax + By + Cz + D = 0

where A, B, C, D are known constants and A, B, C are not all zero; this equation gives the normal vector V = (A, B, C) of the fitted plane; furthermore, when the lidar collects data the Z axis of the radar coordinate system stays vertical, so the vertical vector is Q = (0, 0, 1); the calibration matrix between the normal vector V of the fitted ground plane and the vertical vector Q, called the rotation matrix T, is computed and used to calibrate this segment of sub-point-cloud data; the calibration is as follows:

given that the vector before calibration is V and the vector after calibration is Q,

V · Q = |V||Q|cosθ (5)

so the angle between V and Q is:
θ = arccos( V·Q / (|V||Q|) ) (6)
Assume the vector V before calibration is (a1, a2, a3) and the vector Q after calibration is (b1, b2, b3); by the definition of the vector cross product, the rotation axis C(c1, c2, c3) is:
C = V × Q = (a2b3 − a3b2, a3b1 − a1b3, a1b2 − a2b1) (7)
Let the rotation axis C correspond to a unit vector k(kx, ky, kz); using the rotation angle θ and the unit vector k, by the Rodrigues rotation formula an arbitrary vector u in space, rotated by the angle θ about the unit vector k, yields the vector w:

w = u·cosθ + (k × u)·sinθ + k·(k · u)·(1 − cosθ) (8)

from which the corresponding rotation matrix T is obtained as:
T = [ cosθ + kx²(1−cosθ)        kx·ky(1−cosθ) − kz·sinθ     kx·kz(1−cosθ) + ky·sinθ
      ky·kx(1−cosθ) + kz·sinθ   cosθ + ky²(1−cosθ)          ky·kz(1−cosθ) − kx·sinθ
      kz·kx(1−cosθ) − ky·sinθ   kz·ky(1−cosθ) + kx·sinθ     cosθ + kz²(1−cosθ) ] (9)
1.3.3) perform the ground segmentation of the segment's sub-point-cloud data with the RANSAC plane detection algorithm; the specific steps are:

1.3.3.1) input the calibrated sub-point-cloud data CloudPoint, the given plane model and a set distance threshold t;

1.3.3.2) randomly sample three non-collinear points from the point cloud data and solve the plane model equation defined by the three random sample points;

1.3.3.3) compute the distance dis from every other point of the point cloud to this plane and compare it with the set distance threshold t, obtaining the inlier set IncloudSet of this plane model;

1.3.3.4) count the inliers in IncloudSet of this plane model, compare the count with the current maximum, and keep the larger of the two;

1.3.3.5) re-estimate a new plane model from the largest retained inlier set and output its inliers as the ground points of this computation;

1.3.3.6) resample and repeat the steps above; when the number of iterations reaches the maximum, exit the iteration loop and select the best plane model found during the iterations together with the largest output inlier set; the ground points obtained at this moment are the ground data detected in this segment of the point cloud;

1.3.4) combine the ground data detected in all segments of sub-point-cloud data into the final ground data, realizing the segmentation of ground and non-ground data.
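For illustration, a minimal sketch of the per-segment calibrate-then-segment ground extraction of steps 1.3.1)–1.3.4); the threshold, iteration count and sparse-segment guard are assumptions, and the rotation follows the Rodrigues matrix of formula (9):

    import numpy as np

    def fit_plane_ransac(pts, t=0.05, iters=200, rng=None):
        """Dominant plane of pts (N, 3): returns (unit normal, inlier mask)."""
        if rng is None:
            rng = np.random.default_rng(0)
        best_mask, best_n = None, None
        for _ in range(iters):
            p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
            n = np.cross(p2 - p1, p3 - p1)
            if np.linalg.norm(n) < 1e-9:               # collinear sample, resample
                continue
            n = n / np.linalg.norm(n)
            mask = np.abs((pts - p1) @ n) < t          # inliers within threshold t
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_n = mask, n
        return best_n, best_mask

    def calibration_matrix(v, q=np.array([0.0, 0.0, 1.0])):
        """Rodrigues rotation T turning plane normal v onto the vertical q."""
        v = v / np.linalg.norm(v)
        axis = np.cross(v, q)
        s, c = np.linalg.norm(axis), float(v @ q)      # sin and cos of theta
        if s < 1e-9:
            return np.eye(3)
        k = axis / s
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + s * K + (1 - c) * (K @ K)   # formula (9)

    def segment_ground(points, seg_len=1.0):
        """Per-1 m-segment calibrate-then-RANSAC ground extraction."""
        ground, x = [], points[:, 0]
        for x0 in np.arange(x.min(), x.max(), seg_len):
            seg = points[(x >= x0) & (x < x0 + seg_len)]
            if len(seg) < 50:                          # too sparse to fit a plane
                continue
            n, _ = fit_plane_ransac(seg)
            seg = seg @ calibration_matrix(n).T        # calibrate this segment
            _, inliers = fit_plane_ransac(seg)         # final ground detection
            ground.append(seg[inliers])
        return np.vstack(ground)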
5. The lidar-based road environment element perception method according to claim 4, characterized in that the specific process of step 2.1) is:

2.1.1) take the non-ground lidar point cloud scene data as input and divide it into voxel grid cells with a cell edge length d of 15 cm; compute the maximum and minimum values of the point cloud data along the X, Y and Z axes, and from the set edge length d obtain the numbers of cell layers m, n, s along each coordinate axis; taking the X axis as an example, with min_X the minimum and max_X the maximum coordinate along the X axis:
m = [(max_X − min_X) / d] (10)
the Y and Z axes are handled in the same way as the X axis above; after these steps the scene data is partitioned into voxel grid cells of edge length d;

2.1.2) count the points contained in every grid cell and record all non-empty cells; label these cells consistently so that their overall structure matches the position information of the original data, which makes the neighborhood of any given cell the same as in the original data;

according to the set parameter threshold MinPts, judge every non-empty grid cell: if the number of points contained in the cell is greater than MinPts, set the cell as a core cell and its points as core points; if the number of points contained in the cell is less than MinPts, a second judgment is needed:

perform an ε-neighborhood search on the points of this cell and judge whether the neighborhood contains a point lying in a core cell; if such a neighbor exists, this cell is also classified as a core cell and takes part in the subsequent computation, and the remaining points need not be judged; if no such neighbor exists, pick other points of the cell and judge them in the same way; if after all points have been judged no neighbor lies in another core cell, the points of this cell are directly judged to be noise points;

2.1.3) compute the centroid Q of the R points contained in each non-noise grid cell of step 2.1.2):
Q = (1/R)(Σxi, Σyi, Σzi), the sums running over the R points of the cell (11)
2.1.4) define the set of computed centroid points as the core point set S and randomly pick any core point P from it; if the radius-ε neighborhood of P contains no fewer than MinPts points, P is a core point, otherwise P is a noise point; compute in turn the other centroid points contained in its ε-neighborhood and add them to a candidate set E; create a cluster and place P and the other centroids in its neighborhood into this cluster, so that the grid cells they represent, and the points those cells contain, belong to the same cluster; give all points of this cluster the same label to distinguish it from other clusters; pick a centroid from the candidate set E, likewise compute the centroids contained in its ε-neighborhood and assign them to the cluster, until no element remains in E, which preliminarily completes this cluster; then remove the already determined core point objects from the core point set S, pick another point from it arbitrarily and continue the operation above until S is empty;

2.1.5) for the ε-neighborhood searches of step 2.1.4), build a KD-Tree to search for nearest neighbors:

2.1.5.1) judge whether the input data set is empty; if it is, return an empty KD-Tree, otherwise perform the following;

2.1.5.2) node generation, which consists of two parts, determining the split dimension and determining the Node-data: first compute the variance of all descriptor data in every dimension and choose the dimension with the largest variance as the value of the split field; then sort the data set by the determined split-field value and take the median point as the Node-data; according to the determined Node-data, divide the left and right subspaces; build the KD-Tree recursively;

2.1.6) select the smaller clusters F from the cluster aggregation, a smaller cluster being one with fewer than 100 points; for every point of such a cluster perform an ε-neighborhood search and judge whether the neighborhood contains a point labeled with another cluster's label; if it does, merge this cluster with the detected cluster; if no point does, keep it as a cluster of its own; repeat until every smaller cluster has been judged, then store the points carrying the cluster labels; finally traverse all points, output each cluster according to its label, and assign different colors to distinguish them;

at this point the segmentation result of the non-ground obstacles is obtained, shown as independent clusters in different colors.
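A rough, illustrative stand-in for the voxelized DBSCAN of steps 2.1.1)–2.1.6): the voxel centroids are clustered with a standard DBSCAN expansion over a KD-Tree (here SciPy's cKDTree), and the labels are propagated back to the raw points; eps, min_pts and the simplified core test are assumptions:

    import numpy as np
    from scipy.spatial import cKDTree

    def voxel_dbscan(points, d=0.15, eps=0.45, min_pts=4):
        """Cluster voxel centroids with DBSCAN, label raw points (-1 = noise)."""
        keys = np.floor((points - points.min(axis=0)) / d).astype(np.int64)
        cells, inv = np.unique(keys, axis=0, return_inverse=True)
        inv = inv.ravel()
        cents = np.zeros((len(cells), 3))
        np.add.at(cents, inv, points)
        cents /= np.bincount(inv)[:, None]         # per-cell centroid, cf. (11)
        tree = cKDTree(cents)                      # KD-Tree for eps-neighborhoods
        labels, cluster = np.full(len(cells), -1), 0
        for i in range(len(cells)):
            if labels[i] != -1 or len(tree.query_ball_point(cents[i], eps)) < min_pts:
                continue                           # already labeled, or not a core
            labels[i] = cluster
            stack = [i]
            while stack:                           # expand the cluster from its cores
                nbrs = tree.query_ball_point(cents[stack.pop()], eps)
                if len(nbrs) < min_pts:
                    continue                       # border centroid: keep, don't expand
                for nb in nbrs:
                    if labels[nb] == -1:
                        labels[nb] = cluster
                        stack.append(nb)
            cluster += 1
        return labels[inv]                         # cluster label per input point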
6. The lidar-based road environment element perception method according to claim 5, characterized in that the specific process of step 2.2) is:

2.2.1) extract the following six types of features from the clusters obtained by the clustering segmentation of step 2.1):

the f1 feature is the length, width and height of the bounding box;

the f2 feature is the number of points the data contains;

the f3 feature is the eigenvalue information of the three-dimensional covariance matrix describing the segmented cluster data; for a three-dimensional point cloud the covariance matrix H is expressed as:
H = [ cov(x,x)  cov(x,y)  cov(x,z)
      cov(y,x)  cov(y,y)  cov(y,z)
      cov(z,x)  cov(z,y)  cov(z,z) ] (12)
In formula (12), taking cov(x, y) as an example:

cov(x, y) = E{[x − E(x)][y − E(y)]} (13)

in which E(x) is the expectation, i.e. the mean, of x over the points of the cluster:

E(x) = (1/j)(x1 + x2 + … + xj)
Assume the segmented cluster data contains j points (x1, y1, z1) … (xj, yj, zj); any point (xi, yi, zi) then enters the covariance computation through its deviation from the mean:

(xi − E(x), yi − E(y), zi − E(z)) (14)
Then all eigenvalues λi of the covariance matrix are obtained from formula (15):

|H − λE| = 0 (15)

the f4 feature describes the distance between the segmented cluster data and the lidar device of the unmanned vehicle; assuming the segmented cluster data contains T points, the average coordinates of the points in the cluster are obtained by formula (16) and taken as the position coordinates of the obstacle cluster:
x̄ = (1/T)Σxi, ȳ = (1/T)Σyi, z̄ = (1/T)Σzi, the sums running over the T points of the cluster (16)
the f5 feature describes the main direction trend of the segmented cluster data; the distribution trend of the point cloud differs for different kinds of obstacles; from the maximum and minimum coordinate values of each axis of the cluster data, and the elevation, width and length obtained from the AABB bounding box, quantities such as the area, length-to-height ratio and width-to-height ratio of the cluster data are combined to determine the trend of the obstacle's main direction;

the f6 feature describes the two-view shape contour of the segmented cluster data; by analyzing the shape contours of the left view and front view of the cluster data, the kind of obstacle the cluster represents can be judged approximately;

2.2.2) obstacle recognition based on the JointBoost classification method:

2.2.2.1) of the six types of features extracted in step 2.2.1), use the five types other than the f4 feature as the features required by the JointBoost classifier, and extract these five types of features;

2.2.2.2) select a suitable sample data set, label each sample with its category, and train the JointBoost classifier; then classify the test data with the trained classifier to obtain the initial classification result;

JointBoost realizes multi-class classification similarly to multi-class SVM, by converting the problem into a combination of binary classifications; during training, m pairs of training point-cloud feature vectors and corresponding class labels (zi, ci) are input; in each training iteration a weak classifier hm(zi, ci) is trained for each binary classification, and shared features of several classes are found across the binary classifications through the minimum loss function; based on the shared features, the weak classifiers are combined into the final strong classifier Hm(zi, ci); during testing, the feature vector of the point-cloud cluster to be recognized is input and classified by the strong classifier, which outputs the probability of every possible class of this cluster; the class with the largest probability is the class determined by the input feature vector of the point-cloud cluster;

2.2.2.3) because the classification result of the classifier may contain errors, after the class is preliminarily determined the result is optimized with other feature information contained in the road scene, such as spatial feature information, to obtain the final recognition result of the cluster;
the specific optimization process is: first collect the classification probability result set U of the cluster from the JointBoost classifier and sort the probabilities in descending order as U1, U2, U3, …, Un; the class represented by U1 determines the cluster's class with high probability, but an error remains possible, so a secondary discrimination is needed; then, according to the spatial feature information of obstacles in the actual road scene, the classification result is checked using the f4 feature, the distance between the cluster and the lidar device of the unmanned vehicle;

when the unmanned vehicle drives on a road, some basic information about the road, such as the road width, is obtained in advance according to the actual situation; the road width and the distance from the cluster to the radar device are combined to make the secondary discrimination of the classifier's recognition result;

the average distances of the cluster from the origin along the three coordinate axes are obtained by formula (16); according to the experimental data the X axis is the direction in which the road extends and the Y axis is the road width direction; let the road width be W; the distance of an obstacle cluster from the origin in the road width direction is then the average ȳ of the y coordinates of all points of the cluster; comparing the magnitudes of ȳ and W serves as the judgment condition for determining the class information, with the rule: if ȳ > W, the obstacle is already far from the road area and has a low probability of belonging to the vehicle class; if ȳ ≤ W, the obstacle cluster has a low probability of belonging to other classes of obstacles such as buildings;
according to the probability set obtained by the JointBoost classification and the secondary discrimination rules, the final classification and recognition result is determined as follows:

if ȳ ≤ W: when the class represented by the largest probability is a vehicle-class obstacle, the point-cloud cluster is a vehicle-class obstacle; when the class represented by the largest probability is a signpost, tree or pedestrian obstacle, the cluster is a signpost, tree or pedestrian belonging to the road; when the class represented by the largest probability is a building-class obstacle, this recognition may be wrong and the classification is performed again; if the result is the same, it is taken as the recognition result;

if ȳ > W: when the class represented by the largest probability is a building-class obstacle, the cluster is a building-class obstacle; when the class represented by the largest probability is a signpost, tree or pedestrian obstacle, the cluster is a signpost, tree or pedestrian belonging to the road; when the class represented by the largest probability is a vehicle-class obstacle, this recognition may be wrong and the classification is performed again; if the result is the same, it is taken as the recognition result.
The obstacle classification and recognition results are obtained through steps 2.2.1) and 2.2.2) above.
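To make the feature set and the secondary discrimination concrete, a hypothetical sketch follows; it covers only f1–f4-style descriptors and the ȳ-versus-W rule, and all names, as well as the use of |y| as the width-direction distance, are assumptions:

    import numpy as np

    def cluster_features(pts):
        """f1/f2/f3/f4-style descriptors of one cluster (pts: (T, 3))."""
        f1 = pts.max(axis=0) - pts.min(axis=0)     # AABB length, width, height
        f2 = len(pts)                              # number of points in the cluster
        f3 = np.linalg.eigvalsh(np.cov(pts.T))     # eigenvalues of H, (12)-(15)
        f4 = pts.mean(axis=0)                      # average position, formula (16)
        return f1, f2, f3, f4

    def secondary_check(pred_class, f4, road_width):
        """Re-examine the classifier's top class with the y-distance rule."""
        y_bar = abs(f4[1])                         # distance along the width axis
        if y_bar > road_width and pred_class == "vehicle":
            return "recheck"                       # a vehicle far off the road is suspect
        if y_bar <= road_width and pred_class == "building":
            return "recheck"                       # a building on the road is suspect
        return pred_class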
7. The lidar-based road environment element perception method according to claim 6, characterized in that the specific process of step 3.1) is:

3.1.1) collect the Z coordinates of all points of the lidar point cloud data and determine a height threshold G of the road boundary according to the road environment; then, borrowing the sliding-window idea from image processing, find within each window the points whose Z coordinate jumps abruptly and mark them; these marked jump points are the candidate road boundary points; according to the elevation distribution and the determined height threshold G, the sliding window is used to locate the elevation-jump regions of the data and thereby the jump points;

first cut the lidar point cloud data transversely with respect to the direction in which the road extends, splitting the data into sub-point-clouds 0.5 m wide along the X direction, then take out one of them and perform the following operations:

Operation 1: rasterize the sub-point-cloud segment, dividing the point cloud data along the Y axis into grid cells 0.5 m in length and width, and record them in the grid cell sequence F;

Operation 2: take a cell Fi from the sequence F in order, compute the maximum elevation H of the cell and compare it with the determined height threshold G: if H < G, the cell is judged to be near-ground data and no road boundary candidates are extracted from it; if H > G, the cell may contain boundary candidates; record the cell's index in order, forming a new cell sequence K from which candidate points are to be extracted;

Operation 3: extract the road boundary candidate points contained in each cell of the sequence K with the sliding window, processing the grid from the left and from the right simultaneously and traversing all points of the grid: for cells entered from the left, the point with the smallest Y coordinate among those with the smallest Z coordinate is taken as the candidate point; for cells entered from the right, the point with the largest Y coordinate among those with the smallest Z coordinate is taken as the candidate point;

Operation 4: select another sub-point-cloud segment and perform the operations above to find its road boundary candidates, until all sub-point-clouds have been processed and all road boundary candidate points are obtained;

3.1.2) screen the road boundary candidates and remove the candidates that are not in the road edge area:

extract effective prior knowledge from the ground data obtained in Step 1, treating the extracted ground data as the road surface; compute the average Y coordinates of the points in the edge areas on the two sides of the road surface along the road extension direction and use them as constraints, obtaining two constraint lines; keep the candidates within 0.5 m to the left and right of the two lines and remove the other candidates falling outside the constraint-line areas, finally obtaining two point sequences of road boundary points; the points of the respective sequences are the final road boundary points used to fit the road boundary lines;

for curved-road scenes, to obtain the constraint lines, use the segment-wise approach of the ground segmentation of step 1.3): after the preliminary ground data of each segment is extracted in Step 1, record the approximate width of this segment of data as its constraint, then use it to screen the jump points obtained from each sub-point-cloud segment in the candidate extraction stage, obtaining the correct road boundary points of that sub-point-cloud; finally combine the correct boundary points of all sub-point-cloud segments into the complete boundary points of the scene;

3.1.3) fit the road boundary with the least squares method: a function model is determined to fit the target data, and the model is computed so that the sum of squared errors with respect to the target data is minimized, thereby obtaining the parameter values of the function model;

assume the n target data to be fitted are (x1, y1), (x2, y2), …, (xn, yn), and the fitted curve model equation is

y = Ax² + Bx + C (17)

where A, B and C are arbitrary real numbers; the sum of squared errors to be minimized is:
e = Σ(A·xi² + B·xi + C − yi)², the sum running over i = 1, …, n (18)
When this error is minimal, A, B and C are the best parameter estimates; taking the partial derivatives of e with respect to the three parameters A, B and C and setting them to zero gives:
∂e/∂A = 2Σ(A·xi² + B·xi + C − yi)·xi² = 0
∂e/∂B = 2Σ(A·xi² + B·xi + C − yi)·xi = 0
∂e/∂C = 2Σ(A·xi² + B·xi + C − yi) = 0 (19)
which yields a system of equations in the parameters A, B and C, written in matrix form as:
[ Σxi⁴  Σxi³  Σxi² ] [ A ]   [ Σxi²·yi ]
[ Σxi³  Σxi²  Σxi  ] [ B ] = [ Σxi·yi  ]
[ Σxi²  Σxi   n    ] [ C ]   [ Σyi     ] (20)
Solving this matrix equation gives the parameter values of A, B and C; the function model is then determined, and the curve it represents is the curve fitted to the target data; to perform the fit, the X and Y coordinates of the screened candidate points are taken as input to fit the road boundary lines and obtain the road boundary detection result.
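A small worked sketch of the least-squares fit of formulas (17)–(20), assembling and solving the normal equations directly; the NumPy-based form and the test data are illustrative assumptions:

    import numpy as np

    def fit_quadratic(xs, ys):
        """Solve the normal equations (20) for y = A x^2 + B x + C."""
        s = lambda k: np.sum(xs ** k)
        M = np.array([[s(4), s(3), s(2)],
                      [s(3), s(2), s(1)],
                      [s(2), s(1), len(xs)]])
        b = np.array([np.sum(xs ** 2 * ys), np.sum(xs * ys), np.sum(ys)])
        return np.linalg.solve(M, b)               # parameters (A, B, C)

    xs = np.linspace(0.0, 10.0, 50)
    ys = 0.02 * xs ** 2 - 0.3 * xs + 4.0
    print(fit_quadratic(xs, ys))                   # approximately [0.02, -0.3, 4.0]

On exact quadratic data the solver recovers A, B and C exactly, which is a convenient sanity check for the normal-equation assembly.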
8. The lidar-based road environment element perception method according to claim 7, characterized in that the specific process of step 3.2) is:

3.2.1) take the lidar point cloud scene with the ground data removed as input; assume the two boundary lines obtained by the road boundary detection are l1 and l2; the region between the two curves is then the road region; the non-road regions outside the two road boundary curves l1 and l2 are cut away and only the road region between the boundary lines is considered; the road region data is rasterized: without considering elevation, the three-dimensional point cloud scene is divided into a series of grid blocks of equal size, regular quadrilaterals of 0.5 m in both the X and Y directions;

3.2.2) after the road data has been rasterized, judge every grid cell to determine whether it contains obstacle data points; the type of a cell is decided by whether point cloud data exists inside it: if the cell contains point cloud data it is judged to be an obstacle cell, and if it contains no point cloud data it is a passable-area cell;

3.2.3) use the eight-neighborhood labeling method to expand the labels of the cells determined to be passable:

first let the sequence of all grid cells be GridSeq and the sequence of cells determined as passable-area cells be PassSeq; pick an arbitrary cell of GridSeq as a seed cell and examine, in a loop, its eight adjacent cells, above, below, left, right, upper-left, upper-right, lower-left and lower-right; if an examined cell meets the condition of a passable-area cell, mark it with the passable-area label "1" and add it to the passable-area grid sequence PassSeq; otherwise mark it "0"; then remove the cell from the sequence GridSeq; repeat until GridSeq is empty, at which point the passable-area cells of the data are preliminarily determined;

3.2.4) find the obstacles on the two boundary lines from the obstacle segmentation of Step 2, project them along the road extension direction and determine the contour shape of the obstacle's point-cloud distribution; the cells corresponding to obstacles determined to be passable are re-marked "green", indicating that the cell is a passable-area cell;

3.2.5) use the obstacle position and structure information obtained by the obstacle segmentation and recognition, combined with the determined passable road area, to predict and evaluate the reachable paths of the unmanned vehicle in this road scene:

the width of the unmanned vehicle is set to 1.7 m and a reserve of 0.3 m is added on each side, so the overall passable width must be greater than L = 2.3 m; on the basis of the completed obstacle segmentation, the positions of the obstacles on the road surface are determined, their volumes are computed from their respective AABB bounding boxes, and, using the two detected boundary lines, the width of the passable region between obstacles, or between an obstacle and a boundary line, is obtained; this width is then compared with the determined minimum safe passing distance L to decide whether the unmanned vehicle can pass safely at that location.
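Finally, an illustrative occupancy-grid sketch of steps 3.2.1)–3.2.2): cells of the road region that contain no non-ground points are reported as passable; the 0.5 m cell, the X range and the boundary-line inputs are assumptions:

    import numpy as np

    def passable_grid(points, y_left, y_right, x_range=(0.0, 30.0), cell=0.5):
        """Occupancy grid of the road region: True = empty (passable) cell.
        points: non-ground points; y_left / y_right: boundary-line positions."""
        nx = int(np.ceil((x_range[1] - x_range[0]) / cell))
        ny = int(np.ceil((y_right - y_left) / cell))
        occupied = np.zeros((nx, ny), dtype=bool)
        on_road = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
                   (points[:, 1] >= y_left) & (points[:, 1] < y_right))
        ij = ((points[on_road, :2] - [x_range[0], y_left]) / cell).astype(int)
        occupied[ij[:, 0], ij[:, 1]] = True        # cells that contain points
        return ~occupied                           # empty cells form the passable area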
CN202010674121.XA 2020-07-14 2020-07-14 Road environment element sensing method based on laser radar Active CN111985322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010674121.XA CN111985322B (en) 2020-07-14 2020-07-14 Road environment element sensing method based on laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010674121.XA CN111985322B (en) 2020-07-14 2020-07-14 Road environment element sensing method based on laser radar

Publications (2)

Publication Number Publication Date
CN111985322A true CN111985322A (en) 2020-11-24
CN111985322B CN111985322B (en) 2024-02-06

Family

ID=73439427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010674121.XA Active CN111985322B (en) 2020-07-14 2020-07-14 Road environment element sensing method based on laser radar

Country Status (1)

Country Link
CN (1) CN111985322B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018210303A1 (en) * 2017-05-19 2018-11-22 上海蔚来汽车有限公司 Road model construction
CN108460416A (en) * 2018-02-28 2018-08-28 武汉理工大学 A kind of structured road feasible zone extracting method based on three-dimensional laser radar
US20200218979A1 (en) * 2018-12-28 2020-07-09 Nvidia Corporation Distance estimation to objects and free-space boundaries in autonomous machine applications
CN110244321A (en) * 2019-04-22 2019-09-17 武汉理工大学 A detection method of road passable area based on 3D lidar
CN110320504A (en) * 2019-07-29 2019-10-11 浙江大学 A kind of unstructured road detection method based on laser radar point cloud statistics geometrical model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KONG Dong; WANG Xiaoyuan; SUN Liang; WANG Fang; CHEN Chen: "Structured road boundary extraction method for vehicle-mounted radar point clouds", Journal of Henan University of Science and Technology (Natural Science Edition), no. 04
WANG Can; KONG Bin; YANG Jing; WANG Zhiling; ZHU Hui: "Road boundary extraction and obstacle detection algorithm based on 3D lidar", Pattern Recognition and Artificial Intelligence, no. 04
SU Zhiyuan; XU Youchun; LI Yongle: "Vehicle target detection method based on 3D lidar", Journal of Military Transportation University, no. 01

Cited By (121)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112526993A (en) * 2020-11-30 2021-03-19 广州视源电子科技股份有限公司 Grid map updating method and device, robot and storage medium
CN113758484A (en) * 2020-11-30 2021-12-07 北京京东乾石科技有限公司 Path planning method and device
CN112526993B (en) * 2020-11-30 2023-08-08 广州视源电子科技股份有限公司 Grid map updating method, device, robot and storage medium
CN112419505A (en) * 2020-12-07 2021-02-26 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud road rod-shaped object automatic extraction method combining semantic rule and model matching
CN112419505B (en) * 2020-12-07 2023-11-10 苏州工业园区测绘地理信息有限公司 Automatic extraction method for vehicle-mounted point cloud road shaft by combining semantic rules and model matching
CN112699734A (en) * 2020-12-11 2021-04-23 深圳市银星智能科技股份有限公司 Threshold detection method, mobile robot and storage medium
CN112699734B (en) * 2020-12-11 2024-04-16 深圳银星智能集团股份有限公司 Threshold detection method, mobile robot and storage medium
CN112634310A (en) * 2020-12-14 2021-04-09 深兰人工智能(深圳)有限公司 Road boundary extraction method, device, electronic equipment and storage medium
CN112560747A (en) * 2020-12-23 2021-03-26 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud data-based lane boundary interactive extraction method
CN112629548B (en) * 2020-12-28 2023-01-24 北京航空航天大学 Three-dimensional grid map creating and updating method based on roadside laser radar
CN112629548A (en) * 2020-12-28 2021-04-09 北京航空航天大学 Three-dimensional grid map creating and updating method based on roadside laser radar
CN113156932A (en) * 2020-12-29 2021-07-23 上海市东方海事工程技术有限公司 Obstacle avoidance control method and system for rail flaw detection vehicle
WO2022141116A1 (en) * 2020-12-29 2022-07-07 深圳市大疆创新科技有限公司 Three-dimensional point cloud segmentation method and apparatus, and movable platform
GB2618936A (en) * 2021-01-01 2023-11-22 Du Yuchuan Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
WO2022206977A1 (en) * 2021-01-01 2022-10-06 许军 Cooperative-vehicle-infrastructure-oriented sensing information fusion representation and target detection method
WO2022141910A1 (en) * 2021-01-01 2022-07-07 杜豫川 Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
WO2022141912A1 (en) * 2021-01-01 2022-07-07 杜豫川 Vehicle-road collaboration-oriented sensing information fusion representation and target detection method
GB2618936B (en) * 2021-01-01 2024-10-02 Univ Tongji A vehicle-road cooperative perception method for 3D object detection based on deep neural networks with feature sharing
CN114780229A (en) * 2021-01-05 2022-07-22 中国移动通信有限公司研究院 A processing method, device and electronic device for a perception task
CN112904841B (en) * 2021-01-12 2023-11-03 北京布科思科技有限公司 Non-horizontal single-line positioning obstacle avoidance method, device, equipment and storage medium
CN112904841A (en) * 2021-01-12 2021-06-04 北京布科思科技有限公司 Single-line positioning obstacle avoidance method and device in non-horizontal orientation, equipment and storage medium
CN112784799A (en) * 2021-02-01 2021-05-11 三一机器人科技有限公司 AGV (automatic guided vehicle) backward pallet and obstacle identification method and device and AGV
EP4290276A4 (en) * 2021-02-07 2024-02-21 Huawei Technologies Co., Ltd. Road boundary recognition method and apparatus
CN112801022A (en) * 2021-02-09 2021-05-14 青岛慧拓智能机器有限公司 Method for rapidly detecting and updating road boundary of unmanned mine card operation area
GB2628958A (en) * 2021-03-01 2024-10-09 Univ Tongji A method of infrastructure-augmented cooperative perception for autonomous vehicles based on voxel feature aggregation
GB2621048A (en) * 2021-03-01 2024-01-31 Du Yuchuan Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
GB2628958B (en) * 2021-03-01 2025-05-14 Univ Tongji A method of infrastructure-augmented cooperative perception for autonomous vehicles based on voxel feature aggregation
CN115145253A (en) * 2021-03-16 2022-10-04 广州汽车集团股份有限公司 End-to-end automatic driving method and system and training method of automatic driving model
CN113156451A (en) * 2021-03-23 2021-07-23 北京易控智驾科技有限公司 Unstructured road boundary detection method and device, storage medium and electronic equipment
CN113156948A (en) * 2021-04-19 2021-07-23 浙江工业大学 Ground and obstacle distinguishing and identifying method of two-wheeled self-balancing robot
CN113156948B (en) * 2021-04-19 2022-06-28 浙江工业大学 A method for distinguishing and recognizing the ground and obstacles for a two-wheeled self-balancing robot
CN115240149A (en) * 2021-04-25 2022-10-25 株洲中车时代电气股份有限公司 Three-dimensional point cloud detection and identification method, device, electronic device and storage medium
CN113205139A (en) * 2021-05-06 2021-08-03 南京云智控产业技术研究院有限公司 Unmanned ship water sphere detection method based on density clustering
CN113205139B (en) * 2021-05-06 2024-09-13 南京云智控产业技术研究院有限公司 Unmanned ship water sphere detection method based on density clustering
US11741621B2 (en) 2021-05-10 2023-08-29 Qingdao Pico Technology Co., Ltd. Method and system for detecting plane information
CN113240678A (en) * 2021-05-10 2021-08-10 青岛小鸟看看科技有限公司 Plane information detection method and system
CN113191459A (en) * 2021-05-27 2021-07-30 山东高速建设管理集团有限公司 Road-side laser radar-based in-transit target classification method
CN113064135B (en) * 2021-06-01 2022-02-18 北京海天瑞声科技股份有限公司 Method and device for detecting obstacle in 3D radar point cloud continuous frame data
CN113064135A (en) * 2021-06-01 2021-07-02 北京海天瑞声科技股份有限公司 Method and device for detecting obstacle in 3D radar point cloud continuous frame data
CN113484875A (en) * 2021-07-30 2021-10-08 燕山大学 Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN113592891A (en) * 2021-07-30 2021-11-02 华东理工大学 Unmanned vehicle passable area analysis method and navigation grid map manufacturing method
CN113592891B (en) * 2021-07-30 2024-03-22 华东理工大学 Unmanned vehicle passable domain analysis method and navigation grid map manufacturing method
WO2023023982A1 (en) * 2021-08-25 2023-03-02 Huawei Technologies Co., Ltd. An intrusion detection method and apparatus
CN115797901A (en) * 2021-09-09 2023-03-14 北京机械设备研究所 Mining area barrier rapid detection method based on multidimensional tree
CN113734176B (en) * 2021-09-18 2024-11-22 重庆长安汽车股份有限公司 Environmental perception system, method, vehicle and storage medium for intelligent driving vehicle
CN113734176A (en) * 2021-09-18 2021-12-03 重庆长安汽车股份有限公司 Environment sensing system and method for intelligent driving vehicle, vehicle and storage medium
CN113865488A (en) * 2021-09-24 2021-12-31 北京京东方技术开发有限公司 Distance measuring method, electronic device and computer readable storage medium
CN113865488B (en) * 2021-09-24 2023-10-27 北京京东方技术开发有限公司 Distance measuring method, electronic equipment and computer readable storage medium
WO2023050638A1 (en) * 2021-09-29 2023-04-06 上海仙途智能科技有限公司 Curb recognition based on laser point cloud
CN113935425A (en) * 2021-10-21 2022-01-14 中国船舶重工集团公司第七一一研究所 Object identification method, device, terminal and storage medium
CN113935425B (en) * 2021-10-21 2024-08-16 中国船舶集团有限公司第七一一研究所 Object identification method, device, terminal and storage medium
CN114063107A (en) * 2021-11-26 2022-02-18 山东大学 A method for extracting ground point cloud based on laser beam
CN113989784A (en) * 2021-11-30 2022-01-28 福州大学 Road scene type identification method and system based on vehicle-mounted laser point cloud
CN113989784B (en) * 2021-11-30 2024-11-15 福州大学 A road scene type recognition method and system based on vehicle-mounted laser point cloud
US20230182743A1 (en) * 2021-12-15 2023-06-15 Industrial Technology Research Institute Method and system for extracting road data and method and system for controlling self-driving car
US11999352B2 (en) * 2021-12-15 2024-06-04 Industrial Technology Research Institute Method and system for extracting road data and method and system for controlling self-driving car
CN114488183A (en) * 2021-12-29 2022-05-13 深圳优地科技有限公司 Method, apparatus, device and readable storage medium for processing obstacle point cloud
CN114325759A (en) * 2021-12-31 2022-04-12 深兰人工智能(深圳)有限公司 Dynamic obstacle removal method and related methods and devices in lidar positioning
CN114387293A (en) * 2021-12-31 2022-04-22 深圳市镭神智能系统有限公司 Road edge detection method and device, electronic equipment and vehicle
CN114495026A (en) * 2022-01-07 2022-05-13 武汉市虎联智能科技有限公司 Laser radar identification method and device, electronic equipment and storage medium
CN114543666B (en) * 2022-01-20 2022-11-29 大连理工大学 A stockpile level prediction method based on mine environment perception
CN114543666A (en) * 2022-01-20 2022-05-27 大连理工大学 Stockpile surface prediction method based on mine field environment perception
CN114419599A (en) * 2022-01-21 2022-04-29 苏州挚途科技有限公司 Obstacle identification method, device and electronic device
CN114419599B (en) * 2022-01-21 2025-03-28 苏州挚途科技有限公司 Obstacle recognition method, device and electronic equipment
CN114488026A (en) * 2022-01-30 2022-05-13 重庆长安汽车股份有限公司 Underground parking garage passable space detection method based on 4D millimeter wave radar
CN114488073B (en) * 2022-02-14 2025-04-15 中国第一汽车股份有限公司 Processing method of point cloud data acquired by laser radar
CN114488073A (en) * 2022-02-14 2022-05-13 中国第一汽车股份有限公司 Method for processing point cloud data acquired by laser radar
CN114639024A (en) * 2022-03-03 2022-06-17 江苏方天电力技术有限公司 Automatic laser point cloud classification method for power transmission line
CN114779206A (en) * 2022-03-09 2022-07-22 三一智矿科技有限公司 Method and device for identifying road boundary, storage medium and equipment
CN114779206B (en) * 2022-03-09 2025-05-27 三一智矿科技有限公司 Road boundary recognition method, device, storage medium, and equipment
CN114821571A (en) * 2022-03-11 2022-07-29 天津四维向量科技有限公司 A point cloud processing method for power cable identification and reconstruction
CN114663855A (en) * 2022-03-11 2022-06-24 北京航空航天大学 A road surface water and roughness detection method for unstructured roads
CN114663855B (en) * 2022-03-11 2024-05-24 北京航空航天大学 A road surface waterlogging and roughness detection method for unstructured roads
CN114359876A (en) * 2022-03-21 2022-04-15 成都奥伦达科技有限公司 Vehicle target identification method and storage medium
CN114690780B (en) * 2022-04-13 2024-10-29 中国矿业大学 Gradient and curve passing method of unmanned tracked electric locomotive in deep limited space
CN114690780A (en) * 2022-04-13 2022-07-01 中国矿业大学 Method for allowing unmanned rail electric locomotive to pass through slope and curve in deep limited space
CN114926809A (en) * 2022-04-14 2022-08-19 重庆兰德适普信息科技有限公司 Passable area detection method and device, moving tool and storage medium
CN114578313A (en) * 2022-05-09 2022-06-03 中国电子科技集团公司信息科学研究院 Centroid search-based grid detection mirror image elimination method and device
CN114863048A (en) * 2022-05-17 2022-08-05 亿咖通(湖北)技术有限公司 A method and electronic device for generating vectorized data of road boundary line
CN114882460A (en) * 2022-05-17 2022-08-09 常州大学 Road vehicle detection method based on feature layer fusion
CN114693855A (en) * 2022-05-31 2022-07-01 中汽创智科技有限公司 Point cloud data processing method and device
CN115267815A (en) * 2022-06-10 2022-11-01 合肥工业大学 An optimal layout method of roadside lidar group based on point cloud modeling
CN115236674A (en) * 2022-06-15 2022-10-25 北京踏歌智行科技有限公司 Mining area environment sensing method based on 4D millimeter wave radar
CN115236674B (en) * 2022-06-15 2024-06-04 北京踏歌智行科技有限公司 Mining area environment sensing method based on 4D millimeter wave radar
CN114779794B (en) * 2022-06-21 2022-10-11 东风悦享科技有限公司 Street obstacle identification method based on unmanned patrol vehicle system in typhoon scene
CN114779794A (en) * 2022-06-21 2022-07-22 东风悦享科技有限公司 Street obstacle identification method based on unmanned patrol vehicle system in typhoon scene
CN115077553A (en) * 2022-06-30 2022-09-20 重庆长安汽车股份有限公司 Grid-based search trajectory planning method, system, vehicle, equipment and medium
CN115167421A (en) * 2022-07-06 2022-10-11 珠海一微半导体股份有限公司 Boundary-based target point search method, chip and robot
CN115097487A (en) * 2022-07-13 2022-09-23 重庆长安汽车股份有限公司 Vehicle environment information generation method and device, electronic device and storage medium
CN114898013A (en) * 2022-07-15 2022-08-12 深圳市城市交通规划设计研究中心股份有限公司 Traffic isochronal generation method, electronic device and storage medium
CN115273007A (en) * 2022-07-27 2022-11-01 滁州学院 Underground locomotive region-of-interest dividing method and obstacle extraction method
CN115273007B (en) * 2022-07-27 2025-05-23 滁州学院 Underground locomotive region of interest division method and obstacle extraction method
CN115248448A (en) * 2022-09-22 2022-10-28 毫末智行科技有限公司 Laser radar-based road edge detection method, device, equipment and storage medium
CN115542346A (en) * 2022-10-20 2022-12-30 重庆长安汽车股份有限公司 Ground detection method, device, vehicle and storage medium
CN115542346B (en) * 2022-10-20 2025-07-15 重庆长安汽车股份有限公司 Ground detection method, device, vehicle and storage medium
CN115712298A (en) * 2022-10-25 2023-02-24 太原理工大学 Autonomous navigation method for robot running in channel based on single-line laser radar
CN115574803A (en) * 2022-11-16 2023-01-06 深圳市信润富联数字科技有限公司 Moving route determining method, device, equipment and storage medium
CN115906250A (en) * 2022-11-18 2023-04-04 同济大学 Road obstacle detection method, maintenance method, system, equipment and medium
CN115797435A (en) * 2022-11-24 2023-03-14 中冶建工集团有限公司 Intelligent foundation pit measuring method
CN115839716A (en) * 2022-11-28 2023-03-24 中冶赛迪技术研究中心有限公司 Steel grabbing machine timely positioning and map building method and device for scrap steel yard
CN115808169A (en) * 2022-11-29 2023-03-17 广东鲲鹏空间信息技术有限公司 Map road edge line generation method, device and system and readable storage medium
CN115540896B (en) * 2022-12-06 2023-03-07 广汽埃安新能源汽车股份有限公司 Path planning method and device, electronic equipment and computer readable medium
CN115540896A (en) * 2022-12-06 2022-12-30 广汽埃安新能源汽车股份有限公司 Path planning method, path planning device, electronic equipment and computer readable medium
CN115964446A (en) * 2022-12-18 2023-04-14 北京工业大学 A method for interactive processing of radar data based on mobile terminal
CN115641462A (en) * 2022-12-26 2023-01-24 电子科技大学 Radar image target identification method
CN116226697A (en) * 2023-05-06 2023-06-06 北京师范大学 Spatial data clustering method, system, equipment and medium
CN116740101A (en) * 2023-05-16 2023-09-12 中国信息通信研究院 Plane segmentation algorithm for point cloud objects
CN116740101B (en) * 2023-05-16 2024-03-12 中国信息通信研究院 Plane segmentation method for point cloud objects
CN116796210A (en) * 2023-08-25 2023-09-22 山东莱恩光电科技股份有限公司 Obstacle detection method based on laser radar
CN116796210B (en) * 2023-08-25 2023-11-28 山东莱恩光电科技股份有限公司 Obstacle detection method based on laser radar
CN116901085A (en) * 2023-09-01 2023-10-20 苏州立构机器人有限公司 Intelligent robot obstacle avoidance method and device, intelligent robot and readable storage medium
CN116901085B (en) * 2023-09-01 2023-12-22 苏州立构机器人有限公司 Intelligent robot obstacle avoidance method and device, intelligent robot and readable storage medium
CN118230284A (en) * 2024-02-29 2024-06-21 中交第二公路勘察设计研究院有限公司 A method and device for quickly identifying road elements using a mobile laser radar
CN117934324B (en) * 2024-03-25 2024-06-11 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device
CN117934324A (en) * 2024-03-25 2024-04-26 广东电网有限责任公司中山供电局 Denoising method and device for laser point cloud data and radar scanning device
CN117970361A (en) * 2024-03-28 2024-05-03 北京市农林科学院智能装备技术研究中心 Feeding amount prediction method, device, electronic equipment and medium
CN118038415A (en) * 2024-04-12 2024-05-14 厦门中科星晨科技有限公司 Laser radar-based vehicle identification method, device, medium and electronic equipment
CN118072360A (en) * 2024-04-19 2024-05-24 浙江华是科技股份有限公司 Method and system for complete identification of a single human body in perimeter intrusion
CN118393520B (en) * 2024-05-09 2025-03-25 杭州电子科技大学 A multi-sensor map construction method and device for active ballast machine
CN118393520A (en) * 2024-05-09 2024-07-26 杭州电子科技大学 A multi-sensor map construction method and device for active ballast machine
CN118888073A (en) * 2024-09-26 2024-11-01 浙江数汉科技有限公司 A blockchain-based medical data management system for IoT devices

Also Published As

Publication number Publication date
CN111985322B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN111985322A (en) A Lidar-based Road Environment Element Perception Method
CN112801022B (en) Method for rapidly detecting and updating road boundaries in an unmanned mining truck operation area
US20230186647A1 (en) Feature extraction from mobile lidar and imagery data
CN114266987B (en) Intelligent identification method for high slope dangerous rock mass of unmanned aerial vehicle
Guinard et al. Weakly supervised segmentation-aided classification of urban scenes from 3D LiDAR point clouds
CN105701448B (en) Three-dimensional face point cloud nose detection method and data processing device applying it
CN113989784A (en) Road scene type identification method and system based on vehicle-mounted laser point cloud
CN113484875B (en) A Hierarchical Recognition Method of LiDAR Point Cloud Targets Based on Mixture Gaussian Sorting
CN112200248B (en) Point cloud semantic segmentation method, system and storage medium based on DBSCAN clustering under urban road environment
JP2016018538A (en) Image recognition device and method and program
CN113920360A (en) Road point cloud rod extraction and multi-scale identification method
CN114488073A (en) Method for processing point cloud data acquired by laser radar
CN112347894B (en) A single vegetation extraction method based on transfer learning and Gaussian mixture model separation
Xiang et al. Segmentation-based classification for 3D point clouds in the road environment
CN116109601A (en) A real-time target detection method based on 3D lidar point cloud
Sun et al. Classification of MLS point clouds in urban scenes using detrended geometric features from supervoxel-based local contexts
CN115063555A (en) Street tree extraction method for vehicle LiDAR point cloud grown in Gaussian distribution area
CN115760898A (en) World coordinate positioning method for road sprinklers in mixed Gaussian domain
CN117853804A (en) Point cloud vehicle target detection method combining multidimensional features
CN114972743B (en) A hierarchical single tree extraction method based on radius expansion
Bulatov et al. Automatic tree-crown detection in challenging scenarios
Nagy et al. 3D CNN based phantom object removing from mobile laser scanning data
Bulatov et al. Vectorization of road data extracted from aerial and UAV imagery
CN111311643A (en) Video target tracking method using dynamic search
CN116051841A (en) Roadside ground object multistage clustering segmentation algorithm based on vehicle-mounted LiDAR point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant