
CN114445576A - Three-dimensional dynamic sight distance evaluation system and method based on laser radar point cloud data - Google Patents

Three-dimensional dynamic sight distance evaluation system and method based on laser radar point cloud data

Info

Publication number
CN114445576A
CN114445576A
Authority
CN
China
Prior art keywords
sight
line
vehicle
point cloud
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111677039.3A
Other languages
Chinese (zh)
Other versions
CN114445576B (en)
Inventor
王晓飞
丁振中
陈嘉熙
廖勇刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202111677039.3A priority Critical patent/CN114445576B/en
Publication of CN114445576A publication Critical patent/CN114445576A/en
Application granted granted Critical
Publication of CN114445576B publication Critical patent/CN114445576B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a three-dimensional dynamic sight distance evaluation system and method based on lidar point cloud data, and provides a theoretical basis for dynamic sight distance checking and the improvement of road driving safety in three-dimensional road scenes.

Description

Three-dimensional dynamic sight distance evaluation system and method based on lidar point cloud data

Technical Field

The invention relates to the field of highway safety evaluation, and in particular to a three-dimensional dynamic sight distance evaluation system and method based on lidar point cloud data.

Background

Because roads present a relatively complex driving environment, a vehicle's actual operating speed varies continuously with road conditions, driving skill, and vehicle performance. The static two-dimensional design sight distance, evaluated separately for the horizontal and vertical alignments, therefore cannot accurately reflect the driver's sight distance needs in the real driving environment, and its sight distance estimate for the road carries a certain bias, which makes it difficult to confirm the validity of the sight distance evaluation result.

Summary of the Invention

To overcome the shortcomings and deficiencies of the prior art, the present invention provides a three-dimensional dynamic sight distance evaluation system and method based on lidar point cloud data; specifically, the available sight distance of the three-dimensional road space is estimated accurately under an operating speed prediction model.

The present invention adopts the following technical solution:

A three-dimensional dynamic sight distance evaluation method based on lidar point cloud data, comprising the following steps:

acquiring vehicle trajectory and point cloud data;

preprocessing and separating the point cloud data to form ground point cloud data and above-ground point cloud data;

applying the Delaunay triangulation algorithm to the ground point cloud data to form a TIN, extracting the road alignment data from the TIN, and segmenting the road according to its alignment combination to obtain the road's three-dimensional geometric feature set; obtaining the vehicle's operating speed from an operating speed prediction model based on the three-dimensional alignment geometric features, and determining the driver's dynamic visual angle range at each moment from the operating speed;

classifying the above-ground point cloud data into homogeneous objects and non-homogeneous objects to obtain a three-dimensional environment containing only road information;

taking the vehicle offset toward one edge line of the one-way road as the most unfavorable sight distance position and the offset toward the other side as the most favorable sight distance position; the smaller of the two sight distances is the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment.

Further, the method also includes a sight distance evaluation step: the required driving sight distance at the vehicle's position is calculated and compared with the three-dimensional dynamic available sight distance at that position; if the three-dimensional dynamic available sight distance is greater than the driving sight distance, the requirement is satisfied; otherwise it is not.

Further, taking the vehicle offset toward one edge line of the one-way road as the most unfavorable sight distance position and the offset toward the other side as the most favorable sight distance position, with the smaller of the two sight distances being the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment, specifically comprises:

when the driver is at the most unfavorable sight distance vehicle position, using the view cone ellipse, taking the arc length along the route between the farthest position on the lane edge line the driver can see and the driver's current position corresponding to that edge line as the three-dimensional dynamic sight distance;

when the driver is at the most favorable sight distance vehicle position, obtaining the three-dimensional dynamic sight distance in the same way;

comparing the three-dimensional dynamic sight distances obtained at the two positions; the smaller value is the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment.

Further, a RANSAC segmentation algorithm is applied to the preprocessed point cloud data to separate ground data from the remaining above-ground point cloud data.

Further, the dynamic visual angle is the maximum horizontal and vertical angle the driver can see in the dynamic field-of-view state; this angle defines an elliptical view frustum at each driver position, whose projection onto the vertical two-dimensional plane is the view cone ellipse.

A system for the three-dimensional dynamic sight distance evaluation method, comprising a lidar and an inertial navigation system, the inertial navigation system comprising an inertial navigation unit and a GPS unit.

Further, the lidar is mounted on top of the vehicle and kept level.

Further, the inertial navigation unit is installed on the centerline of the vehicle's rear wheels with its Y axis pointing in the vehicle's direction of travel.

Further, the timestamps of the lidar and the inertial navigation system are synchronized, the origin of the point cloud collection coordinate system is at the lidar position, and the inertial navigation system provides the vehicle's position in the geodetic coordinate system.

Further, intra-frame correction and feature extraction are performed with the LOAM algorithm.

Beneficial effects of the invention:

A realistic three-dimensional environment can be built from lidar point cloud data, and a three-dimensional dynamic sight distance evaluation method for the road is formed from the driver's view cone at different operating speeds. This provides a theoretical basis and an analytical calculation method for dynamic sight distance checking in three-dimensional road scenes and for improving road driving safety.

Description of the Drawings

Figure 1 is a schematic diagram of the three-dimensional dynamic available sight distance of a highway according to the invention;

Figure 2 shows the vehicle at the most unfavorable sight distance position on a road section according to the invention;

Figure 3 is a flow chart of the invention.

In the figures:

1, driver's viewpoint; 2, lane centerline; 3, lane edge line; 4, maximum horizontal dynamic visual angle; 5, maximum vertical dynamic visual angle; 6, sight centerline; 7, position of the target point; 8, roadside obstacle area; 9, most unfavorable sight distance vehicle position; 10, lane dividing line; 11, lane edge line; 12, road marking; 13, obstacle area.

Detailed Description

The present invention is described in further detail below with reference to the embodiment and the accompanying drawings, but the embodiments of the invention are not limited thereto.

Embodiment

As shown in Figures 1-3, a three-dimensional dynamic sight distance evaluation system and method based on lidar point cloud data is provided. The system comprises a lidar and an inertial navigation system, the inertial navigation system comprising an inertial navigation unit and a GPS. The timestamps of the lidar and the inertial navigation system are synchronized, intra-frame correction and feature extraction are performed with the LOAM algorithm, and the origin of the point cloud coordinate system is at the lidar position; the inertial navigation system and the lidar together provide the trajectory of the on-vehicle lidar, and the pose of the point cloud is then determined precisely so that the data can be transformed into the geodetic coordinate system.

Specifically, the lidar is mounted on the roof and kept level. Lidar offers long measurement range, high scan frequency, high ranging sample rate, and high measurement resolution; a high scan frequency ensures that a lidar mounted on a fast-moving vehicle still yields a clear three-dimensional point cloud. However, with the sampling rate fixed, a faster scan frequency lowers the angular resolution, so in general only a moderate scan frequency can be selected.

The lidar collects point cloud data frame by frame over a 360-degree field of view. The sight range is an elliptical view cone formed by the horizontal angle α and the vertical angle β between the driver and the target point (the center of the view cone).
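
A minimal sketch of such an elliptical view cone test, assuming α and β are the full apex angles and the sight direction is roughly horizontal; the function and parameter names are illustrative, not taken from the patent:

    import numpy as np

    def in_view_cone(viewpoint, sight_dir, target, alpha_deg, beta_deg):
        """Test whether `target` lies inside the elliptical view cone whose axis is
        `sight_dir`, with full horizontal apex angle alpha and vertical apex angle beta.
        Assumes the sight direction is roughly horizontal and 'up' is the global +Z axis."""
        d = np.asarray(target, float) - np.asarray(viewpoint, float)
        axis = np.asarray(sight_dir, float) / np.linalg.norm(sight_dir)
        up = np.array([0.0, 0.0, 1.0])
        right = np.cross(axis, up)
        right /= np.linalg.norm(right)
        up_c = np.cross(right, axis)                 # cone-local vertical axis
        x = d @ axis                                 # distance along the sight centerline
        if x <= 0:                                   # target behind the driver
            return False
        h = np.degrees(np.arctan2(d @ right, x))     # horizontal offset angle
        v = np.degrees(np.arctan2(d @ up_c, x))      # vertical offset angle
        # elliptical aperture: scaled angular offsets must fall inside the unit ellipse
        return (h / (alpha_deg / 2.0)) ** 2 + (v / (beta_deg / 2.0)) ** 2 <= 1.0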

The inertial navigation system contains one positioning antenna and one directional antenna; it produces 150 measurements per second, providing the vehicle's geodetic coordinates at each measurement and thus the vehicle trajectory. The lidar collects 20 point cloud frames per second, each with the lidar as the coordinate origin rather than in the geodetic coordinate system; by synchronizing the lidar and the inertial navigation system in time and space, the point cloud data can be transformed into the geodetic coordinate system.
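
A minimal sketch of this time synchronization and coordinate transformation, assuming each INS record provides a timestamp, an orientation quaternion (x, y, z, w, lidar frame to geodetic frame) and a position, and that the pose is interpolated to each point's timestamp with SciPy; this is an illustrative choice, not the patent's LOAM-based pipeline:

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def points_to_geodetic(points, point_times, ins_times, ins_quats, ins_positions):
        """Transform lidar points (N x 3, lidar frame) into the geodetic frame by
        interpolating the INS pose (150 Hz) at each point's timestamp (20 Hz frames)."""
        # orientation: spherical interpolation between the surrounding INS samples
        slerp = Slerp(ins_times, Rotation.from_quat(ins_quats))
        rots = slerp(np.clip(point_times, ins_times[0], ins_times[-1]))
        # position: linear interpolation per axis
        pos = np.column_stack([np.interp(point_times, ins_times, ins_positions[:, k])
                               for k in range(3)])
        # p_geo = R(t) @ p_lidar + t(t); each frame's origin is the lidar itself
        return rots.apply(points) + pos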

The evaluation method comprises the following steps:

S1: obtain the vehicle trajectory and point cloud data from the lidar and the inertial navigation system, and determine the point cloud pose precisely from the vehicle trajectory.

S2: preprocess and separate the point cloud data to form ground point cloud data and above-ground point cloud data.

Specifically:

Because of the measurement principle, lidar resolution drops sharply as the distance to the detected object increases. The point cloud data should therefore first be preprocessed by keeping only the points near the scan center, i.e., the region where the point cloud is most concentrated; this region contains most of the points, which are clearly imaged, low in error, and dense, so the sparse and cluttered points far from the scanner are removed.
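
A minimal sketch of this range-based preprocessing; the 60 m cutoff is an illustrative assumption rather than a value given in the patent:

    import numpy as np

    def keep_near_scan_center(points, max_range=60.0):
        """Keep only points within `max_range` of the scan center (the lidar origin),
        discarding the sparse, noisy returns far from the scanner."""
        r = np.linalg.norm(points[:, :2], axis=1)   # horizontal range from the lidar
        return points[r <= max_range]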

The retained point cloud data is then separated with the RANSAC segmentation algorithm into ground point cloud data and above-ground point cloud data.
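
One possible realization of this RANSAC split, sketched with Open3D's plane segmentation; the distance threshold and iteration count are illustrative assumptions:

    import numpy as np
    import open3d as o3d

    def split_ground(points, dist_thresh=0.15, iters=2000):
        """RANSAC plane fit: returns (ground_points, above_ground_points)."""
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(points)
        # fit the dominant plane, assumed here to be the road surface
        _, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                       ransac_n=3, num_iterations=iters)
        ground = pcd.select_by_index(inliers)
        above = pcd.select_by_index(inliers, invert=True)
        return np.asarray(ground.points), np.asarray(above.points)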

S3: apply the Delaunay triangulation algorithm to the ground point cloud data to form a TIN, extract the road alignment data from the TIN, and segment the road by its alignment combination; then extract and compute the road's three-dimensional features: the unit tangent vector θ, the principal normal vector β, the binormal vector γ, the three-dimensional curvature k, and the three-dimensional torsion τ, which together form the road's three-dimensional geometric feature set.

For a road centerline parameterized as r(t), these quantities follow the standard space-curve relations:

θ = r′ / ‖r′‖

β = θ′ / ‖θ′‖

γ = θ × β

k = ‖r′ × r″‖ / ‖r′‖³

τ = ((r′ × r″) · r‴) / ‖r′ × r″‖²
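
A sketch of the two computations in S3, assuming the ground points are triangulated in plan view and the extracted centerline is resampled at a uniform spacing before the space-curve features are evaluated by finite differences; the sampling and numerical scheme are assumptions, not the patent's exact procedure:

    import numpy as np
    from scipy.spatial import Delaunay

    def build_tin(ground_points):
        """TIN over the ground points: triangulate in plan (x, y), keep z per vertex."""
        tri = Delaunay(ground_points[:, :2])
        return tri.simplices, ground_points        # triangle vertex indices + 3D vertices

    def alignment_features(centerline, ds=1.0):
        """Frenet-type features of a 3D centerline r(s) sampled every `ds` metres."""
        r1 = np.gradient(centerline, ds, axis=0)        # r'
        r2 = np.gradient(r1, ds, axis=0)                # r''
        r3 = np.gradient(r2, ds, axis=0)                # r'''
        cross = np.cross(r1, r2)
        n1 = np.linalg.norm(r1, axis=1)
        theta = r1 / n1[:, None]                        # unit tangent vector
        k = np.linalg.norm(cross, axis=1) / n1 ** 3     # curvature
        tau = np.einsum('ij,ij->i', cross, r3) / np.maximum(
            np.linalg.norm(cross, axis=1) ** 2, 1e-12)  # torsion
        dtheta = np.gradient(theta, ds, axis=0)
        beta = dtheta / np.maximum(
            np.linalg.norm(dtheta, axis=1), 1e-12)[:, None]  # principal normal
        gamma = np.cross(theta, beta)                   # binormal
        return theta, beta, gamma, k, tau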

Further, a Delaunay triangulation is a set of connected, non-overlapping triangles whose circumcircles contain no other point of the domain.

Further, the road alignment combination refers mainly to the combination of the horizontal and vertical alignments: the horizontal alignment is divided into tangents, circular curves, and transition curves, and the vertical alignment into grade sections and vertical curve sections.

Using an existing three-dimensional operating speed prediction model, the three-dimensional features of the five alignment types are substituted into the model to obtain the vehicle's operating speed on the road.

For the three-dimensional operating speed prediction model, see the paper "Research on a passenger-car operating speed prediction method based on the three-dimensional spatial geometric characteristics of highway alignment".

For the commonly used operating speeds, linear interpolation over the value distributions of the maximum horizontal dynamic visual angle 4 and the maximum vertical dynamic visual angle 5 determines the extent of the view cone ellipse formed by the visual angle corresponding to each operating speed.
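
A sketch of this interpolation; the speed/angle breakpoints below are placeholders, since the patent does not reproduce the underlying value distributions:

    import numpy as np

    # hypothetical (speed km/h -> angle deg) tables; replace with the measured distributions
    SPEEDS      = np.array([ 40,  60,  80, 100, 120])
    H_ANGLE_DEG = np.array([100,  75,  60,  45,  30])   # max horizontal dynamic visual angle
    V_ANGLE_DEG = np.array([ 70,  55,  45,  35,  25])   # max vertical dynamic visual angle

    def dynamic_view_angles(speed_kmh):
        """Linear interpolation of the view-cone apex angles at a given operating speed."""
        alpha = np.interp(speed_kmh, SPEEDS, H_ANGLE_DEG)
        beta = np.interp(speed_kmh, SPEEDS, V_ANGLE_DEG)
        return alpha, beta

For example, dynamic_view_angles(70.0) returns values interpolated between the 60 and 80 km/h rows of the tables above.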

S4: classify the above-ground point cloud data with a classification algorithm and reconstruct it into defined three-dimensional objects, i.e., homogeneous objects; the remaining point cloud data generates multipatch files that form heterogeneous objects. Irrelevant point cloud content that does not belong to the road information, such as other vehicles and pedestrians, is removed, leaving a three-dimensional environment that contains only road information. The road information includes the edge lines on both sides of the road and the obstacles that affect the road view cone, such as guardrails.
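
A minimal sketch of the filtering step, assuming an upstream classifier has already attached a semantic label to every above-ground point; the label names are hypothetical:

    import numpy as np

    # hypothetical label set kept after classification; everything else (other vehicles,
    # pedestrians, vegetation, ...) is discarded as irrelevant to the road scene
    ROAD_LABELS = {"guardrail", "barrier", "road_edge", "sign_structure"}

    def keep_road_objects(points, labels):
        """Keep only above-ground points whose class belongs to the road information."""
        mask = np.array([lab in ROAD_LABELS for lab in labels])
        return points[mask]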

S5: take the vehicle offset toward one edge line of the one-way road as the most unfavorable sight distance position and the offset toward the other side as the most favorable sight distance position; the smaller of the two sight distances is the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment.

The three-dimensional dynamic sight distance is defined on the three-dimensional highway alignment: with the vehicle traveling at a given speed, the search proceeds within the driver's dynamic visual angle, along the direction in which the road extends, from the first visible point on the nearer lane edge line up to the first point that cannot be seen; the distance from that point to the observation point along the route is the three-dimensional dynamic sight distance of the observation point.

The offset of the driver's viewpoint relative to the vehicle center for different standard vehicle types can be determined from the specification; by default, the target point to be seen is taken at a height of 0.1 m on the nearer lane edge line.

The driver's sight centerline is kept parallel to the tangent of the lane line.

The line-of-sight relationship between the ellipse formed at the driver's viewpoint under the dynamic visual angle and the target point is derived from the principle of intervisibility between two points in space. If the target point is occluded by obstacles such as the road itself or objects formed by roadside facilities, it is not visible; otherwise it is visible.

Starting from the lane edge line corresponding to the vehicle center point, target points are placed at 5 m intervals; the view cone between the driver's viewpoint and each target point is formed and advanced forward until the first non-visible target point. The route distance between the previous target point and the driver is the three-dimensional dynamic available sight distance of the vehicle at that position. Driver viewpoint positions are then computed every 5 m along the direction of travel to determine the available sight distance over the whole road section.
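
A sketch of this forward search, where is_visible stands for the two-point intervisibility test against the reconstructed three-dimensional environment and is assumed to be supplied elsewhere:

    def available_sight_distance(viewpoint, edge_line_points, obstacles,
                                 is_visible, step=5.0):
        """Walk target points spaced `step` metres along the nearer lane edge line and
        return the arc length to the last target still visible from `viewpoint`.
        `edge_line_points` are assumed ordered along the road, `step` metres apart."""
        sight = 0.0
        for i, target in enumerate(edge_line_points):
            if not is_visible(viewpoint, target, obstacles):
                break                     # first occluded target: stop the search
            sight = i * step              # arc length to the last visible target
        return sight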

Specifically: on a road containing the lane dividing line 10 and road markings 12, when the driver is at the most unfavorable sight distance vehicle position 9, the vehicle is close to the lane edge line 3. Under the combined occlusion of the roadside obstacle areas 8 and 13 built from the point cloud data and the road alignment, the driver's viewpoint 1, together with the maximum horizontal dynamic visual angle 4 and the maximum vertical dynamic visual angle 5 at the operating speed, forms a view cone centered on the sight centerline 6. The curve length from the projection of the driver's viewpoint 1 onto the lane edge line 11 along the road tangent direction to the farthest visible target point position 7 on the lane centerline 2 is taken as the three-dimensional available sight distance.

The vehicle offsets toward the left and right lane edge lines of the one-way road are taken as the most unfavorable and most favorable sight distance positions; the three-dimensional dynamic sight distance is obtained at both positions, and the smaller value is the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment.

The required driving sight distance at the vehicle's position is calculated with the formula given in the specification and compared with the computed three-dimensional dynamic available sight distance at that position; if the three-dimensional dynamic available sight distance is greater than the required driving sight distance, the sight distance check is satisfied; otherwise it is not.
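
The specification formula itself is not reproduced in the patent; as a hedged stand-in, a common textbook form of the stopping sight distance (reaction distance plus braking distance) can be used for the comparison, with illustrative parameter values:

    def required_stopping_sight_distance(v_kmh, t_react=2.5, f=0.35, grade=0.0):
        """Common textbook stopping sight distance in metres:
        reaction distance v*t/3.6 plus braking distance v^2 / (254*(f + grade))."""
        return v_kmh * t_react / 3.6 + v_kmh ** 2 / (254.0 * (f + grade))

    def sight_distance_check(available_m, v_kmh):
        """Compare the 3D dynamic available sight distance against the requirement."""
        return available_m >= required_stopping_sight_distance(v_kmh)

For example, at 80 km/h with the default parameters this gives about 55.6 m + 72.0 m, roughly 128 m.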

The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and fall within the protection scope of the present invention.

Claims (10)

1. A three-dimensional dynamic sight distance evaluation method based on lidar point cloud data, characterized by comprising the following steps: acquiring vehicle trajectory and point cloud data; preprocessing and separating the point cloud data to form ground point cloud data and above-ground point cloud data; applying the Delaunay triangulation algorithm to the ground point cloud data to form a TIN, extracting the road alignment data from the TIN, and segmenting the road according to its alignment combination to obtain the road's three-dimensional geometric feature set; obtaining the vehicle's operating speed from an operating speed prediction model based on the three-dimensional alignment geometric features, and determining the driver's dynamic visual angle range at each moment from the operating speed; classifying the above-ground point cloud data into homogeneous objects and non-homogeneous objects to obtain a three-dimensional environment containing only road information; and taking the vehicle offset toward one edge line of the one-way road as the most unfavorable sight distance position and the offset toward the other side as the most favorable sight distance position, the smaller of the two sight distances being the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment.

2. The three-dimensional dynamic sight distance evaluation method according to claim 1, characterized by further comprising a sight distance evaluation step: calculating the required driving sight distance at the vehicle's position and comparing it with the three-dimensional dynamic available sight distance at that position; if the three-dimensional dynamic available sight distance is greater than the driving sight distance, the requirement is satisfied; otherwise it is not.

3. The three-dimensional dynamic sight distance evaluation method according to claim 1, characterized in that taking the vehicle offset toward one edge line of the one-way road as the most unfavorable sight distance position and the offset toward the other side as the most favorable sight distance position, the smaller of the two sight distances being the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment, specifically comprises: when the driver is at the most unfavorable sight distance vehicle position, using the view cone ellipse, taking the arc length along the route between the farthest position on the lane edge line the driver can see and the driver's current position corresponding to that edge line as the three-dimensional dynamic sight distance; when the driver is at the most favorable sight distance vehicle position, obtaining the three-dimensional dynamic sight distance in the same way; and comparing the three-dimensional dynamic sight distances obtained at the two positions, the smaller value being the driver's three-dimensional dynamic available sight distance in the real three-dimensional environment.

4. The three-dimensional dynamic sight distance evaluation method according to claim 1, characterized in that a RANSAC segmentation algorithm is applied to the preprocessed point cloud data to form ground data and other above-ground point cloud data.

5. The three-dimensional dynamic sight distance evaluation method according to claim 1, characterized in that the dynamic visual angle is the maximum horizontal and vertical angle the driver can see in the dynamic field-of-view state, the angle defining an elliptical view frustum at each driver position whose projection onto the vertical two-dimensional plane is the view cone ellipse.

6. A system implementing the three-dimensional dynamic sight distance evaluation method according to any one of claims 1-5, characterized by comprising a lidar and an inertial navigation system, the inertial navigation system comprising an inertial navigation unit and a GPS unit.

7. The system according to claim 6, characterized in that the lidar is mounted on top of the vehicle and kept level.

8. The system according to claim 6, characterized in that the inertial navigation unit is installed on the centerline of the vehicle's rear wheels with its Y axis pointing in the vehicle's direction of travel.

9. The system according to any one of claims 6-8, characterized in that the timestamps of the lidar and the inertial navigation system are synchronized, the origin of the point cloud collection coordinate system is at the lidar position, and the inertial navigation system provides the vehicle's position in the geodetic coordinate system.

10. The system according to claim 6, characterized in that intra-frame correction and feature extraction are performed with the LOAM algorithm.
CN202111677039.3A 2021-12-31 2021-12-31 Three-dimensional dynamic sight distance evaluation system and method based on laser radar point cloud data Active CN114445576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111677039.3A CN114445576B (en) 2021-12-31 2021-12-31 Three-dimensional dynamic sight distance evaluation system and method based on laser radar point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111677039.3A CN114445576B (en) 2021-12-31 2021-12-31 Three-dimensional dynamic sight distance evaluation system and method based on laser radar point cloud data

Publications (2)

Publication Number Publication Date
CN114445576A true CN114445576A (en) 2022-05-06
CN114445576B CN114445576B (en) 2024-12-03

Family

ID=81366027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111677039.3A Active CN114445576B (en) 2021-12-31 2021-12-31 Three-dimensional dynamic sight distance evaluation system and method based on laser radar point cloud data

Country Status (1)

Country Link
CN (1) CN114445576B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080021680A1 (en) * 2005-10-04 2008-01-24 Rdv Systems, Ltd. Method and apparatus for evaluating sight distance
CN102230794A (en) * 2011-04-01 2011-11-02 北京航空航天大学 Method for dynamically measuring sight distance of drivers based on video
CN109613544A (en) * 2018-12-26 2019-04-12 长安大学 A kind of highway sighting distance detection method based on laser radar


Also Published As

Publication number Publication date
CN114445576B (en) 2024-12-03

Similar Documents

Publication Publication Date Title
CN109556615B (en) Driving map generation method based on multi-sensor fusion cognition of automatic driving
US11200433B2 (en) Detection and classification systems and methods for autonomous vehicle navigation
WO2022156276A1 (en) Target detection method and apparatus
JP7090597B2 (en) Methods and systems for generating and using location reference data
JP6171612B2 (en) Virtual lane generation apparatus and program
EP2372304B1 (en) Vehicle position recognition system
KR102558055B1 (en) Suboptimal estimation method
US9074906B2 (en) Road shape recognition device
JP2022110001A (en) Method and system for generating and using localization reference data
JP2023073257A (en) Output device, control method, program, and storage medium
EP2372308B1 (en) Image processing system and vehicle control system
GB2626681A (en) Systems and methods for vehicle navigation
GB2627091A (en) Systems and methods for determining road safety
CN110470309B (en) Vehicle position estimation device
US20230236037A1 (en) Systems and methods for common speed mapping and navigation
US20230202473A1 (en) Calculating vehicle speed for a road curve
CN111429756A (en) A video technology-based road tunnel anti-rear-end warning method
CN112562061A (en) Driving vision enhancement system and method based on laser radar image
Li et al. Pitch angle estimation using a Vehicle-Mounted monocular camera for range measurement
Xiong et al. A 3d estimation of structural road surface based on lane-line information
JP5746996B2 (en) Road environment recognition device
JP2019148889A (en) Road boundary detection device
CN114445576A (en) Three-dimensional dynamic sight distance evaluation system and method based on laser radar point cloud data
CN112530270B (en) Mapping method and device based on region allocation
Jarnea et al. Advanced driver assistance system for overtaking maneuver on a highway

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant