CN101975951B - Field environment barrier detection method fusing distance and image information - Google Patents
- Publication number
- CN101975951B
- Authority
- CN
- China
- Prior art keywords
- area
- points
- coordinate system
- camera
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Abstract
The invention belongs to the technical field of obstacle detection and specifically relates to a field environment obstacle detection method that fuses distance and image information. The object of the invention is to provide a method that can detect the common obstacles encountered by an unmanned vehicle under field driving conditions, thereby helping the vehicle plan its driving path and improving the autonomy of unmanned vehicles driving in the field. The method comprises the following steps: establishing a mathematical model, obstacle detection from lidar distance data, camera image processing, and result fusion. The invention can detect and identify obstacles commonly encountered by unmanned vehicles in field conditions, such as grass, roads, trees and bushes, and can model the color of the drivable area to further detect abnormal parts within it. The environment around the vehicle can be divided into a "non-drivable area", a "drivable area" and an "unknown area", which helps the vehicle plan its driving path and improves the autonomy of unmanned vehicles driving in the field.
Description
Technical field
The invention belongs to the technical field of obstacle detection and in particular relates to a field environment obstacle detection method that fuses distance and image information.
Background art
Obstacle detection for unmanned vehicles on the road is an important component of environmental perception research. Sensors commonly used for obstacle detection include lidar, cameras, millimeter-wave radar and ultrasonic sensors. Lidar measures distance from the time difference between the emitted light and the light reflected from an object's surface. It acquires distance data directly and provides the vehicle with a convenient and intuitive description of the environment. Lidar has many specific applications, such as object positioning and tracking, environment modeling and obstacle avoidance, simultaneous localization and mapping (SLAM), and classification of terrain and landform features; the echo intensity of the lidar return can also be used for obstacle detection and tracking. Cameras capture rich physical information about the surrounding environment and are more covert, so they are widely used; both single-camera and multi-camera systems have been studied extensively in the prior art. However, camera images are easily affected by illumination, smoke and other environmental conditions, and although the stereo vision formed by multiple cameras can recover actual distances, matching between images is complicated and time-consuming, which limits its practical effect. Millimeter-wave radar is small and light, has a long measuring range and penetrates fog, smoke and dust well, but its resolution and accuracy are low; it is mainly used for vehicle collision avoidance, vehicle-following systems based on inter-vehicle distance detection, and rear-end collision mitigation braking systems. Ultrasonic sensors measure distance from the time between emitting an ultrasonic pulse and receiving its reflection from an obstacle. They are small and inexpensive and therefore widely used, but because of their low precision they are only suitable for applications with modest requirements on environmental perception accuracy.
Lidar scan data and optical images are highly complementary in describing the environment: laser scanning can quickly and accurately acquire dense three-dimensional coordinates of object surfaces, while optical images contain rich color information. Fusing laser scan data with optical images therefore yields more comprehensive information about the driving environment of an unmanned vehicle and improves both the speed of obstacle detection and the adaptability to complex environments. Many researchers have studied the fusion of distance and image information. Peter fused monocular camera images with sonar range data to build an indoor environment model, but the approach is not suitable for complex outdoor field environments. Tsai-hong Hong of the University of Maryland, addressing the requirements of the U.S. Demo III program for experimental unmanned vehicles driving in field environments, proposed an obstacle detection algorithm fusing three-dimensional lidar data and camera images, but it only recognizes three typical road conditions, namely road signs, ponds and roads, and does not address other common obstacles in the environment such as grass and trees. Monteiro used fused lidar and camera information to detect pedestrians and vehicles moving on the road, without recognizing or understanding the surrounding environment. Xiang Zhiyu proposed a method for detecting obstacles in grass by fusing lidar and camera information: the lidar data are first grouped and obstacle points identified, which are then mapped into the camera image to segment the image into non-drivable and drivable regions; this reduces the misjudgments that occur when lidar alone is used and improves the completeness of the final obstacle contour detection. However, when the colors of the obstacles and the grass are very close, the effectiveness of the fused detection degrades, and the method does not delimit a drivable area from the determined non-drivable area, so it lacks a more specific understanding of the environment. In summary, the prior art does not provide an obstacle detection method that can detect the common obstacles encountered by an unmanned vehicle driving in the field, so as to help the vehicle plan its driving path and improve the autonomy of unmanned vehicles operating off-road.
Summary of the invention
The object of the invention is to provide a field environment obstacle detection method fusing distance and image information that can detect the common obstacles encountered by an unmanned vehicle under field driving conditions, thereby helping the vehicle plan its driving path and improving the autonomy of unmanned vehicles driving in the field.
The invention is achieved as follows:
A field environment obstacle detection method fusing distance and image information comprises the following steps:
Step 1: establish a mathematical model;
Step 2: obstacle detection from lidar distance data;
Step 3: camera image processing;
Step 4: result fusion.
In the step of establishing a mathematical model described above, the vehicle-body, lidar, camera and inertial-navigation coordinate systems are established and related to one another, specifically comprising the following steps:
(1) Establish the vehicle-body coordinate system O_b X_b Y_b Z_b;
The origin O_b is a fixed point on the vehicle body, namely the projection of the lidar light source onto the plane coinciding with the ground; the X_b axis points to the right of the vehicle, the Y_b axis points straight ahead of the vehicle, and the Z_b axis points upward; the height of the origin of the vehicle-body coordinate system above the ground is h_0;
(2) Establish the lidar coordinate system O_l X_l Y_l Z_l;
The origin O_l of this coordinate system is defined to lie at the center of the lidar light source, with the X_l axis pointing vertically upward, the Y_l axis horizontally to the right, and the Z_l axis pointing toward the front of the vehicle;
(3) Establish the camera coordinate system O_c X_c Y_c Z_c;
The origin O_c lies at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead;
(4) Establish the inertial-navigation coordinate system O_I X_I Y_I Z_I;
The inertial-navigation coordinate system has the same orientation as the vehicle-body coordinate system;
(5) Determine the coordinate transformation relations, specifically comprising the following steps:
(a) Transform from the lidar coordinate system O_l X_l Y_l Z_l to the vehicle-body coordinate system O_b X_b Y_b Z_b;
First rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, and then translate downward by the distance h_0 along the Z_l axis;
the parameters in the formula have the same meanings as above;
(b) Transform from the camera coordinate system to the image coordinate system;
According to the existing camera pinhole model, the three-dimensional coordinates of a point in the camera coordinate system are converted into a pixel position in the image,
where A is the camera intrinsic matrix, α_x and α_y are the scale factors of the u and v axes of the image coordinate system, u_0 and v_0 are the optical center, and γ is the skew (non-perpendicularity) factor between the u and v axes; in many cases γ = 0; the other parameters in the formula have the same meanings as above;
(c) Transform from the lidar coordinate system O_l X_l Y_l Z_l to the camera coordinate system O_c X_c Y_c Z_c,
where R and t are respectively the rotation matrix and translation vector from the lidar coordinate system to the camera coordinate system; the other parameters in the formula have the same meanings as above;
(d) Distance data correction;
Assuming that the vehicle pitch angle output by the inertial-navigation device is α and the roll angle is γ, the corrected distance data are obtained by the corresponding formula;
the parameters in the formula have the same meanings as above.
The lidar distance data obstacle detection step described above comprises the following steps:
(1) Distance data preprocessing;
Filter the lidar distance data;
(2) Region segmentation;
Segment the three-dimensional point cloud obtained by the lidar scan of the environment into multiple regions;
(3) Region identification;
Analyze and identify grass, roads, trees and bushes.
The region segmentation step described above comprises the following steps:
(a) Retrieve the k nearest points in the neighborhood of a three-dimensional point p_i, and compute the normal of the point set Q_i formed by these k+1 points as the normal vector of that scan point;
(b) Combine the coordinates of the three-dimensional point p_i and its normal n_i into a feature vector. The Cartesian distance between two three-dimensional points is ρ_e(p_i, p_j) = ‖p_i − p_j‖, where p_j together with its coordinates and normal n_j forms a feature vector in the same way; the angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5 × ⟨n_i, n_j⟩ / (‖n_i‖ ‖n_j‖). Different thresholds ρ_e,max and ρ_a,max are set for the spatial distance and the angular distance; when the spatial distance or the angular distance between two points exceeds the corresponding threshold, the two points are considered not to lie in the same region;
(c) For all adjacent points, compute the mutual spatial distance ρ_e and angular distance ρ_a; if ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, merge the two points into one cluster;
(d) If a point cannot be clustered with any other point, create a new region;
(e) After all points have been clustered, check the number of points in each cluster; if the number is less than a threshold n_cmin and the average height of the points in the cluster is much greater than the height h_max of obstacles the vehicle can cross, the cluster is regarded as noise and deleted.
In the region segmentation step described above, k ranges from 10 to 30.
The region identification step described above specifically comprises the following steps:
(a) Compute region features;
Compute the average height, the singular values and the region surface normal vector; the calculation methods are as follows:
Average height: the mean height of the points in the region, where n is the number of points in the region and i is the index of a point;
Singular values: [σ_1 σ_2 σ_3]; form a matrix from the coordinates of all points in the region and perform the singular value decomposition UΣV^T; the elements of Σ are the singular values, and σ_1, σ_2, σ_3 are sorted from largest to smallest;
Region surface normal vector: γ, the unit vector in V^T corresponding to the smallest singular value σ_3;
(b) Identification, comprising the following steps:
① Grass identification;
A region is identified as grass when the following conditions are met:
1) The average height of the points in the region is lower than the vehicle obstacle-crossing height h_max;
2) The penetration rate of the points in the region is high, with a singular value distribution σ_1 ≈ σ_2 ≈ σ_3;
3) The region normal vector γ points upward;
After the grass is identified, these points are merged into the same region, which is labeled a drivable area;
② Road identification;
A region is identified as road when the following conditions are met:
1) The height of the points in the region is lower than the vehicle obstacle-crossing height;
2) The penetration rate of the points in the region is low, with σ_1 > σ_2 >> σ_3;
3) The region normal vector γ points upward;
③ Tree-trunk identification;
When the singular values follow the pattern σ_1 >> σ_2 ≈ σ_3 and the region normal vector γ lies in the horizontal direction, the region is identified as a tree trunk;
④ Bush identification;
When the singular values satisfy σ_1 > σ_2 > σ_3 and the region normal vector γ also lies in the horizontal direction, the region is identified as a bush.
In the identification step described above, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies within [1, 4], σ_1 ≈ σ_2; when the ratio σ_1/σ_2 is greater than 50, σ_1 >> σ_2.
The camera image processing step described above specifically comprises the following steps:
(1) Transform the lidar data within the segmented regions into the image coordinate system of the camera;
(2) Further identify the preliminarily judged drivable area;
This specifically comprises the following steps:
(a) Build a Gaussian mixture model;
Assume that the sequence of pixels in the image is {x_1, x_2, ..., x_i} and that the value of the i-th pixel is x_i = [R_i, G_i, B_i]; the probability of this pixel is expressed by a mixture of Gaussian distributions,
where K is the number of Gaussian distributions (3 to 5 are chosen), ω_k,i is the estimated weight, μ_k,i is the mean of the k-th Gaussian distribution, and Σ_k,i is the covariance matrix of the k-th Gaussian distribution, which is assumed to follow a simplified form; the probability density of pixel i in the k-th Gaussian distribution function is computed by the corresponding formula,
where μ_i is the mean of the i-th Gaussian distribution and Σ is the covariance matrix of the Gaussian distribution;
(b) Region identification;
Compare the absolute difference between the established Gaussian model of the drivable-area color and the image of the current region to be judged; if |x_i − f| ≥ T, where T is a threshold, the point is a non-drivable-area point.
The result fusion step described above specifically comprises the following steps:
Fuse the discrimination results of the lidar and the camera, judge the drivable and non-drivable areas, and construct an environment map; combining the discrimination results of the lidar and the camera, the grid map is divided into drivable and non-drivable areas according to the following principles:
(1) If the lidar discrimination result is a non-drivable area, the grid cell is marked as a non-drivable area;
(2) If the lidar discrimination result is a drivable area, the camera discrimination result is analyzed further;
(a) If the camera discrimination result is a drivable area, the cell is a drivable area;
(b) If the camera discrimination result is a non-drivable area, the cell is a non-drivable area;
(3) If there is an area detected by neither the lidar nor the camera, that area is an unknown area.
The beneficial effects of the invention are as follows:
The invention can detect and identify obstacles commonly encountered by unmanned vehicles under field driving conditions, such as grass, roads, trees and bushes, and can model the color of the drivable area to further detect abnormal parts within it. The environment around the vehicle can be divided into a "non-drivable area", a "drivable area" and an "unknown area", which helps the vehicle plan its driving path and improves the autonomy of unmanned vehicles driving in the field.
Description of the drawings
Figure 1 is a flow chart of the field environment obstacle detection method fusing distance and image information according to the invention;
Figure 2 is a schematic diagram of the relative positions of the vehicle-body coordinate system, the lidar coordinate system and the camera coordinate system according to the invention;
Figure 3 is a schematic diagram of the coordinate transformation from the lidar coordinate system to the camera coordinate system according to the invention.
Detailed description of the embodiments
The field environment obstacle detection method fusing distance and image information of the invention is described below with reference to the drawings and embodiments.
As shown in Figure 1, a field environment obstacle detection method fusing distance and image information comprises the following steps:
Step 1: establish a mathematical model;
When establishing the vehicle perception model, the vehicle-body, lidar, camera and inertial-navigation coordinate systems must be established and related to one another, specifically comprising the following steps:
(1) Establish the vehicle-body coordinate system O_b X_b Y_b Z_b;
Establish a vehicle-body coordinate system rigidly attached to the vehicle body; it moves with the vehicle. The vehicle-body coordinate system is defined as follows: the origin O_b is a fixed point on the vehicle body, designated as the projection of the lidar light source onto the plane coinciding with the ground; the X_b axis points to the right of the vehicle, the Y_b axis points straight ahead of the vehicle, and the Z_b axis points upward. The height of the origin of the vehicle-body coordinate system above the ground is h_0.
(2) Establish the lidar coordinate system O_l X_l Y_l Z_l;
The lidar is mounted on a pan-tilt unit. The laser emitted by the lidar can only scan two-dimensional data; three-dimensional scanning is achieved by swinging the pan-tilt unit. The lidar coordinate system is therefore not rigidly attached to the lidar but is considered together with the pan-tilt unit as a whole, i.e. the origin O_l of this coordinate system is defined to lie at the center of the lidar light source, with the X_l axis pointing vertically upward, the Y_l axis horizontally to the right, and the Z_l axis pointing toward the front of the vehicle.
(3) Establish the camera coordinate system O_c X_c Y_c Z_c;
Establish a camera coordinate system rigidly attached to the camera. The origin O_c lies at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead.
(4) Establish the inertial-navigation coordinate system O_I X_I Y_I Z_I;
The inertial-navigation device is fixedly mounted on the vehicle body. An inertial-navigation coordinate system with the same orientation as the vehicle-body coordinate system is established. The installation error of the inertial-navigation device can be determined from driving experiments performed at installation time; after the installation error is corrected, the attitude angles output by the inertial-navigation device are the attitude angles of the vehicle body.
When obstacle recognition is performed with lidar data, the pitch and roll angles of the vehicle body sensed by the inertial navigation are needed; by constructing a rotation matrix, the distance data are corrected into a coordinate system whose origin lies in the horizontal plane.
The relative positions of the vehicle-body coordinate system, the lidar coordinate system and the camera coordinate system are shown in Figure 2.
(5) Determine the coordinate transformation relations, specifically comprising the following steps:
(a) Transform from the lidar coordinate system O_l X_l Y_l Z_l to the vehicle-body coordinate system O_b X_b Y_b Z_b;
First rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, and then translate downward by the distance h_0 along the Z_l axis.
This yields the transformation from lidar coordinates to vehicle-body coordinates.
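A minimal numerical sketch of this transformation is given below, assuming the axis conventions stated above (X_l up, Y_l right, Z_l forward; X_b right, Y_b forward, Z_b up) and that the lidar origin sits a height h_0 above the vehicle-body origin on the ground. The composed matrix is one consistent reading of the two 90° rotations and the translation, not the patent's printed formula.

```python
import numpy as np

def lidar_to_body(points_l, h0):
    """Map (N, 3) lidar-frame points (X up, Y right, Z forward) into the
    vehicle-body frame (X right, Y forward, Z up), assuming the lidar
    origin lies h0 above the body-frame origin on the ground."""
    # Axis permutation equivalent to the two 90-degree rotations described:
    # x_b = y_l, y_b = z_l, z_b = x_l
    R = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])
    t = np.array([0.0, 0.0, h0])     # lidar sits h0 above the ground-level origin
    return points_l @ R.T + t

# Example: a point 5 m straight ahead of the lidar
print(lidar_to_body(np.array([[0.0, 0.0, 5.0]]), h0=1.2))  # -> [[0.  5.  1.2]]
```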
(b) Transform from the camera coordinate system to the image coordinate system;
According to the existing camera pinhole model, the three-dimensional coordinates of a point in the camera coordinate system are converted into a pixel position in the image,
where A is the camera intrinsic matrix, α_x and α_y are the scale factors of the u and v axes of the image coordinate system, u_0 and v_0 are the optical center, and γ is the skew factor between the u and v axes; in many cases γ = 0. A can be obtained by the planar target method proposed by Zhang Zhengyou et al.
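For reference, the textbook form of the pinhole projection consistent with the parameters named above is reproduced below; since the printed formula is not included in the text, this reproduction is an assumption.

```latex
s\begin{bmatrix}u\\ v\\ 1\end{bmatrix}
= A\begin{bmatrix}X_c\\ Y_c\\ Z_c\end{bmatrix},\qquad
A=\begin{bmatrix}\alpha_x & \gamma & u_0\\ 0 & \alpha_y & v_0\\ 0 & 0 & 1\end{bmatrix}
```

where (X_c, Y_c, Z_c) are the coordinates of the point in the camera frame and s is a scale factor equal to the depth Z_c.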
According to the planar-target camera calibration method proposed by Zhang Zhengyou et al., the camera photographs the same planar target from two or more different orientations; both the camera and the planar target may move freely, and the motion parameters need not be known. During calibration it is assumed that the internal parameters of the camera remain unchanged and that only the external parameters change when the target is photographed from different angles. The camera captures images of a checkerboard target at different positions, the corner points of the squares on the checkerboard target are extracted as feature points, and the relationship between the target points and the corresponding image points is established, from which the internal parameters of the camera and the external parameters of the target relative to the camera coordinate system are computed.
Specifically, with a planar checkerboard as the target, the camera captures images of the target at different positions; the corner points of the squares on the checkerboard target are extracted as feature points of the target plane, the relationship between the feature points of the target plane and the corresponding points on the image plane is established, and the camera intrinsic matrix A and the external parameters R_c, t_c of the target relative to the camera coordinate system are computed. The calibration process is as follows:
First, solve for the homography matrix H:
Let a feature point of the target plane and the corresponding point on the image plane be related by the mapping shown in formula (4).
In formula (4), s is a scale factor, r_c1 and r_c2 are the first two columns of the rotation matrix r_c that transforms the feature points of the target plane from the target-plane coordinate system to the image coordinate system, and t_c is the corresponding translation vector; A is the camera intrinsic matrix, where α_x and α_y are the scale factors of the u and v axes of the image coordinate system, u_0 and v_0 are the optical center, and γ is the skew factor between the u and v axes (in many cases γ = 0).
Let H = A[r_c1 r_c2 t_c]; then formula (4) can be abbreviated as formula (5).
Let h_1, h_2, h_3 denote the transposes of the three column vectors of H; the mapping between points on the image plane and feature points of the target plane can then be written in the form of formula (6).
From the n points on each captured image, the matrix H_i corresponding to each image can be solved by the least-squares method, i = 1, 2, ....
Next, from the H matrices of all the images, solve for the intermediate vector b:
Let B = A^(-T) A^(-1); then B can be written in the vector form shown in formula (8):
b = [B_11 B_12 B_22 B_13 B_23 B_33]^T    (8)
Then the relation shown in formula (9) holds,
where h_i = [h_i1 h_i2 h_i3]^T and h_j = [h_j1 h_j2 h_j3]^T, with i = 1, 2, ...; j = 1, 2, ...;
v_ij = [h_i1·h_j1, h_i1·h_j2 + h_i2·h_j1, h_i2·h_j2, h_i3·h_j1 + h_i1·h_j3, h_i3·h_j2 + h_i2·h_j3, h_i3·h_j3]^T    (10)
All the v_ij are assembled into a matrix V, which is a 2n×6 matrix, n = 1, 2, ...; the relation shown in formula (11) holds:
Vb = 0    (11)
b can be solved from formula (11).
Finally, the camera intrinsic matrix A is decomposed from b, and R_c, t_c are solved using A^(-1):
Decomposing the camera internal parameters from b proceeds as follows: solve for A^(-1) from b using the Cholesky matrix decomposition algorithm, then invert to obtain A;
The external parameters calibrated for each image are:
r_c1 = λ A^(-1) h_1    (12)
r_c2 = λ A^(-1) h_2    (13)
r_c3 = r_c1 × r_c2    (14)
t_c = λ A^(-1) h_3    (15)
where λ = 1/‖A^(-1) h_1‖ = 1/‖A^(-1) h_2‖, so the rotation matrix among the calibrated external parameters is:
R_c = [r_c1 r_c2 r_c3]    (16)
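In practice, the same planar-checkerboard intrinsic calibration can also be carried out with off-the-shelf tools. The sketch below uses OpenCV as a stand-in for the closed-form derivation above; the board size and image file pattern are illustrative assumptions.

```python
import glob
import cv2
import numpy as np

board = (9, 6)                      # inner corners per row/column (assumed board)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)  # target-plane grid

obj_pts, img_pts = [], []
for fname in glob.glob("checkerboard_*.png"):    # images of the target at different poses
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# A -> intrinsic matrix (alpha_x, alpha_y, u0, v0); rvecs/tvecs -> per-view R_c, t_c
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)
print("intrinsic matrix A:\n", A)
```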
(c) Transform from the lidar coordinate system O_l X_l Y_l Z_l to the camera coordinate system O_c X_c Y_c Z_c,
where R and t are respectively the rotation matrix and translation vector from the lidar coordinate system to the camera coordinate system; R and t are computed as follows:
A planar checkerboard target of alternating black and white squares is placed at different positions in front of the camera and the lidar; the camera and the lidar are started simultaneously, and visible-light images and distance data are collected and recorded. This produces a series of "image-distance" pairs. This target is used both for calibrating the camera intrinsic matrix A and for calibrating the external parameters R and t between the camera and the lidar.
As shown in Figure 3, taking the i-th target plane as an example, the distance from the origin O_c of the camera coordinate system to the i-th target plane is λ_c,i and the normal direction is γ_c,i; the distance from the origin O_l of the three-dimensional lidar coordinate system to the i-th target plane is λ_l,i and the normal direction is γ_l,i; they are computed as follows,
where R_3,c,i denotes the third column of R_c,i. The normal directions and distances from the origin of the camera coordinate system to all the target planes can be written as:
Γ_c = [γ_c,1 γ_c,2 ... γ_c,n]    (20)
Λ_c = [λ_c,1 λ_c,2 ... λ_c,n]^T    (21)
where Γ_c is a 3×n matrix, Λ_c is an n×1 column vector, and n is the number of "distance-image" pairs.
The target plane is fitted from the target points scanned by the lidar using the least-squares method, giving the normal direction γ_l,i and the distance λ_l,i from the origin of the lidar coordinate system to the i-th target plane. Let x_l,i be the m_i×3 set of points of the i-th target plane, where m_i is the number of points on the target plane, and let the mean of the point set be computed; then
the centered point set is decomposed by singular value decomposition, giving
γ_l,i and λ_l,i as
γ_l,i = U_3,l,i    (24)
where U_3,l,i denotes the third column of U_l,i. The normal directions and distances from the origin of the lidar coordinate system to all the target planes can be written as:
Γ_l = [γ_l,1 γ_l,2 ... γ_l,n]    (26)
Λ_l = [λ_l,1 λ_l,2 ... λ_l,n]^T    (27)
where Γ_l is a 3×n matrix, Λ_l is an n×1 column vector, and n is the number of "distance-image" pairs.
R = V U^T    (28)
where V and U are the result of the singular value decomposition.
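A sketch of the lidar-side plane fitting and of the rotation estimate is given below. It assumes the common construction in which R is recovered from the SVD of the correlation of the two sets of plane normals; the exact matrix decomposed in formula (28) is not printed in the text, so this pairing is an assumption.

```python
import numpy as np

def plane_from_lidar_points(points):
    """Fit a plane to an (m, 3) array of lidar points on the target by SVD.
    Returns the unit normal and the distance from the lidar origin to the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance
    distance = abs(normal @ centroid)      # origin-to-plane distance
    return normal, distance

def rotation_from_normals(gamma_l, gamma_c):
    """Estimate R mapping lidar-frame plane normals (3, n) onto camera-frame
    normals (3, n) in the least-squares sense (Kabsch-style solution)."""
    u, _, vt = np.linalg.svd(gamma_c @ gamma_l.T)   # assumed pairing of the normal sets
    return u @ vt                                   # R such that gamma_c ~= R @ gamma_l
```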
(d) Distance data correction;
Assuming that the vehicle pitch angle output by the inertial-navigation device is α and the roll angle is γ, the corrected distance data can be obtained by the following formula:
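One common convention for this correction, given here as an assumption since only the pitch and roll angles are named above and the printed formula is not reproduced, is:

```latex
p' = R_y(\gamma)\, R_x(\alpha)\, p,\qquad
R_x(\alpha)=\begin{bmatrix}1&0&0\\0&\cos\alpha&-\sin\alpha\\0&\sin\alpha&\cos\alpha\end{bmatrix},\quad
R_y(\gamma)=\begin{bmatrix}\cos\gamma&0&\sin\gamma\\0&1&0\\-\sin\gamma&0&\cos\gamma\end{bmatrix}
```

where p is a point in the vehicle-body frame and α, γ are the measured pitch and roll angles; the rotation order and sign conventions are assumptions.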
Step 2: obstacle detection from lidar distance data, comprising the following steps:
(1) Distance data preprocessing;
Because the lidar is affected by drift, object surface characteristics, vehicle motion and other factors, the range data contain a certain amount of noise, so the raw distance data should first be preprocessed. The measured data include noise points and erroneous measurements caused by disturbances; filtering these points improves the computation speed and the accuracy of feature detection. In addition, because of the mixed-pixel phenomenon of the laser, the raw data contain some isolated data points that are far from the preceding and following data points; such points are meaningless for understanding the environment. These scattered and isolated points can therefore be handled by median filtering with a 3×3 template.
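A minimal sketch of this preprocessing follows, assuming the range measurements are organized as a 2D range image (rows from the pan-tilt sweep, columns from the beam angles); the array layout is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_range_image(range_image):
    """Suppress isolated mixed-pixel returns with a 3x3 median filter.
    range_image: 2D array of distances in meters."""
    return median_filter(range_image, size=3)

# Example with a single spurious spike
scan = np.full((5, 5), 10.0)
scan[2, 2] = 80.0                          # isolated outlier far from its neighbors
print(preprocess_range_image(scan)[2, 2])  # -> 10.0
```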
(2) Region segmentation;
The three-dimensional point cloud obtained by the lidar scan of the environment is segmented into multiple regions, which are then identified separately.
There are several segmentation methods for point cloud data in the prior art. One places the three-dimensional coordinates and the color of each point together in a six-dimensional vector, defines a distance criterion that fuses the three-dimensional coordinates and the color, and uses region growing to segment out planes of similar color in the scene. Another fits planes to the object surfaces and grows regions from plane seeds, thereby segmenting the point cloud. A third is the Radially Bounded Nearest Neighbor (RBNN) clustering method, which segments the point cloud well; its steps are as follows (see the sketch after this list):
(a) Retrieve the k nearest points in the neighborhood of a three-dimensional point p_i, and compute the normal of the point set Q_i formed by these k+1 points as the normal vector of that scan point. In this embodiment k ranges from 10 to 30, for example 10, 20 or 30.
The specific computation is as follows: perform the singular value decomposition UΣV^T on the points in Q_i, and take the unit vector of V^T corresponding to the smallest value in Σ as the normal of p_i. This singular value decomposition is prior art.
(b) Combine the coordinates of the three-dimensional point p_i and its normal into a feature vector.
Define the Cartesian distance between two three-dimensional points as ρ_e(p_i, p_j) = ‖p_i − p_j‖, where p_j together with its coordinates and normal forms a feature vector in the same way; the angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5 × ⟨n_i, n_j⟩ / (‖n_i‖ ‖n_j‖). Different thresholds ρ_e,max and ρ_a,max are set for the spatial distance and the angular distance; when the spatial distance or the angular distance between two points exceeds the corresponding threshold, the two points are considered not to lie in the same region.
(c) For all adjacent points, compute the mutual spatial distance ρ_e and angular distance ρ_a. If ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, the two points can be merged into one cluster. Adjacent points are merged into one region as far as possible.
(d) If a point cannot be clustered with any other point, a new region is created.
(e) After all points have been clustered, the number of points in each cluster is checked; if the number is less than a threshold n_cmin and the average height of the points in the cluster is much greater than the height h_max of obstacles the vehicle can cross, the cluster is regarded as noise and can be deleted. The threshold n_cmin and the obstacle-crossing height h_max are chosen according to the actual situation.
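A compact sketch of this normal-based clustering follows, assuming unorganized points in an (N, 3) array and illustrative threshold values; it follows the steps above in a simplified greedy form rather than reproducing a particular RBNN library implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_normals(points, k=20):
    """Estimate a normal for each point from the SVD of its k nearest neighbors."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k + 1)           # the point itself plus k neighbors
        nb = points[idx]
        _, _, vt = np.linalg.svd(nb - nb.mean(axis=0))
        normals[i] = vt[-1]                       # smallest-singular-value direction
    return normals

def segment(points, normals, rho_e_max=0.5, rho_a_max=0.1):
    """Greedy clustering: merge neighboring points whose spatial and
    angular distances both fall below the thresholds."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    next_label = 0
    for i in range(len(points)):
        if labels[i] == -1:
            labels[i] = next_label                # start a new region
            next_label += 1
        for j in tree.query_ball_point(points[i], rho_e_max):
            cos = normals[i] @ normals[j] / (
                np.linalg.norm(normals[i]) * np.linalg.norm(normals[j]))
            if 0.5 - 0.5 * cos <= rho_a_max and labels[j] == -1:
                labels[j] = labels[i]             # merge neighbor into current region
    return labels
```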
(3) Region identification;
When an unmanned vehicle drives in a field environment, there are various terrain types such as grass, roads, trees and bushes, which must be analyzed and identified, specifically comprising the following steps:
(a) Compute region features;
Before identification, several features of the region must be computed, including the average height, the singular values and the region surface normal vector. The calculation methods are as follows:
Average height: the mean height of the points in the region, where n is the number of points in the region and i is the index of a point.
Singular values: [σ_1 σ_2 σ_3]. Form a matrix from the coordinates of all points in the region and perform the singular value decomposition UΣV^T; the elements of Σ are the singular values, and σ_1, σ_2, σ_3 are sorted from largest to smallest.
Region surface normal vector: γ, the unit vector in V^T corresponding to the smallest singular value σ_3.
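A sketch of these per-region features follows, assuming the points of one region are given as an (n, 3) array with z as height; centering the coordinate matrix before the decomposition is an assumption, since the text does not state whether it is applied.

```python
import numpy as np

def region_features(points):
    """Return (average height, sorted singular values, surface normal) for one region."""
    avg_height = points[:, 2].mean()                   # mean z over the n region points
    centered = points - points.mean(axis=0)            # centering assumed
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    # s is returned in descending order: sigma_1 >= sigma_2 >= sigma_3
    normal = vt[-1]                                    # row of V^T for the smallest sigma_3
    return avg_height, s, normal
```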
(b) Identification, comprising the following steps:
① Grass identification;
When the laser strikes grass, the received data points are very scattered because of the disordered grass blades, and ordinary edge detection methods cannot detect grass reliably. The biggest difference between grass and an ordinary road is the density of the scan points, i.e. the penetration rate. For grass with sparse blades and leaves, the laser-scanned surface has many gaps and the penetration rate is high; dense, leafy bushes have a low penetration rate; and if the scanned surface is a hard solid plane, the distance values of the scan points are evenly distributed and the penetration rate is approximately zero. When an unmanned vehicle drives in a field environment, grass covering the ground can be driven over. Therefore, when grass with a high penetration rate is detected, it should be labeled a drivable area.
A region is identified as grass when the following conditions are met:
1) The average height of the points in the region is lower than the vehicle obstacle-crossing height h_max.
2) The penetration rate of the points in the region is high, with a singular value distribution σ_1 ≈ σ_2 ≈ σ_3.
3) The region normal vector γ points upward.
Because the points belonging to grass may be assigned to different regions during segmentation, after the grass is identified these points must be merged into the same region, which is labeled a drivable area. Merging here means placing the points of several regions into one region.
② Road identification;
The biggest difference between a road region and a grass region is the penetration of the laser beam. The road surface is relatively flat, so the beam strikes the road surface and the scan points are distributed fairly evenly. A road is a drivable area.
A region is identified as road when the following conditions are met:
1) The height of the points in the region is lower than the vehicle obstacle-crossing height.
2) The penetration rate of the points in the region is low, with σ_1 > σ_2 >> σ_3.
3) The region normal vector γ points upward.
③ Tree-trunk identification;
The scan points on a tree trunk are generally distributed over part of a cylindrical surface. When the singular values follow the pattern σ_1 >> σ_2 ≈ σ_3 and the region normal vector γ lies in the horizontal direction, the region is identified as a tree trunk.
④ Bush identification;
The scan points on a bush are distributed over a plane. When the singular values satisfy σ_1 > σ_2 > σ_3 and the region normal vector γ also lies in the horizontal direction, the region is identified as a bush.
In the invention, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies within [1, 4], σ_1 ≈ σ_2 is considered to hold; when the ratio σ_1/σ_2 is greater than 50, σ_1 >> σ_2 is considered to hold.
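A sketch of these decision rules follows, using the stated ratio thresholds; the penetration-rate cutoffs and the numeric tests for "normal points upward/horizontal" are illustrative assumptions, since the text gives no values for them.

```python
import numpy as np

def classify_region(avg_height, s, normal, penetration, h_max=0.3):
    """Apply the grass/road/trunk/bush rules to one region's features.
    s: singular values sorted descending; penetration: assumed in [0, 1]."""
    s1, s2, s3 = np.asarray(s) + 1e-12      # guard against division by zero
    near = lambda a, b: a / b <= 4           # sigma_a ~ sigma_b  (ratio in [1, 4])
    much = lambda a, b: a / b > 50           # sigma_a >> sigma_b (ratio above 50)
    up = abs(normal[2]) > 0.9                # assumed cutoff for "points upward"
    horizontal = abs(normal[2]) < 0.3        # assumed cutoff for "horizontal"

    if avg_height < h_max and up and penetration > 0.5 and near(s1, s2) and near(s2, s3):
        return "grass (drivable)"
    if avg_height < h_max and up and penetration < 0.1 and s1 > s2 and much(s2, s3):
        return "road (drivable)"
    if horizontal and much(s1, s2) and near(s2, s3):
        return "tree trunk (non-drivable)"
    if horizontal and s1 > s2 > s3:
        return "bush (non-drivable)"
    return "unknown"
```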
Step 3: camera image processing, specifically comprising the following steps:
(1) Transform the lidar data within the segmented regions into the image coordinate system of the camera;
The transformation follows the coordinate transformation relations of Step 1.
(2) Further identify the preliminarily judged drivable area;
In a field environment there may be puddles, stones and the like in addition to drivable roads and grass. When the lidar scans a puddle, part of the laser light is not reflected back, so the lidar scan data contain holes; some laser light may also be reflected back, and the heights of those points lie at the water surface, so if only lidar data are used for obstacle recognition it is easy to treat a puddle as a drivable area. In fact, current technology cannot yet assess the depth or softness of a puddle, so an unmanned vehicle that drives into one may be endangered. In addition, there may be stones in the driving environment; although they are small, they make the unmanned vehicle bump and affect its speed. With the lidar distance data alone, the relatively low resolution of the lidar cannot accurately describe the shape of a stone, so the camera image is used here to judge such smaller obstacles.
To separate the drivable area from the non-drivable area, Gaussian Mixture Models (GMM) are used to classify the colors. The colors of the drivable area obey a certain statistical law, and the color value of each pixel can be represented by a mixture of several Gaussian distributions. When a new image of the drivable area is obtained, the mixed Gaussian model is updated; if a pixel of the current image matches the model, the point is judged to lie in the drivable area, otherwise it is judged to be a point of another nature.
This specifically comprises the following steps:
(a) Build a Gaussian mixture model;
Assume that the sequence of pixels in the image is {x_1, x_2, ..., x_i} and that the value of the i-th pixel is x_i = [R_i, G_i, B_i]; the probability of this pixel is expressed by a mixture of Gaussian distributions,
where K is the number of Gaussian distributions (generally 3 to 5 are chosen), ω_k,i is the estimated weight, μ_k,i is the mean of the k-th Gaussian distribution, and Σ_k,i is the covariance matrix of the k-th Gaussian distribution, which is generally assumed to follow a simplified form. The probability density of pixel i in the k-th Gaussian distribution function can be computed by the corresponding formula,
where μ_i is the mean of the i-th Gaussian distribution and Σ is the covariance matrix of the Gaussian distribution.
The process of building the Gaussian mixture model of the color features of the drivable area consists of the following steps:
① Convert the color space of the processed image region.
② Initialize the parameters, including the global drivable-area threshold T, the learning rate α, the number of Gaussian models K, and the initial weights ω_k,i.
③ Read the color values of the drivable road and grass images initially designated in the image as the means of the Gaussian mixture model, with the variances set to predetermined empirical values, and build the Gaussian mixture model.
④ Read the image data of the initially designated drivable area and compare the color value of each pixel with the existing Gaussian models to judge whether the matching condition holds,
where x_i is the color value of the pixel and σ_k² is the variance of the k-th Gaussian distribution model.
Match: update the parameters and weight of the k-th Gaussian model; the parameters include the mean, the variance and the learning factor:
μ_i = (1 − ρ) μ_i−1 + ρ x_i    (34)
ρ = α η(x_i | μ_k, σ_k)    (36)
where μ_k and σ_k are respectively the mean vector and variance of the k-th Gaussian distribution model at this time, α is the learning rate, and ρ is the learning factor of model adaptation, whose role is similar to that of α.
No match: if k < K, add a Gaussian model; the new Gaussian distribution takes the color value of the new pixel as its mean, and the variance and weight take empirical values. If k = K, replace the model with the lowest weight by a new Gaussian distribution model, with the mean and variance as above.
The update formula for the weight ω_k is
ω_k,i = (1 − α) ω_k,i−1 + α M_k,i    (37)
where ω_k,i is the current weight, α is the learning rate, ω_k,i−1 is the previous corresponding weight, and M_k,i is the match quantization value: M_k,i = 1 for a match and M_k,i = 0 otherwise.
(b) Region identification;
Compare the absolute difference between the established Gaussian model of the drivable-area color and the image of the current region to be judged; if |x_i − f| ≥ T, where T is a threshold, the point is considered a non-drivable-area point. The value of T is chosen according to actual needs. A corresponding Gaussian mixture model is built for the points in this region, following the same process as for the drivable-area model, and is added to the knowledge base for later analysis.
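A condensed sketch of the per-pixel mixture update and the drivable/non-drivable test follows. The 2.5-standard-deviation match test, the diagonal-covariance simplification, and the use of ρ = α are illustrative stand-ins for the formulas that are not reproduced in the text.

```python
import numpy as np

class ColorGMM:
    """Online Gaussian mixture over RGB values of the drivable area (diagonal covariance)."""

    def __init__(self, K=3, alpha=0.05, init_var=225.0):
        self.K, self.alpha = K, alpha
        self.mu = np.zeros((K, 3))
        self.var = np.full(K, init_var)
        self.w = np.full(K, 1.0 / K)
        self.used = 0

    def update(self, x):
        d = np.linalg.norm(self.mu[:self.used] - x, axis=1)
        match = np.where(d < 2.5 * np.sqrt(self.var[:self.used]))[0]  # assumed match rule
        M = np.zeros(self.K)
        if match.size:
            k = match[0]
            rho = self.alpha                                   # simplification of eq. (36)
            self.mu[k] = (1 - rho) * self.mu[k] + rho * x      # mean update, eq. (34)
            self.var[k] = (1 - rho) * self.var[k] + rho * np.sum((x - self.mu[k]) ** 2)
            M[k] = 1.0
        elif self.used < self.K:
            self.mu[self.used] = x                             # add a new component
            self.used += 1
        else:
            k = np.argmin(self.w)
            self.mu[k] = x                                     # replace lowest-weight model
        self.w = (1 - self.alpha) * self.w + self.alpha * M    # weight update, eq. (37)

    def is_drivable(self, x, T=40.0):
        k = np.argmax(self.w)
        return np.linalg.norm(x - self.mu[k]) < T              # |x - f| >= T -> non-drivable
```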
Step 4: result fusion, which comprises the following steps:
The discrimination results of the lidar and the camera are fused to determine the drivable and non-drivable areas and to construct the environment map.
Intuitive and easy-to-use grid maps are often used to model the ground environment. The grid map was first proposed by Elfes and Moravec and has been developed continuously since then. Existing grid maps generally fall into two categories: two-dimensional and three-dimensional. Each cell of a two-dimensional grid map can only represent two states, occupied or free, but it is simple to use; a three-dimensional grid map can additionally represent other attributes of a cell, such as height, and thus describes the terrain more finely. When a grid map is constructed, the resolution of the terrain representation is closely tied to the cell size: higher resolution requires smaller cells, which increases the number of cells and the computational cost, while fewer cells means larger cells and lower resolution. The construction of a grid map therefore has to balance the driving capability of the unmanned vehicle, the sensor resolution, the terrain conditions of the environment, and the computing power of the system.
Combining the discrimination results of the lidar and the camera, the grid map is divided into drivable and non-drivable areas, which makes path planning for the unmanned vehicle more convenient; a two-dimensional grid map is therefore built. The division follows these rules:
(1) If the lidar classifies a region as non-drivable, the corresponding grid cells are marked non-drivable;
(2) If the lidar classifies a region as drivable, the camera result is consulted:
(a) if the camera also classifies it as drivable, it is marked drivable;
(b) if the camera classifies it as non-drivable, it is marked non-drivable.
(3) If a region is detected by neither the lidar nor the camera, it is marked as unknown.
The two-dimensional grid map is built as follows:
(1) Divide the space around the vehicle into grid cells of a chosen size in the vehicle body coordinate system.
(2) Mark each cell as drivable, non-drivable, or unknown according to the fused lidar and camera results.
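The per-cell fusion rules (1)–(3) above can be sketched as follows; the integer label encoding, the array representation of the grid, and the choice to keep a lidar-drivable cell drivable when the camera has no data are illustrative assumptions.

```python
import numpy as np

UNKNOWN, DRIVABLE, BLOCKED = 0, 1, 2   # assumed label encoding for grid cells

def fuse_grids(lidar_grid, camera_grid):
    """Fuse per-cell lidar and camera labels into a single 2-D grid map.

    Both inputs are arrays of UNKNOWN / DRIVABLE / BLOCKED labels defined over
    the same grid in the vehicle body coordinate system.
    """
    fused = np.full(lidar_grid.shape, UNKNOWN, dtype=np.uint8)
    # Rule (1): lidar says non-drivable -> non-drivable
    fused[lidar_grid == BLOCKED] = BLOCKED
    # Rule (2): lidar says drivable -> defer to the camera result
    lidar_free = lidar_grid == DRIVABLE
    fused[lidar_free & (camera_grid == DRIVABLE)] = DRIVABLE
    fused[lidar_free & (camera_grid == BLOCKED)] = BLOCKED
    fused[lidar_free & (camera_grid == UNKNOWN)] = DRIVABLE  # assumption: keep the lidar result
    # Rule (3): observed by neither sensor -> UNKNOWN (already the default)
    return fused
```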
The obstacle detection method for field environments fusing distance and image information according to the present invention is described below with reference to a specific embodiment:
Start the three-dimensional lidar and scan the surrounding scene to obtain a set of distance data describing the surrounding environment, and at the same time record the attitude information of the inertial navigation device at that moment.
Step 1: establish the mathematical model;
Establish the coordinate systems as described above and, using the pitch and attitude angles output by the inertial navigation device, correct the set of distance data into a coordinate system whose origin lies in the horizontal plane; then transform the data into the vehicle body coordinate system.
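A minimal sketch of this leveling and frame change, assuming roll and pitch angles in radians, the rotation convention R = R_y(pitch)·R_x(roll), and a fixed lidar-to-vehicle rotation R_lv and translation t_lv from the installation calibration; these symbols and conventions are assumptions for illustration, not the patent's exact formulation.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def level_and_to_body(points, roll, pitch, R_lv, t_lv):
    """Correct lidar points with the INS attitude, then move them to the body frame.

    points: (N, 3) array of raw lidar coordinates.
    """
    R_level = rot_y(pitch) @ rot_x(roll)     # compensate pitch and roll
    leveled = points @ R_level.T             # rotate every point into the leveled frame
    return leveled @ R_lv.T + t_lv           # extrinsic lidar-to-vehicle transform
```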
Step 2: obstacle detection on the lidar distance data;
(1) Data preprocessing;
Scan each point of the distance data in turn and apply median filtering with a 3×3 template.
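A minimal sketch of this preprocessing, assuming the distance data are arranged as a 2-D range image indexed by scan line and beam; scipy's median filter is used for brevity.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_range_image(range_image):
    """Suppress isolated abnormal readings with a 3x3 median filter."""
    return median_filter(np.asarray(range_image, dtype=float), size=3)
```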
(2) Region segmentation;
Segment the distance data with the region segmentation method described above, using the parameters k = 20, n_cmin = 150, h_max = 0.1 m, ρ_e,max = 0.04 m, ρ_a,max = 0.011. The segmentation yields a number of regions.
(3) Region identification;
Compute the features of each region, including the mean height, the singular values [σ1 σ2 σ3], and the normal vector γ of the region surface.
Based on these region features, identify the road condition of each region, such as grass, trees, road, or bushes. Grass and road can be regarded as drivable areas, while trees and bushes are obstacles. The parameter is set to h_max = 0.1.
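A minimal sketch of how the region features listed above can be computed, with the surface normal taken as the singular vector associated with the smallest singular value of the centered point set; the simple height-and-flatness test at the end is only an illustration, since the patent's full decision rules are not reproduced here.

```python
import numpy as np

def region_features(points):
    """Compute mean height, singular values, and surface normal for a region.

    points: (N, 3) array of the region's points in the leveled body frame.
    """
    mean_height = points[:, 2].mean()
    centered = points - points.mean(axis=0)
    # Singular values describe the spatial spread of the region; the right
    # singular vector of the smallest value approximates the surface normal.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:                      # orient the normal upward
        normal = -normal
    return mean_height, s, normal

def looks_drivable(points, h_max=0.1):
    """Illustrative test only: low, flat regions are treated as drivable."""
    h, s, normal = region_features(points)
    return h < h_max and normal[2] > 0.9
```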
Start the camera and capture an image. Transform the distance data inside the drivable region into the camera coordinate system and, from the camera image, obtain the corresponding color information of that region.
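A minimal sketch of this color lookup, assuming a pinhole camera with intrinsic matrix Kc and extrinsic rotation R_cl and translation t_cl from the lidar (or body) frame obtained from the calibration; these symbols are assumptions for illustration rather than the patent's exact notation.

```python
import numpy as np

def colors_for_points(points, image, R_cl, t_cl, Kc):
    """Project 3-D points into the image and return the colors at their pixels.

    points: (N, 3) array in the lidar/body frame; image: HxWx3 array;
    R_cl, t_cl: extrinsics into the camera frame; Kc: 3x3 intrinsic matrix.
    """
    cam = points @ R_cl.T + t_cl                      # to camera coordinates
    cam = cam[cam[:, 2] > 0]                          # keep points in front of the camera
    uv = cam @ Kc.T
    uv = uv[:, :2] / uv[:, 2:3]                       # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return image[v[valid], u[valid]]                  # colors of the projected points
```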
Step 3: camera image processing;
Analyze the camera image and build the Gaussian mixture model of the drivable area. The parameters are K = 5, initial weight ω = 0.05, global drivable-area threshold T = 0.7, and learning rate α = 0.2.
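With the illustrative sketch classes introduced earlier (which are not the patent's implementation), the embodiment's parameters would be applied roughly as follows; the random stand-in data and the normalization of colors to [0, 1] are assumptions made only so the snippet runs on its own.

```python
import numpy as np

rng = np.random.default_rng(0)
drivable_pixels = rng.normal(0.35, 0.05, size=2000)    # stand-in for normalized road/grass colors
candidate_pixels = rng.uniform(0.0, 1.0, size=500)     # stand-in for pixels to be classified

gmm = DrivableColorGMM(K=5, alpha=0.2, init_weight=0.05)
for x in drivable_pixels:
    gmm.update(x)

labels = [classify_pixel(gmm, x, T=0.7) for x in candidate_pixels]
```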
Step 4: result fusion;
Fuse the recognition results obtained from the distance data and from the camera image. Based on the Gaussian mixture model, further analyze the image, identify obstacles inside the drivable area, and add new obstacles to the knowledge base.
Combining the drivable, non-drivable, and unknown areas identified by the fusion of the lidar data and the camera, assign the corresponding attribute to each grid cell and build the grid map.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010195586 CN101975951B (en) | 2010-06-09 | 2010-06-09 | Field environment barrier detection method fusing distance and image information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101975951A CN101975951A (en) | 2011-02-16 |
CN101975951B true CN101975951B (en) | 2013-03-20 |
Family
ID=43575853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201010195586 Expired - Fee Related CN101975951B (en) | 2010-06-09 | 2010-06-09 | Field environment barrier detection method fusing distance and image information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101975951B (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20130320 Termination date: 20130609 |