
CN109190483B - A Vision-Based Lane Line Detection Method - Google Patents

A Vision-Based Lane Line Detection Method

Info

Publication number
CN109190483B
CN109190483B CN201810886340.7A CN201810886340A
Authority
CN
China
Prior art keywords
edge point
line
rising edge
lane
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810886340.7A
Other languages
Chinese (zh)
Other versions
CN109190483A (en)
Inventor
肖进胜
周景龙
雷俊锋
眭海刚
李亮
舒成
赵玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201810886340.7A priority Critical patent/CN109190483B/en
Publication of CN109190483A publication Critical patent/CN109190483A/en
Application granted granted Critical
Publication of CN109190483B publication Critical patent/CN109190483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vision-based lane line detection method. An image is collected by a camera and converted into a grayscale image; the center point of the grayscale image is set as a reference point and used to delineate a region of interest. Rising edge points and falling edge points are extracted in the region of interest by a row-scan gradient method; the rising and falling edge points are transformed by inverse perspective mapping into inverse-perspective rising and falling edge points, which are then filtered according to the lane width feature to obtain screened rising and falling edge points. The screened rising and falling edge points undergo a custom parameter space transformation, the numbers of screened rising and falling edge points whose line angle and lateral offset are both equal are counted, candidate lane lines are obtained, and a lane curve is fitted to construct the lane line position of the current frame road image. Finally, the lane line position of the current frame road image is used to associate the lane line position of the next frame road image.

Description

A Vision-Based Lane Line Detection Method

Technical Field

The present invention relates to the technical field of signal processing, and in particular to a vision-based lane line detection method.

Background Art

A primary cause of frequent traffic accidents is that drivers unintentionally drift out of their lane while driving. As the number of vehicles grows year by year, the road environment becomes increasingly complex, and autonomous or assisted driving has become one of the research hotspots for making travel more efficient. Lane line recognition is the most fundamental part of such systems: it is a key unit for keeping the vehicle aware of the road scene, and it provides accurate position information for lane departure warning, thereby easing traffic pressure and reducing accidents to a certain extent. Implementing lane line recognition and departure warning on an embedded system offers low cost, low power consumption, small size and easy integration, and therefore has high practical value and broad application prospects.

With the development of computer technology, systems that warn drivers of danger in time have great potential to save lives. Systems designed to assist the driver while driving are called advanced driver assistance systems and include adaptive cruise control, collision avoidance, blind spot detection and traffic sign detection, among many other functions. Lane departure warning also falls into this category. Lane detection means locating the lane markings on the road and presenting this position information to the intelligent system. In an intelligent transportation system, intelligent vehicles are integrated with intelligent infrastructure to provide a safer environment and better traffic conditions.

At present, lane line detection methods fall into three main categories: model-based methods, feature-based methods, and marking-based methods. Feature-based methods use basic lane features such as color and texture to locate the lane lines in the road image. Model-based methods typically analyze lanes with straight-line or curve templates; once the model is defined, detection is relatively simple. Deep-learning-based methods label a large sample set in advance and train a convolutional neural network on it to obtain network parameters for lane detection and classification. Relative to deep learning, the road information obtained by feature and model methods remains necessary: model and feature methods can serve as a reference, and deep learning can then be used to identify lane lines more accurately. Where hardware resources are limited, model and feature methods also have great potential.

Existing lane line detection algorithms face the following problems. Real-time detection must be achieved. Road conditions are complex: occlusion, missing lane markings, ground markings, tunnels and similar disturbances lower the detection rate. Since the lane line position changes little across consecutive frames except when changing lanes, the lanes must be detected stably over multiple frames so that interfering lines and the true lane lines are not constantly swapped. The lane line position must be detected accurately before a lane change to give correct guidance to the departure warning. Given the above, how to detect lane lines efficiently, in real time and stably is a problem to be solved in this field.

Summary of the Invention

On the embedded platform adopted by this method, an efficient, real-time lane line detection algorithm with strong adaptability is designed. The purpose is to solve the problem of low lane line detection efficiency in the prior art.

To achieve the above purpose, the present invention proposes a vision-based lane line detection method comprising the following steps:

Step 1: collect an image through a camera, convert the collected image into a grayscale image, set the center point of the grayscale image as a reference point, and delineate a region of interest according to the reference point;

Step 2: extract rising edge points and falling edge points in the region of interest by the row-scan gradient method; transform the rising edge points and the falling edge points by inverse perspective mapping to obtain inverse-perspective rising edge points and inverse-perspective falling edge points; filter the inverse-perspective rising and falling edge points with the lane width feature to obtain screened rising edge points and screened falling edge points;

Step 3: apply a custom parameter space transformation to the screened rising edge points and the screened falling edge points, count the numbers of screened rising and falling edge points whose line angle and lateral offset are both equal, obtain candidate lane lines, and fit the lane curves;

Step 4: use the lane line position of the current frame road image to associate the lane line position of the next frame road image.

Preferably, the collected image described in step 1 has width u and height v;

the center point of the grayscale image described in step 1 is (u/2, v/2), and this center point is set as the reference point of the grayscale image;

delineating the region of interest according to the reference point as described in step 1 means:

a rectangular block is delineated around the reference point (u/2, v/2); the value range of the rectangular block width and the value range of the rectangular block height are expressed in terms of w and h [formulas given only as images in the original];

where the value range of w and the value range of h are [formulas given only as images in the original].

Preferably, extracting edge points in the region of interest by the row-scan gradient method described in step 2 means:

the edge strength E(i, j) of each pixel on a horizontal scan line is computed as [formula given only as an image in the original],

where I(i+k, j) denotes the pixel value at row i+k and column j of the region of interest, i indexes the image rows of the region of interest and j indexes the image columns of the region of interest [ranges given as images in the original], and L denotes the filter length of each row;

the pixel edge strength is compared with a first threshold and a second threshold, and the pixels of the region of interest are classified according to the result: when E(i, j) > Th1, I(i, j) is a rising edge point; when E(i, j) < Th2, I(i, j) is a falling edge point;

the rising edge points and falling edge points in the region of interest are converted by inverse perspective transformation into edge feature points on the actual road in the world coordinate system, i.e. the inverse-perspective rising edge points and inverse-perspective falling edge points described in step 2;

the inverse-perspective rising edge points and inverse-perspective falling edge points are filtered with the lane width feature to remove interference points: for inverse-perspective rising and falling edge points on the same image row of the region of interest, the Euclidean distance is computed; if |dis - D| <= dh, where dis is the Euclidean distance, D is the distance threshold and dh is the distance error, then the inverse-perspective rising edge point is a screened rising edge point described in step 2:

(x_m, y_m)

where m ∈ [1, M] and M is the number of screened rising edge points;

and the inverse-perspective falling edge point is a screened falling edge point described in step 2:

(x_n, y_n)

where n ∈ [1, N] and N is the number of screened falling edge points;

Preferably, the custom parameter space transformation described in step 3 is:

the custom parameter space of the screened rising edge points is:

x_m = p_k,m + y_m * tan θ_k,m

where (x_m, y_m) are the coordinates of a screened rising edge point described in step 2, θ_k,m denotes the angle of the rising-edge point line, θ_k,m ∈ [α, β], k ∈ [1, K], K denotes the number of angles of the rising-edge point line, and p_k,m denotes the lateral offset of the rising-edge point line; θ_k,m is traversed to compute the corresponding p_k,m;

the custom parameter space of the screened falling edge points is:

x_n = p_l,n + y_n * tan θ_l,n

where (x_n, y_n) are the coordinates of a screened falling edge point described in step 2, θ_l,n denotes the angle of the falling-edge point line, θ_l,n ∈ [α, β], l ∈ [1, L], L denotes the number of angles of the falling-edge point line, and p_l,n denotes the lateral offset of the falling-edge point line; θ_l,n is traversed to compute the corresponding p_l,n;

Counting, as described in step 3, the numbers of screened rising edge points and screened falling edge points whose line angle and lateral offset are both equal:

in the custom parameter space of the screened rising edge points, the rising-edge line angle and lateral offset of any two different screened rising edge points are compared; if both are equal, then:

H_r(p, θ) = H_r(p, θ) + 1, r ∈ [1, N_r]

where H_r(p, θ) is the number of screened rising edge points in the r-th group whose rising-edge line angle and lateral offset are both equal;

in the custom parameter space of the screened falling edge points, the falling-edge line angle and lateral offset of any two different screened falling edge points are compared; if both are equal, then:

H_d(p, θ) = H_d(p, θ) + 1, d ∈ [1, N_d]

where H_d(p, θ) is the number of screened falling edge points in the d-th group whose falling-edge line angle and lateral offset are both equal;

among the N_r groups of screened rising edge points whose rising-edge line angle and lateral offset are both equal, the groups are sorted by H_r(p, θ) from high to low and one of the top G groups is selected, (p_g, θ_g), g ∈ [1, G]; different (p_g, θ_g), through the angle of the rising-edge point line and the lateral offset of the rising-edge point line, represent different straight lines;

among the N_d groups of screened falling edge points whose falling-edge line angle and lateral offset are both equal, the groups are sorted by H_d(p, θ) from high to low and one of the top G groups is selected [notation given only as an image in the original]; the different selected groups, through the angle of the falling-edge point line and the lateral offset of the falling-edge point line, represent different straight lines;

Obtaining the candidate lane lines described in step 3:

for the rising edge points, the straight line determined by the parameter values (p_g, θ_g), g ∈ [1, G] is:

x_i = p_g + y_i * tan θ_g

where y_i spans the image rows of the region of interest [range given only as an image in the original], x_i is computed from the straight-line formula, and (x_i, y_i) are the coordinates of points on the line; taking (x_i, y_i) as the reference, the already obtained screened edge points are further filtered by an outward expansion, and only the rising edge points within the expanded range are retained [retained-point notation and expansion conditions given only as images in the original], where δ and φ are the set thresholds;

the falling edge points are processed in the same way as the rising edge points, and only the falling edge points within the expanded range are retained;

fitting the lane lines described in step 3:

the rising edge points within the expanded range are fitted with a polynomial to obtain the rising fitted lane curve, with parameter values [given only as an image in the original];

the falling edge points within the expanded range are fitted with a polynomial to obtain the falling fitted lane curve, with parameter values [given only as an image in the original];

the polynomial fitting may use the least squares method or the Bezier curve method;

the rising fitted lane curve and the falling fitted lane curve constitute the lane line position of the current frame road image;

Preferably, the association described in step 4 is:

the rising fitted lane curve in the lane line position of the next frame road image has parameter values [given only as an image in the original];

the falling fitted lane curve in the lane line position of the next frame road image has parameter values [given only as an image in the original];

if [conditions given only as images in the original] hold, where α, β are set thresholds and γ, λ are set thresholds, the lane line position of the next frame road image is valid; otherwise it is invalid;

likewise, if [a second set of conditions, also given only as images in the original] holds, where α, β are set thresholds and γ, λ are set thresholds, the lane line position of the next frame road image is valid; otherwise it is invalid.

The method of the present invention can detect lane lines in each scene accurately and in real time, and has good anti-interference performance. It combines lane structure feature information and seeks breakthroughs in real-time performance, proposing a detection algorithm based on a custom parameter space transformation and applying it in an embedded environment. The vanishing point is used to delineate the region of interest automatically, avoiding complex computation over the whole image and removing redundant information to improve detection efficiency. Edge gradient values are extracted by row scanning, and the extracted edge points are used for a fast inverse perspective transformation; the edge points in the inverse-perspective road map are then fused with the edge points scanned from the original image, interference points are eliminated, and only valid edge feature points are retained, which guarantees an efficient parameter space transformation. After the valid edge information is obtained, a custom parameter space transformation suited to edge points is used to obtain candidate lane lines, the lane lines are screened by feature information, and the obtained lane lines are used to achieve stable detection in subsequent frame images.

Brief Description of the Drawings

Fig. 1: schematic flow chart of the method of the present invention;

Fig. 2: schematic diagram of the setting of the region of interest and the scan lines in an embodiment of the lane line detection algorithm of the present invention;

Fig. 3: edge point extraction result and the bird's-eye view obtained by inverse perspective transformation in an embodiment of the lane line detection algorithm of the present invention;

Fig. 4: result after width feature filtering in an embodiment of the lane line detection algorithm of the present invention;

Fig. 5: schematic diagram of the parameter space in an embodiment of the lane line detection algorithm of the present invention;

Fig. 6: result of the integrated inner boundary of the lane lines in an embodiment of the lane line detection algorithm of the present invention;

Fig. 7: lane line detection result in an embodiment of the lane line detection algorithm of the present invention.

Detailed Description of the Embodiments

In order to facilitate understanding and implementation of the present invention by those of ordinary skill in the art, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here are only used to illustrate and explain the present invention, and are not intended to limit it.

An embodiment of the present invention is described below with reference to Figs. 1 to 7, and specifically comprises the following steps:

Step 1: collect an image through a camera, convert the collected image into a grayscale image, set the center point of the grayscale image as a reference point, and delineate a region of interest according to the reference point;

the collected image described in step 1 has width u = 1280 and height v = 720;

the center point of the grayscale image described in step 1 is (u/2, v/2) = (640, 360), and this center point is set as the reference point of the grayscale image;

delineating the region of interest according to the reference point as described in step 1 means:

a rectangular block is delineated around the reference point (u/2, v/2); the value range of the rectangular block width and the value range of the rectangular block height are expressed in terms of w and h [formulas given only as images in the original];

where the value range of w and the value range of h, together with the concrete values used in this embodiment, are [given only as images in the original].
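
For illustration only, the region-of-interest delineation on a 1280 x 720 frame can be sketched as follows. This is a minimal sketch under stated assumptions, not the patent's implementation: the admissible ranges of w and h are given only as formula images, so the concrete w = 640, h = 200 and the centring of the rectangle on the reference point are choices made for the example, and the function name delineate_roi is hypothetical.

```python
import numpy as np

def delineate_roi(gray, w=640, h=200):
    """Cut a rectangular region of interest centred on the image centre point.

    gray : 2-D numpy array (grayscale image), e.g. 720 x 1280.
    w, h : width and height of the rectangle in pixels (example values; the
           patent specifies their admissible ranges via formulas that are
           only available as images in the source).
    """
    v, u = gray.shape                      # image height and width
    cx, cy = u // 2, v // 2                # reference point (u/2, v/2)
    x0, x1 = cx - w // 2, cx + w // 2      # columns of the ROI
    y0, y1 = cy - h // 2, cy + h // 2      # rows of the ROI
    return gray[y0:y1, x0:x1], (x0, y0)

# Example: a synthetic 720x1280 grayscale frame
frame = np.zeros((720, 1280), dtype=np.uint8)
roi, origin = delineate_roi(frame)
print(roi.shape, origin)                   # (200, 640) (320, 260)
```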

Step 2: extract rising edge points and falling edge points in the region of interest by the row-scan gradient method; transform the rising edge points and the falling edge points by inverse perspective mapping to obtain inverse-perspective rising edge points and inverse-perspective falling edge points; filter the inverse-perspective rising and falling edge points with the lane width feature to obtain screened rising edge points and screened falling edge points;

extracting edge points in the region of interest by the row-scan gradient method described in step 2 means:

the edge strength E(i, j) of each pixel on a horizontal scan line is computed as [formula given only as an image in the original],

where I(i+k, j) denotes the pixel value at row i+k and column j of the region of interest, i indexes the image rows of the region of interest and j indexes the image columns of the region of interest [ranges given as images in the original], and L denotes the filter length of each row, L = 8;

the pixel edge strength is compared with a first threshold and a second threshold, and the pixels of the region of interest are classified according to the result: when E(i, j) > Th1, I(i, j) is a rising edge point, Th1 = 16; when E(i, j) < Th2, I(i, j) is a falling edge point, Th2 = -16;
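
A minimal sketch of the row-scan edge classification follows, with L = 8, Th1 = 16 and Th2 = -16 taken from this embodiment. The edge-strength formula itself is available only as an image in the source, so the sketch assumes E(i, j) is the sum of the L pixels to the right of (i, j) on the scan row minus the sum of the L pixels to the left; the function edge_points is a hypothetical helper, not code from the patent.

```python
import numpy as np

def edge_points(roi, L=8, th1=16, th2=-16):
    """Classify rising / falling edge points along horizontal scan rows.

    Assumption: E(i, j) = (sum of the L pixels right of (i, j))
                        - (sum of the L pixels left of (i, j)),
    so E rises on dark-to-bright transitions and falls on bright-to-dark
    ones; the patent's own formula is only available as an image.
    """
    roi = roi.astype(np.int32)
    rows, cols = roi.shape
    rising, falling = [], []
    for i in range(rows):
        for j in range(L, cols - L):
            e = roi[i, j + 1 : j + 1 + L].sum() - roi[i, j - L : j].sum()
            if e > th1:
                rising.append((i, j))
            elif e < th2:
                falling.append((i, j))
    return rising, falling

# Example: one bright 20-pixel-wide stripe on a dark scan row
roi = np.zeros((1, 60), dtype=np.uint8)
roi[0, 20:40] = 200
r, f = edge_points(roi)
print(len(r) > 0, len(f) > 0)   # True True
```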

the rising edge points and falling edge points in the region of interest are converted by inverse perspective transformation into edge feature points on the actual road in the world coordinate system, i.e. the inverse-perspective rising edge points and inverse-perspective falling edge points described in step 2;

the inverse-perspective rising edge points and inverse-perspective falling edge points are filtered with the lane width feature to remove interference points: for inverse-perspective rising and falling edge points on the same image row of the region of interest, the Euclidean distance is computed; if |dis - D| <= dh, where dis is the Euclidean distance, D is the distance threshold of 14 pixels, and dh is the distance error of 4 pixels, then the inverse-perspective rising edge point is a screened rising edge point described in step 2:

(x_m, y_m)

where m ∈ [1, M] and M is the number of screened rising edge points;

and the inverse-perspective falling edge point is a screened falling edge point described in step 2:

(x_n, y_n)

where n ∈ [1, N] and N is the number of screened falling edge points;
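
The lane-width filtering can be illustrated as below, with the distance threshold D = 14 pixels and the distance error dh = 4 pixels from this embodiment. The sketch assumes the rising and falling edge points are already in bird's-eye (inverse-perspective) coordinates and are paired per image row as the text describes; width_filter is a hypothetical helper name.

```python
import numpy as np

def width_filter(rising_ipm, falling_ipm, D=14.0, dh=4.0):
    """Keep rising/falling edge points whose per-row pairing has a
    Euclidean distance dis with |dis - D| <= dh (lane-width check).

    rising_ipm, falling_ipm : lists of (x, y) points in bird's-eye view,
    where y is the image row after inverse perspective mapping.
    """
    kept_rising, kept_falling = [], []
    falling_by_row = {}
    for x, y in falling_ipm:
        falling_by_row.setdefault(y, []).append((x, y))
    for xr, yr in rising_ipm:
        for xf, yf in falling_by_row.get(yr, []):
            dis = np.hypot(xr - xf, yr - yf)
            if abs(dis - D) <= dh:
                kept_rising.append((xr, yr))
                kept_falling.append((xf, yf))
    return kept_rising, kept_falling

# Example: a pair 14 px apart passes, a pair 40 px apart does not
r = [(100.0, 50.0), (300.0, 50.0)]
f = [(114.0, 50.0), (340.0, 50.0)]
print(width_filter(r, f))   # ([(100.0, 50.0)], [(114.0, 50.0)])
```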

Step 3: apply a custom parameter space transformation to the screened rising edge points and the screened falling edge points, count the numbers of screened rising and falling edge points whose line angle and lateral offset are both equal, obtain candidate lane lines, and fit the lane curves;

the custom parameter space transformation described in step 3 is:

the custom parameter space of the screened rising edge points is:

x_m = p_k,m + y_m * tan θ_k,m

where (x_m, y_m) are the coordinates of a screened rising edge point described in step 2, θ_k,m denotes the angle of the rising-edge point line, θ_k,m ∈ [α, β], k ∈ [1, K], α = 1, β = 75, K denotes the number of angles of the rising-edge point line, and p_k,m denotes the lateral offset of the rising-edge point line; θ_k,m is traversed to compute the corresponding p_k,m;

the custom parameter space of the screened falling edge points is:

x_n = p_l,n + y_n * tan θ_l,n

where (x_n, y_n) are the coordinates of a screened falling edge point described in step 2, θ_l,n denotes the angle of the falling-edge point line, θ_l,n ∈ [α, β], l ∈ [1, L], α = 1, β = 75, L denotes the number of angles of the falling-edge point line, and p_l,n denotes the lateral offset of the falling-edge point line; θ_l,n is traversed to compute the corresponding p_l,n;

Counting, as described in step 3, the numbers of screened rising edge points and screened falling edge points whose line angle and lateral offset are both equal:

in the custom parameter space of the screened rising edge points, the rising-edge line angle and lateral offset of any two different screened rising edge points are compared; if both are equal, then:

H_r(p, θ) = H_r(p, θ) + 1, r ∈ [1, N_r]

where H_r(p, θ) is the number of screened rising edge points in the r-th group whose rising-edge line angle and lateral offset are both equal;

in the custom parameter space of the screened falling edge points, the falling-edge line angle and lateral offset of any two different screened falling edge points are compared; if both are equal, then:

H_d(p, θ) = H_d(p, θ) + 1, d ∈ [1, N_d]

where H_d(p, θ) is the number of screened falling edge points in the d-th group whose falling-edge line angle and lateral offset are both equal;

among the N_r groups of screened rising edge points whose rising-edge line angle and lateral offset are both equal, the groups are sorted by H_r(p, θ) from high to low and one of the top G groups is selected, (p_g, θ_g), g ∈ [1, G], G = 10; different (p_g, θ_g), through the angle of the rising-edge point line and the lateral offset of the rising-edge point line, represent different straight lines;

among the N_d groups of screened falling edge points whose falling-edge line angle and lateral offset are both equal, the groups are sorted by H_d(p, θ) from high to low and one of the top G groups is selected [notation given only as an image in the original]; the different selected groups, through the angle of the falling-edge point line and the lateral offset of the falling-edge point line, represent different straight lines;
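
The counting step is essentially a vote in the (p, θ) parameter space defined by x = p + y·tan θ. The sketch below accumulates one vote per screened edge point and angle, using 1-degree steps over [1°, 75°] and integer quantisation of p (the quantisation is an assumption; the patent does not state it), then returns the top G = 10 cells; vote_lines is a hypothetical helper, and the falling-edge points would be processed identically.

```python
import numpy as np
from collections import Counter

def vote_lines(points, angles_deg=range(1, 76), top_g=10):
    """Accumulate votes in the (p, theta) parameter space x = p + y*tan(theta).

    points : list of (x, y) screened edge points (bird's-eye coordinates).
    Returns the top_g (p, theta_deg) cells by vote count.
    """
    acc = Counter()
    for x, y in points:
        for theta in angles_deg:
            p = int(round(x - y * np.tan(np.radians(theta))))  # lateral offset
            acc[(p, theta)] += 1
    return acc.most_common(top_g)

# Example: points on the line x = 5 + y*tan(30 deg) all vote for the same cell
t = np.tan(np.radians(30))
pts = [(5 + y * t, y) for y in range(0, 50, 5)]
print(vote_lines(pts, top_g=3)[0])   # ((5, 30), 10) -- the true line wins
```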

Obtaining the candidate lane lines described in step 3:

for the rising edge points, the straight line determined by the parameter values (p_g, θ_g), g ∈ [1, G] is:

x_i = p_g + y_i * tan θ_g

where y_i spans the image rows of the region of interest [range given only as an image in the original], x_i is computed from the straight-line formula, and (x_i, y_i) are the coordinates of points on the line; taking (x_i, y_i) as the reference, the already obtained screened edge points are further filtered by an outward expansion, and only the rising edge points within the expanded range are retained [retained-point notation and expansion conditions given only as images in the original], where δ and φ are the set thresholds, δ = 3 and the value of φ is given only as an image in the original;

the falling edge points are processed in the same way as the rising edge points, and only the falling edge points within the expanded range are retained;

Fitting the lane lines described in step 3:

the rising edge points within the expanded range are fitted with a polynomial to obtain the rising fitted lane curve, with parameter values [given only as an image in the original];

the falling edge points within the expanded range are fitted with a polynomial to obtain the falling fitted lane curve, with parameter values [given only as an image in the original];

the polynomial fitting may use the least squares method or the Bezier curve method;

the rising fitted lane curve and the falling fitted lane curve constitute the lane line position of the current frame road image;

Step 4: use the lane line position of the current frame road image to associate the lane line position of the next frame road image;

the association described in step 4 is:

the rising fitted lane curve in the lane line position of the next frame road image has parameter values [given only as an image in the original];

the falling fitted lane curve in the lane line position of the next frame road image has parameter values [given only as an image in the original];

if [conditions given only as images in the original] hold, where α, β are set thresholds, α = 20, β = 25, and γ, λ are set thresholds, γ = 6, λ = 7, the lane line position of the next frame road image is valid; otherwise it is invalid;

likewise, if [a second set of conditions, also given only as images in the original] holds, where α, β are set thresholds, α = 20, β = 25, and γ, λ are set thresholds, γ = 6, λ = 7, the lane line position of the next frame road image is valid; otherwise it is invalid.
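
The inter-frame association can be read as a per-parameter consistency check between the current frame's fitted curve and the next frame's. Because the patent's inequalities are available only as formula images, the sketch below simply compares each curve parameter against one threshold, filling the thresholds with the embodiment's α = 20, β = 25, γ = 6, λ = 7 purely as an illustrative assignment; lanes_consistent is a hypothetical helper.

```python
def lanes_consistent(curr, nxt, thresholds=(20.0, 25.0, 6.0, 7.0)):
    """Return True when each fitted-curve parameter of the next frame
    differs from the current frame by no more than its threshold.

    curr, nxt  : equal-length tuples of curve parameters for one boundary.
    thresholds : per-parameter limits; here filled with the embodiment's
                 alpha=20, beta=25, gamma=6, lambda=7 purely as an
                 illustrative assignment -- the patent's actual
                 inequalities are only available as formula images.
    """
    return all(abs(n - c) <= t for c, n, t in zip(curr, nxt, thresholds))

# Example: a small drift between consecutive frames keeps the detection valid
curr = (320.0, 30.0, 0.010, 0.5)
nxt = (325.0, 28.0, 0.012, 0.4)
print(lanes_consistent(curr, nxt))   # True
```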

It should be understood that the parts not described in detail in this specification belong to the prior art.

It should be understood that the above description of the preferred embodiments is relatively detailed and should not therefore be regarded as limiting the scope of patent protection of the present invention. Under the teaching of the present invention, those of ordinary skill in the art may make substitutions or modifications without departing from the scope protected by the claims, and such substitutions or modifications all fall within the protection scope of the present invention; the claimed protection scope of the present invention shall be subject to the appended claims.

Claims (4)

1. A vision-based lane line detection method, characterized in that it comprises the following steps:

Step 1: collect an image through a camera, convert the collected image into a grayscale image, set the center point of the grayscale image as a reference point, and delineate a region of interest according to the reference point;

Step 2: extract rising edge points and falling edge points in the region of interest by the row-scan gradient method; transform the rising edge points and the falling edge points by inverse perspective mapping to obtain inverse-perspective rising edge points and inverse-perspective falling edge points; filter the inverse-perspective rising and falling edge points with the lane width feature to obtain screened rising edge points and screened falling edge points;

Step 3: apply a custom parameter space transformation to the screened rising edge points and the screened falling edge points, count the numbers of screened rising and falling edge points whose line angle and lateral offset are both equal, obtain candidate lane lines, and fit the lane curves;

the custom parameter space transformation described in step 3 is:

the custom parameter space of the screened rising edge points is:

x_m = p_k,m + y_m * tan θ_k,m

where (x_m, y_m) are the coordinates of a screened rising edge point described in step 2, θ_k,m denotes the angle of the rising-edge point line, θ_k,m ∈ [α, β], k ∈ [1, K], K denotes the number of angles of the rising-edge point line, and p_k,m denotes the lateral offset of the rising-edge point line; θ_k,m is traversed to compute the corresponding p_k,m;

the custom parameter space of the screened falling edge points is:

x_n = p_l,n + y_n * tan θ_l,n

where (x_n, y_n) are the coordinates of a screened falling edge point described in step 2, θ_l,n denotes the angle of the falling-edge point line, θ_l,n ∈ [α, β], l ∈ [1, L], L denotes the number of angles of the falling-edge point line, and p_l,n denotes the lateral offset of the falling-edge point line; θ_l,n is traversed to compute the corresponding p_l,n;

counting, as described in step 3, the numbers of screened rising edge points and screened falling edge points whose line angle and lateral offset are both equal:

in the custom parameter space of the screened rising edge points, the rising-edge line angle and lateral offset of any two different screened rising edge points are compared; if both are equal, then:

H_r(p, θ) = H_r(p, θ) + 1, r ∈ [1, N_r]

where H_r(p, θ) is the number of screened rising edge points in the r-th group whose rising-edge line angle and lateral offset are both equal;

in the custom parameter space of the screened falling edge points, the falling-edge line angle and lateral offset of any two different screened falling edge points are compared; if both are equal, then:

H_d(p, θ) = H_d(p, θ) + 1, d ∈ [1, N_d]

where H_d(p, θ) is the number of screened falling edge points in the d-th group whose falling-edge line angle and lateral offset are both equal;

among the N_r groups of screened rising edge points whose rising-edge line angle and lateral offset are both equal, the groups are sorted by H_r(p, θ) from high to low and one of the top G groups is selected, (p_g, θ_g), g ∈ [1, G]; different (p_g, θ_g), through the angle of the rising-edge point line and the lateral offset of the rising-edge point line, represent different straight lines;

among the N_d groups of screened falling edge points whose falling-edge line angle and lateral offset are both equal, the groups are sorted by H_d(p, θ) from high to low and one of the top G groups is selected [notation given only as an image in the original]; the different selected groups, through the angle of the falling-edge point line and the lateral offset of the falling-edge point line, represent different straight lines;

obtaining the candidate lane lines described in step 3:

for the rising edge points, the straight line determined by the parameter values (p_g, θ_g), g ∈ [1, G] is:

x_i = p_g + y_i * tan θ_g

where y_i spans the image rows of the region of interest [range given only as an image in the original], x_i is computed from the straight-line formula, and (x_i, y_i) are the coordinates of points on the line; taking (x_i, y_i) as the reference, the already obtained screened edge points are further filtered by an outward expansion, and only the rising edge points within the expanded range are retained [retained-point notation and expansion conditions given only as images in the original], where δ and φ are the set thresholds;

the falling edge points are processed in the same way as the rising edge points, and only the falling edge points within the expanded range are retained;

fitting the lane lines described in step 3:

the rising edge points within the expanded range are fitted with a polynomial to obtain the rising fitted lane curve, with parameter values [given only as an image in the original];

the falling edge points within the expanded range are fitted with a polynomial to obtain the falling fitted lane curve, with parameter values [given only as an image in the original];

the polynomial fitting may use the least squares method or the Bezier curve method;

the rising fitted lane curve and the falling fitted lane curve constitute the lane line position of the current frame road image;

Step 4: use the lane line position of the current frame road image to associate the lane line position of the next frame road image.
2. The vision-based lane line detection method according to claim 1, characterized in that: the collected image described in step 1 has width u and height v;

the center point of the grayscale image described in step 1 is (u/2, v/2), and this center point is set as the reference point of the grayscale image;

delineating the region of interest according to the reference point as described in step 1 means:

a rectangular block is delineated around the reference point (u/2, v/2); the value range of the rectangular block width and the value range of the rectangular block height are expressed in terms of w and h [formulas given only as images in the original];

where the value range of w and the value range of h are [formulas given only as images in the original].
3. The vision-based lane line detection method according to claim 1, characterized in that: extracting edge points in the region of interest by the row-scan gradient method described in step 2 means:

the edge strength E(i, j) of each pixel on a horizontal scan line is computed as [formula given only as an image in the original],

where I(i+k, j) denotes the pixel value at row i+k and column j of the region of interest, i indexes the image rows of the region of interest, j indexes the image columns of the region of interest, L denotes the filter length of each row, and k ranges over [interval given only as an image in the original];

the pixel edge strength is compared with a first threshold and a second threshold, and the pixels of the region of interest are classified according to the result: when E(i, j) > Th1, I(i, j) is a rising edge point; when E(i, j) < Th2, I(i, j) is a falling edge point;

the rising edge points and falling edge points in the region of interest are converted by inverse perspective transformation into edge feature points on the actual road in the world coordinate system, i.e. the inverse-perspective rising edge points and inverse-perspective falling edge points described in step 2;

the inverse-perspective rising edge points and inverse-perspective falling edge points are filtered with the lane width feature to remove interference points: for inverse-perspective rising and falling edge points on the same image row of the region of interest, the Euclidean distance is computed; if |dis - D| <= dh, where dis is the Euclidean distance, D is the distance threshold and dh is the distance error, then the inverse-perspective rising edge point is a screened rising edge point described in step 2:

(x_m, y_m)

where m ∈ [1, M] and M is the number of screened rising edge points;

and the inverse-perspective falling edge point is a screened falling edge point described in step 2:

(x_n, y_n)

where n ∈ [1, N] and N is the number of screened falling edge points.
4. The vision-based lane line detection method according to claim 1, characterized in that: the association described in step 4 is:

the rising fitted lane curve in the lane line position of the next frame road image has parameter values [given only as an image in the original];

the falling fitted lane curve in the lane line position of the next frame road image has parameter values [given only as an image in the original];

if [conditions given only as images in the original] hold, where α, β are set thresholds and γ, λ are set thresholds, the lane line position of the next frame road image is valid; otherwise it is invalid;

likewise, if [a second set of conditions, also given only as images in the original] holds, where α, β are set thresholds and γ, λ are set thresholds, the lane line position of the next frame road image is valid; otherwise it is invalid.
CN201810886340.7A 2018-08-06 2018-08-06 A Vision-Based Lane Line Detection Method Active CN109190483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810886340.7A CN109190483B (en) 2018-08-06 2018-08-06 A Vision-Based Lane Line Detection Method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810886340.7A CN109190483B (en) 2018-08-06 2018-08-06 A Vision-Based Lane Line Detection Method

Publications (2)

Publication Number Publication Date
CN109190483A CN109190483A (en) 2019-01-11
CN109190483B true CN109190483B (en) 2021-04-02

Family

ID=64920295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810886340.7A Active CN109190483B (en) 2018-08-06 2018-08-06 A Vision-Based Lane Line Detection Method

Country Status (1)

Country Link
CN (1) CN109190483B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110077399B (en) * 2019-04-09 2020-11-06 魔视智能科技(上海)有限公司 Vehicle anti-collision method based on road marking and wheel detection fusion
CN110569704B (en) * 2019-05-11 2022-11-22 北京工业大学 A Multi-strategy Adaptive Lane Line Detection Method Based on Stereo Vision
CN110472578B (en) * 2019-08-15 2020-09-18 宁波中车时代传感技术有限公司 Lane line keeping method based on lane curvature
CN110675637A (en) * 2019-10-15 2020-01-10 上海眼控科技股份有限公司 Vehicle illegal video processing method and device, computer equipment and storage medium
CN111563412B (en) * 2020-03-31 2022-05-17 武汉大学 Rapid lane line detection method based on parameter space voting and Bessel fitting
EP4224361A4 (en) * 2020-10-22 2023-08-16 Huawei Technologies Co., Ltd. Lane line detection method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136341A (en) * 2013-02-04 2013-06-05 北京航空航天大学 Lane line reconstruction device based on Bezier curve
CN104657727A (en) * 2015-03-18 2015-05-27 厦门麦克玛视电子信息技术有限公司 Lane line detection method
DE102014109063A1 (en) * 2014-06-27 2015-12-31 Connaught Electronics Ltd. Method for detecting an object having a predetermined geometric shape in a surrounding area of a motor vehicle, camera system and motor vehicle
CN107031623A (en) * 2017-03-16 2017-08-11 浙江零跑科技有限公司 A kind of road method for early warning based on vehicle-mounted blind area camera

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136341A (en) * 2013-02-04 2013-06-05 北京航空航天大学 Lane line reconstruction device based on Bezier curve
DE102014109063A1 (en) * 2014-06-27 2015-12-31 Connaught Electronics Ltd. Method for detecting an object having a predetermined geometric shape in a surrounding area of a motor vehicle, camera system and motor vehicle
CN104657727A (en) * 2015-03-18 2015-05-27 厦门麦克玛视电子信息技术有限公司 Lane line detection method
CN107031623A (en) * 2017-03-16 2017-08-11 浙江零跑科技有限公司 A kind of road method for early warning based on vehicle-mounted blind area camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A robust lane detection method for autonomous car-like robot;Sun T 等;《2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP). IEEE》;20110624;第373-378页 *

Also Published As

Publication number Publication date
CN109190483A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109190483B (en) A Vision-Based Lane Line Detection Method
CN111563412B (en) Rapid lane line detection method based on parameter space voting and Bessel fitting
CN105678285B (en) A kind of adaptive road birds-eye view transform method and road track detection method
CN109752701B (en) Road edge detection method based on laser point cloud
Kaur et al. Lane detection techniques: A review
CN105005771B (en) A kind of detection method of the lane line solid line based on light stream locus of points statistics
CN102682292B (en) Road edge detection and rough positioning method based on monocular vision
CN107025432B (en) A kind of efficient lane detection tracking and system
CN103500322B (en) Automatic lane line identification method based on low latitude Aerial Images
CN104021378B (en) Traffic lights real-time identification method based on space time correlation Yu priori
CN101608924B (en) Method for detecting lane lines based on grayscale estimation and cascade Hough transform
CN110210451B (en) A zebra crossing detection method
CN109435942A (en) A kind of parking stall line parking stall recognition methods and device based on information fusion
CN111079611A (en) Automatic extraction method for road surface and marking line thereof
CN103927526A (en) A vehicle detection method based on Gaussian difference multi-scale edge fusion
CN103206957B (en) The lane detection and tracking method of vehicular autonomous navigation
CN101470807A (en) Accurate detection method for highroad lane marker line
CN105654073A (en) Automatic speed control method based on visual detection
CN106156723A (en) A kind of crossing fine positioning method of view-based access control model
CN104036246A (en) Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN111539303A (en) Monocular vision-based vehicle driving deviation early warning method
CN111694011A (en) Road edge detection method based on data fusion of camera and three-dimensional laser radar
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
CN102201054A (en) Method for detecting street lines based on robust statistics
CN104063882A (en) Vehicle video speed measuring method based on binocular camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant