
CN109426800B - Lane line detection method and device - Google Patents

Lane line detection method and device

Info

Publication number
CN109426800B
Authority
CN
China
Prior art keywords
lane line
data
detection result
current
line detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810688772.7A
Other languages
Chinese (zh)
Other versions
CN109426800A (en)
Inventor
刘思远
王明东
侯晓迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tusimple Inc
Original Assignee
Tusimple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/683,463 (US10373003B2)
Priority claimed from US15/683,494 (US10482769B2)
Application filed by Tusimple Inc
Publication of CN109426800A
Application granted
Publication of CN109426800B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract


The invention discloses a lane line detection method and device to solve the problem of inaccurate positioning in prior-art lane line detection schemes. The method includes: a lane line detection device acquires current perception data of the driving environment of the vehicle, wherein the current perception data includes current frame image data and current positioning data; lane line template data is acquired, wherein the lane line template data is the lane line detection result data obtained by the previous lane line detection processing; current lane line image data is extracted according to the perception data; and current lane line detection result data is determined according to the lane line image data and the lane line template data, wherein the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane line.


Description

Lane line detection method and device
Technical Field
The invention relates to the field of computer vision, in particular to a lane line detection method and a lane line detection device.
Background
Currently, one of the main research goals of Advanced Driver Assistance Systems (ADAS) is to improve the safety of the vehicle itself and of vehicle driving, and to reduce road accidents. Intelligent vehicles and unmanned vehicles are expected to solve the problems of road safety, traffic problems and passenger comfort. Lane line detection is a complex and challenging task in research on smart vehicles and unmanned vehicles. As a main part of the road, the lane line provides a reference for the unmanned vehicle and guides safe driving. Lane line detection includes road positioning, determining the relative positional relationship between the vehicle and the road, and determining the traveling direction of the vehicle.
In current technical solutions, lane line detection is usually implemented according to images acquired by a camera and a positioning signal provided by a GPS device. However, the accuracy of the lane line position information and of the relative position information between the lane line and the vehicle determined by this method is low, and cannot satisfy the driving demands of an autonomous vehicle. That is, the existing technical scheme of lane line detection suffers from low positioning accuracy.
Disclosure of Invention
In view of this, embodiments of the present invention provide a lane line detection method and apparatus, so as to solve the problem of inaccurate positioning in the existing lane line detection technology.
In one aspect, an embodiment of the present application provides a lane line detection method, including:
the method comprises the steps that a lane line detection device obtains current perception data of the driving environment of a vehicle; the current perception data comprises current frame image data and current positioning data;
acquiring lane line template data; the lane line template data is lane line detection result data obtained by last lane line detection processing;
extracting current lane line image data according to the perception data;
determining to obtain current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative position relationship between the vehicle and the lane line.
On the other hand, the embodiment of the present application provides a lane line detection device, including:
an acquisition unit, configured to acquire current perception data of the driving environment of the vehicle, wherein the current perception data comprises current frame image data and positioning data, and to acquire lane line template data, wherein the lane line template data is the lane line detection result data obtained by the previous lane line detection processing;
the extraction unit is used for extracting current lane line image data according to the perception data;
the determining unit is used for determining and obtaining the current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative position relationship between the vehicle and the lane line.
In another aspect, an embodiment of the present application provides a lane line detection apparatus, including a processor and at least one memory, where the at least one memory stores at least one machine executable instruction, and the processor executes the at least one machine executable instruction to perform:
acquiring current perception data of a driving environment of a vehicle; the current perception data comprises current frame image data and current positioning data;
acquiring lane line template data; the lane line template data is lane line detection result data obtained by last lane line detection processing;
extracting current lane line image data according to the perception data;
determining to obtain current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative position relationship between the vehicle and the lane line.
According to the technical scheme provided by the embodiments of the invention, the lane line detection device acquires current sensing data of the driving environment of the vehicle, extracts lane line image data from the current sensing data, obtains the lane line detection result data (i.e., the lane line template data) produced by the previous lane line detection processing, and determines the current lane line detection result data according to the current lane line image data and the previous lane line detection result data. Because the previous lane line detection result data includes relatively accurate lane line positioning information, it can provide positioning reference information for the current lane line detection processing. Compared with the prior art, in which lane line detection is carried out only by means of the currently acquired sensing data, more accurate lane line detection can be performed and more accurate positioning information determined, thereby solving the problem that prior-art lane line detection schemes are inaccurate in positioning.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a processing flow chart of a lane line detection method according to an embodiment of the present disclosure;
FIG. 2a is an example of lane line image data;
fig. 2b is another processing flow chart of the lane line detection method according to the embodiment of the present disclosure;
FIG. 3a is a flowchart of the process of step 104 in FIG. 1 or FIG. 2b;
fig. 3b is another processing flow chart of the lane line detection method according to the embodiment of the present disclosure;
fig. 4 is another processing flow chart of the lane line detection method according to the embodiment of the present application;
FIG. 5 is a flowchart of the process of step 105 in FIG. 4;
FIG. 6a is a flowchart of the process of step 106 in FIG. 4;
FIG. 6b is another flowchart of the process of step 106 in FIG. 4;
FIG. 7 is an example image;
FIG. 8 is a diagram illustrating an example of the expanded lane line of step 1061 in FIG. 6 a;
FIG. 9 is a schematic view of the expanded lane line of FIG. 8 after adjustment;
fig. 10 is a block diagram of a lane line detection apparatus according to an embodiment of the present application;
fig. 11 is another structural block diagram of a lane line detection apparatus according to an embodiment of the present application;
fig. 12 is another block diagram of the lane line detection apparatus according to the embodiment of the present application;
fig. 13 is another structural block diagram of the lane line detection apparatus according to the embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the drawings in the embodiment of the present invention, and it is obvious that the described embodiment is only a part of the embodiment of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Aiming at the problem of inaccurate positioning in the technical scheme of lane line detection in the prior art, the embodiment of the application provides a lane line detection method and a lane line detection device, which are used for solving the problem.
In the lane line detection scheme provided in the embodiments of the present application, the lane line detection apparatus obtains current sensing data of the driving environment of the vehicle, extracts lane line image data from the current sensing data, obtains the lane line detection result data (i.e., lane line template data) produced by the previous lane line detection processing, and determines the current lane line detection result data according to the current lane line image data and the previous lane line detection result data. Because the previous lane line detection result data includes relatively accurate lane line positioning information, it can provide positioning reference information for the current lane line detection processing. Compared with the prior art, in which lane line detection is carried out only by means of the currently acquired sensing data, more accurate lane line detection can be performed and more accurate positioning information determined, thereby solving the problem that prior-art lane line detection schemes are inaccurate in positioning.
The foregoing is the core idea of the present invention, and in order to make the technical solutions in the embodiments of the present invention better understood and make the above objects, features and advantages of the embodiments of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are further described in detail with reference to the accompanying drawings.
Fig. 1 shows a processing flow of a lane line detection method provided in an embodiment of the present application, where the method includes the following processing procedures:
step 101, acquiring current sensing data of a driving environment of a vehicle by a lane line detection device; the current perception data comprises current frame image data and current positioning data;
102, acquiring lane line template data; the lane line template data is lane line detection result data obtained by last lane line detection processing;
103, extracting current lane line image data according to the perception data;
104, determining to obtain current lane line detection result data according to the lane line image data and the lane line template; the current lane line detection result data includes data expressing the relative position relationship between the vehicle and the lane line.
Wherein, the execution sequence of the step 102 and the step 103 is not in sequence.
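The four-step flow of steps 101 to 104 can be sketched in Python. All names in this sketch are hypothetical and only illustrate the data flow described above, with the current result feeding back as the next template:

```python
def detect_lane_lines(perception_data, template, extract, fit):
    """One lane line detection cycle.

    perception_data: current frame image + positioning data (step 101)
    template:        result data from the previous detection (step 102)
    extract:         extracts lane line image data (step 103)
    fit:             fits the template to the image data (step 104)
    """
    lane_image = extract(perception_data)   # step 103
    result = fit(lane_image, template)      # step 104
    return result                           # becomes the next template


# Toy stand-ins for the extraction and fitting stages.
template = {"lanes": [[0.0, 3.75]]}
perception = {"image": "frame_0", "gps": (0.0, 0.0)}
result = detect_lane_lines(
    perception, template,
    extract=lambda p: p["image"],
    fit=lambda img, tpl: {"lanes": tpl["lanes"], "source": img},
)
print(result["source"])  # frame_0
```

Note that, as stated above, steps 102 and 103 are order-independent: the template fetch and the image extraction do not depend on each other.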
The above-described implementation is described in detail below.
In the above step 101, the current perception data of the driving environment of the vehicle may be acquired by a perception device mounted on the vehicle. For example, at least one vehicle-mounted camera acquires at least one frame of current frame image data, and a positioning device acquires current positioning data, wherein the positioning device includes a Global Positioning System (GPS) and/or an Inertial Measurement Unit (IMU). The perception data may further include map data of the current driving environment, laser radar (LIDAR) data. The map data may be real map data acquired in advance, or map data provided by a Simultaneous Localization and Mapping unit (SLAM) of the vehicle.
In step 102, the lane line template data is the lane line detection result data obtained in the previous lane line detection process, and includes lane line position information and data on the relative positional relationship between the vehicle and the lane line. The lane line template data (i.e., lane line detection result data) may be expressed as 3D space data from a top view angle, for example, in a coordinate system in which the vehicle traveling direction is the Y axis and the direction perpendicular to the vehicle traveling direction is the X axis.
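As a minimal illustration of such a top-view vehicle frame (the representation and function below are hypothetical, not the patent's implementation), a lane line stored as sampled (x, y) points can yield the vehicle-to-lane lateral distance at any longitudinal position:

```python
def lateral_offset_at(lane_points, y):
    """Linearly interpolate the lane's x position at longitudinal distance y.

    lane_points: [(x, y), ...] in the top-view frame, Y along travel direction.
    """
    (x0, y0), (x1, y1) = lane_points[0], lane_points[-1]
    t = (y - y0) / (y1 - y0)
    return x0 + t * (x1 - x0)


# A near-straight lane sampled 0 m and 20 m ahead; with the vehicle at x = 0,
# the offset at y = 0 is the vehicle-to-lane distance in the result data.
lane = [(2.0, 0.0), (3.0, 20.0)]
print(lateral_offset_at(lane, 0.0))   # 2.0
print(lateral_offset_at(lane, 10.0))  # 2.5
```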
After the lane line template data is obtained in the last lane line detection process, the lane line template data may be stored in a storage device, which may be a local storage of the lane line detection apparatus, another storage in the vehicle, or a remote storage.
The lane line detection device may read the lane line template data from the storage device in the current lane line detection process, or may receive the lane line template data in a predetermined processing cycle.
In the embodiments of the application, the lane line detection result data obtained by the previous lane line detection processing includes the position information of the lane line and the relative positional relationship between the vehicle and the lane line. Between two adjacent frames of image data, the change in the position of the lane line is small, so the lane line has continuity and stability, and the relative positional relationship between the vehicle and the lane line is relatively stable and changes continuously.
In the step 103, the process of extracting the current lane line image data according to the sensing data may be implemented in various ways.
In a first mode, semantic segmentation is applied to the current frame image data acquired by each of the at least one camera: a pre-trained algorithm or model classifies and labels the pixels of the current frame image data, and the current lane line image data is extracted from the labeled pixels.
The pre-trained algorithm or model may be obtained by iteratively training a neural network on ground truth data of the driving environment and image data obtained by the camera.
In a second mode, the current lane line image data can also be extracted and obtained in an object recognition mode according to the current frame image data and the current positioning data.
An example of one lane line image data is shown in fig. 2 a.
The present application only lists the above two methods for extracting lane line image data; the current lane line image data may also be obtained by other processing methods, which are not strictly limited in the present application.
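As a toy illustration of the first (semantic segmentation) mode, the sketch below uses a brightness threshold as a stand-in for the pre-trained per-pixel classifier, purely to show the input and output shapes; a real system would run a trained segmentation network here:

```python
def extract_lane_pixels(frame, lane_threshold=200):
    """Return (row, col) coordinates of pixels classified as lane line."""
    lane_pixels = []
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value >= lane_threshold:    # per-pixel classification stand-in
                lane_pixels.append((r, c))
    return lane_pixels


# A tiny 3x4 grayscale "image": bright pixels stand in for a lane marking.
frame = [
    [10, 255, 10, 10],
    [10, 250, 10, 10],
    [10, 240, 10, 10],
]
print(extract_lane_pixels(frame))  # [(0, 1), (1, 1), (2, 1)]
```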
In some embodiments of the present application, since the result of the previous lane line detection process is not necessarily fully consistent with prior knowledge or common sense, the lane line template data needs to be further adjusted before performing step 104, as shown in fig. 2b, which includes:
step 104S, adjusting the lane lines in the lane line template data according to the current sensing data, the prior knowledge and/or the preset constraint conditions; wherein the a priori knowledge or predetermined constraints comprise physical metric parameters or data expressions regarding the road structure.
For example, prior knowledge or constraints may include: (1) the lane lines on the road are parallel to each other; (2) a curved lane line is an arc; (3) the length of a curved lane line is less than 300 meters; (4) the distance between adjacent lane lines is between 3 and 4 meters, for example about 3.75 meters; (5) the color of the lane line differs from the color of the other parts of the road. The prior knowledge or constraint conditions may also include other contents or data according to the needs of a specific application scenario; the embodiments of the present application do not impose strict limitations on this.
Through the adjustment, the lane line template data can be adjusted to be more in line with the priori knowledge or the conventional principle, and more accurate positioning reference information is provided for determining the current lane line detection result data.
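A minimal sketch of how two of the listed constraints (parallel lanes, 3-4 m spacing) might be enforced on template lanes; the line model (x = offset + slope * y in the top-view frame) and the function are hypothetical illustrations, not the patent's algorithm:

```python
def adjust_lanes(lanes, min_gap=3.0, max_gap=4.0, nominal_gap=3.75):
    """Force parallelism (shared mean slope) and re-space implausible gaps.

    lanes: list of (lateral_offset, slope) pairs in the top-view frame.
    """
    mean_slope = sum(s for _, s in lanes) / len(lanes)
    offsets = sorted(o for o, _ in lanes)
    adjusted = [offsets[0]]
    for o in offsets[1:]:
        gap = o - adjusted[-1]
        if not (min_gap <= gap <= max_gap):   # constraint (4): 3-4 m spacing
            gap = nominal_gap
        adjusted.append(adjusted[-1] + gap)
    return [(o, mean_slope) for o in adjusted]  # constraint (1): parallel


# Two lanes with diverging slopes and an implausible 5.0 m gap.
lanes = [(0.0, 0.0), (5.0, 0.06)]
print(adjust_lanes(lanes))  # [(0.0, 0.03), (3.75, 0.03)]
```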
The processing in step 104 may specifically be to map the lane line template data to the lane line image data, and obtain current lane line detection result data according to the mapping result, and the processing may be implemented in various ways.
For example, the lane line template data and the lane line image data are subjected to coordinate conversion, the lane line template data subjected to the coordinate conversion is projected into the lane line image data subjected to the coordinate conversion, and current lane line detection result data is obtained by fitting according to a predetermined formula or algorithm and a projection result. The processing of step 104 may also be implemented in other ways.
In addition to the foregoing implementation, an embodiment of the present application provides an implementation based on machine learning, as shown in fig. 3a, specifically including:
step 1041, inputting the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; wherein the loss function is a function expressing the positional relationship of a lane line between the lane line template data and the lane line image data, and the cost value is the distance between the lane line in the lane line template data and the lane line in the lane line image data;
step 1042, iteratively modifying the position of the lane line in the lane line template data under the condition that the difference value of the two adjacent cost values is greater than a preset threshold value; and under the condition that the difference value of the two adjacent cost values is less than or equal to the preset threshold value, finishing the iterative processing and obtaining the current lane line detection result data.
The operation of iteratively modifying the position of the lane line in the lane line template data can be realized by a gradient descent algorithm.
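A one-dimensional toy version of steps 1041-1042, with the squared lateral distance as a stand-in loss and gradient descent stopping when the cost change between adjacent iterations falls below a threshold. A real system would fit full lane curves, not a single offset; the function and parameters are hypothetical:

```python
def fit_template(template_offset, observed_offset, lr=0.25, eps=1e-6):
    """Shift the template lane toward the observed lane by gradient descent."""
    def loss(t):                        # cost value: squared distance
        return (t - observed_offset) ** 2

    prev_cost = loss(template_offset)
    while True:
        grad = 2.0 * (template_offset - observed_offset)
        template_offset -= lr * grad    # iteratively modify lane position
        cost = loss(template_offset)
        if abs(prev_cost - cost) <= eps:  # stop: adjacent cost diff small
            return template_offset
        prev_cost = cost


fitted = fit_template(template_offset=2.0, observed_offset=1.6)
print(round(fitted, 3))  # 1.6
```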
Further, in some embodiments of the present application, the loss function may be optimized according to the continuously accumulated lane line detection result data, so as to enhance the accuracy, stability and robustness of the loss function.
By the processing shown in fig. 3a, the distance between the lane line in the lane line template data and the lane line in the lane line image data is continuously measured by the loss function, and the lane line in the lane line template data is continuously fitted to the lane line in the lane line image data by the gradient descent algorithm, so that a more accurate current lane line detection result can be obtained. This result may be 3D space data from a top view angle, including data expressing the relative positional relationship between the vehicle and the lane line, and may also include data expressing the position of the lane line.
Through the lane line detection processing shown in fig. 1, more accurate positioning reference information of the lane line and the vehicle can be obtained by adopting the result data of the last lane line detection, and the result data of the last lane line detection is projected into the current lane line image data to be fitted to obtain the current lane line detection result data, so that more accurate positioning information of the current lane line and the vehicle can be obtained, and the problem that more accurate lane line detection cannot be performed in the prior art can be solved.
In addition, in the embodiment of the application, the sensing data further includes various data, such as map data, which can further provide more accurate positioning information for the lane line detection processing, so as to obtain lane line detection result data with higher accuracy.
Further, in some embodiments, as shown in step 104t of fig. 3b, the current lane line detection result data obtained through the above-described processing is determined as lane line template data of the next lane line detection processing.
Alternatively, in other embodiments, the lane marking detection result data obtained in step 104 may be further checked and optimally adjusted to ensure that lane marking template data with more accurate positioning information is provided for the next lane marking detection process.
FIG. 4 illustrates a lane line inspection and optimization process following the method illustrated in FIG. 1, including:
105, checking the current lane line detection result data;
106, under the condition of passing the inspection, optimizing and adjusting the current lane line detection result data to obtain lane line template data for next lane line detection processing; in the case of a failed verification, the current lane line detection result data is discarded.
As shown in fig. 5, step 105 includes the following processing steps:
step 1051, determining the confidence of the current lane line detection result data according to a confidence model obtained by pre-training;
specifically, the current lane line detection result data may be provided as an input to a confidence model, which outputs a confidence corresponding to the current lane line detection result data.
The confidence model is obtained by training a deep neural network in advance on historical lane line detection result data and lane line ground truth data. The confidence model represents the correspondence between lane line detection result data and confidence.
In the process of training the deep neural network on historical lane line detection result data and lane line ground truth data, the historical lane line detection result data are first compared with the ground truth data. The historical lane line detection results are then classified or labeled according to the comparison results, for example, labeling detection result data a, c and d as successful detections and detection result data b and e as failed detections. A neural network is then trained on the labeled historical lane line detection result data and the ground truth data to obtain the confidence model. The trained confidence model can reflect the success probability or failure probability (i.e., confidence) of lane line detection result data.
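The labeling step described above can be sketched as follows. The tolerance value, the offset-error criterion and the data are invented for illustration; the patent does not specify how the comparison against ground truth is scored:

```python
def label_results(history, ground_truth, tolerance=0.5):
    """Label each historical result by its lateral error vs ground truth (m)."""
    labels = {}
    for name, offset in history.items():
        error = abs(offset - ground_truth[name])
        labels[name] = "success" if error <= tolerance else "failure"
    return labels


# Results a, c, d land near the true lane; b, e are off by more than 0.5 m,
# matching the example labeling in the text.
history = {"a": 1.9, "b": 3.1, "c": 2.1, "d": 1.8, "e": 0.2}
truth = {k: 2.0 for k in history}
print(label_results(history, truth))
```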
Step 1052, under the condition that the obtained confidence coefficient is determined to meet the preset detection condition, the detection is successful; in the case where it is determined that the obtained confidence does not meet the predetermined test condition, the test fails.
For example, the test conditions may include: if the confidence indicates a success probability greater than or equal to X%, the check is determined to be successful; otherwise, the check fails.
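A minimal sketch of this confidence gate with a stubbed confidence model; the threshold value and the stub's heuristic are hypothetical stand-ins for the trained model:

```python
CONFIDENCE_THRESHOLD = 0.9          # the "X%" in the test condition


def passes_check(result, confidence_model):
    """Step 1052: accept the result only if its confidence meets the bar."""
    return confidence_model(result) >= CONFIDENCE_THRESHOLD


def stub_confidence_model(result):
    # Stand-in for the trained model: fewer than two lanes -> low confidence.
    return 0.95 if len(result["lanes"]) >= 2 else 0.4


print(passes_check({"lanes": [[0.0], [3.75]]}, stub_confidence_model))  # True
print(passes_check({"lanes": [[0.0]]}, stub_confidence_model))          # False
```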
Further, in some embodiments of the present application, the confidence model may also be optimally trained according to the continuously accumulated lane line detection result data and the lane line real data, and the processing procedure of the optimal training is similar to the processing of the confidence model obtained by training, and is not described here again.
As shown in fig. 6a, step 106 is the following process:
step 1061, under the condition of passing the inspection, expanding the lane lines in the current lane line detection result data;
specifically, the process of expanding the lane line may include:
s1, copying and translating the lane lines at the edges according to the lane line structure in the lane line detection result data;
step S2, under the condition that the lane line detection result data can include the copied and translated lane line, keeping the copied and translated lane line and storing new lane line detection result data;
in step S3, when the duplicated and translated lane line cannot be included in the lane line detection result data, the duplicated and translated lane line is discarded.
For example, as shown in fig. 7, the current lane line detection result data includes two lane lines, CL1 and CL2, and new lane lines EL1 and EL2 can be obtained by expanding the lane lines. Fig. 8 is an expanded lane line displayed on the lane line image data.
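Steps S1-S3 can be sketched in one dimension: the edge lanes are copied and shifted outward by the detected lane spacing, and a copy is kept only if it falls within a (hypothetical) lateral range that the result data can represent. The offsets and the limit are illustrative, not from the patent:

```python
def expand_lanes(offsets, lateral_limit=10.0):
    """offsets: sorted lateral positions of detected lanes (e.g. CL1, CL2)."""
    spacing = offsets[1] - offsets[0]
    expanded = list(offsets)
    for candidate in (offsets[0] - spacing, offsets[-1] + spacing):  # S1
        if abs(candidate) <= lateral_limit:   # S2: keep if representable
            expanded.append(candidate)
        # S3: otherwise the copied lane is discarded
    return sorted(expanded)


# Two detected lanes (CL1, CL2); expansion adds EL1 and EL2 on either side.
print(expand_lanes([-1.875, 1.875]))  # [-5.625, -1.875, 1.875, 5.625]
```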
And 1062, adjusting the lane lines in the lane line template data according to the current sensing data, the prior knowledge and/or the preset constraint conditions to obtain lane line template data for next lane line detection processing.
The adjustment process may refer to step 104S described above.
Fig. 9 shows an example in which, for the expanded lane line shown in fig. 8, the lane line EL2 is adjusted to obtain an adjusted lane line EL2′; the adjusted lane line EL2′ is closer to a straight line than the lane line EL2 before adjustment.
In some embodiments of the present application, step 104S and step 1062 may be provided simultaneously. In other embodiments of the present application, one of step 104S and step 1062 may be provided.
Step 1063, discarding the current lane line detection result data when the detection fails.
Further, as shown in step 1064 of fig. 6b, after discarding the current lane line detection result data, a preset lane line template data is determined as the lane line template data for the next lane line detection process. The lane line template data may be general lane line template data, lane line template data corresponding to a type of driving environment, or lane line template data of a specific driving environment. For example, the lane line template data may be one applicable to all environments, or may be lane line template data of one highway environment, lane line template data of an urban road, or lane line template data of a specific road where a vehicle is located. The preset lane line template data can be specifically set according to the requirements of specific application scenarios.
The preset lane line template data may be pre-stored locally in the lane line detection device, may be pre-stored in the automatic driving processing device of the vehicle, or may be stored in a remote server. When the lane line detection device needs to acquire the preset lane line template data, the lane line detection device can acquire the preset lane line template data in a reading or remote request and receiving mode.
Through the optimization adjustment processing shown in fig. 4, the embodiment of the present application can obtain lane line template data containing more accurate positioning information; compared with the lane line template data obtained by the method shown in fig. 1, the lane line template data obtained by the method shown in fig. 4 enables the lane line detection method provided by the embodiment of the application to have higher stability and robustness.
Based on the same inventive concept, the embodiment of the application also provides a lane line detection device.
Fig. 10 is a block diagram illustrating a structure of a lane line detection apparatus according to an embodiment of the present application, where the lane line detection apparatus includes:
an acquisition unit 11, configured to acquire current perception data of the driving environment of a vehicle, wherein the current perception data comprises current frame image data and positioning data, and to acquire lane line template data, wherein the lane line template data is the lane line detection result data obtained by the previous lane line detection processing;
the perception data further comprises at least one of the following: map data, LIDAR (LIDAR) data of a current driving environment; the positioning data comprises GPS positioning data and/or inertial navigation positioning data;
the extraction unit 12 is used for extracting current lane line image data according to the perception data;
a determining unit 13, configured to determine to obtain current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative position relationship between the vehicle and the lane line.
The lane line template data and the lane line detection result data are 3D space data from a top view angle.
In some embodiments, the extracting unit 12 extracts lane line image data from the current frame image data, including: and extracting lane line image data from the current frame image data according to an object recognition method or semantic segmentation.
The determining unit 13 determines the current lane line detection result data according to the lane line image data and the lane line template data as follows: the lane line template data is mapped into the lane line image data, and the current lane line detection result data is obtained by fitting according to the mapping result. Further, the determining unit 13 inputs the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; the loss function is a function expressing the positional relationship of the lane lines between the lane line template data and the lane line image data, and the cost value is the distance between the lane lines in the lane line template data and the lane lines in the lane line image data. While the difference between two consecutive cost values is greater than a predetermined threshold, the positions of the lane lines in the lane line template data are iteratively modified; once the difference between two consecutive cost values is less than or equal to the predetermined threshold, the iterative processing ends and the current lane line detection result data is obtained. In some application scenarios, the determining unit 13 iteratively modifies the positions of the lane lines in the lane line template data using a gradient descent algorithm.
Before the determining unit 13 determines the current lane line detection result data according to the lane line image data and the lane line template data, the determining unit 13 may further adjust the lane line template data by adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints; the prior knowledge or predetermined constraints include object metric parameters or data representations relating to the road structure.
Further, the determining unit 13 is also configured to determine the current lane line detection result data as the lane line template data for the next lane line detection processing.
In other embodiments, the lane line detection apparatus may further include, as shown in fig. 11:
a checking unit 14 for checking the current lane line detection result data;
an optimization unit 15, configured to optimize and adjust the current lane line detection result data when the check by the checking unit 14 passes, so as to obtain the lane line template data for the next lane line detection processing, and to discard the current lane line detection result data when the check fails.
The checking unit 14 checks the current lane line detection result data as follows: the confidence of the current lane line detection result data is determined according to a pre-trained confidence model; the check succeeds if the obtained confidence meets a predetermined check condition, and fails if it does not.
Further, as shown in fig. 12, the lane line detection apparatus provided in the embodiment of the present application may further include: a pre-training unit 16, configured to train a deep neural network in advance according to historical lane line detection result data and lane line ground-truth data to obtain the confidence model; the confidence model represents the correspondence between lane line detection result data and confidence.
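A minimal sketch of the confidence check. The patent trains a deep neural network; as a simplification this uses a single logistic layer as a stand-in confidence model, with hypothetical feature vectors, labels, and names. The check passes only when the predicted confidence meets the predetermined condition (here taken to be a threshold).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_confidence_model(features, labels, lr=0.5, epochs=500):
    """Fit a logistic model mapping detection-result features to a confidence
    in [0, 1], trained against labels marking agreement with ground truth.
    (A stand-in for the deep neural network described in the text.)"""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(features @ w + b)
        grad_w = features.T @ (p - labels) / len(labels)   # logistic-loss gradients
        grad_b = np.mean(p - labels)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def check_detection(features, w, b, min_confidence=0.5):
    """Return (confidence, passed): the check succeeds only when the
    predicted confidence meets the predetermined condition."""
    confidence = float(sigmoid(features @ w + b))
    return confidence, confidence >= min_confidence
```

On a pass the detection result would go on to the optimization step; on a failure it would be discarded, as described above.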
The optimization unit 15 optimizes and adjusts the current lane line detection result data as follows: the lane lines in the current lane line detection result data are expanded, and the lane lines in the lane line template data are adjusted according to the current perception data, prior knowledge and/or predetermined constraints to obtain the lane line template data for the next lane line detection processing; the prior knowledge or predetermined constraints include physical metric parameters or data representations of the road structure.
The optimization unit 15 expands the lane lines in the current lane line detection result data as follows: according to the lane line structure in the lane line detection result data, the edge lane lines in the lane line detection result data are copied and translated; if the lane line detection result data can include a copied and translated lane line, that lane line is kept and the new lane line detection result data is saved; if it cannot, the copied and translated lane line is discarded.
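The copy-and-translate expansion of edge lane lines can be sketched as follows, under illustrative assumptions not spelled out in the text: lane lines are represented by their lateral offsets from the vehicle, the translation step is one lane width, and "can be included" is taken to mean the copy stays within the drivable road width. The names and the acceptance rule are assumptions.

```python
def expand_lane_lines(lane_lines, lane_width, road_half_width):
    """Copy the outermost (edge) lane lines and translate each copy outward
    by one lane width; keep a copy only if it still fits inside the road,
    otherwise discard it."""
    # lane_lines: lateral offsets (metres) of the detected lane lines
    lines = sorted(lane_lines)
    candidates = [lines[0] - lane_width, lines[-1] + lane_width]
    for c in candidates:
        if abs(c) <= road_half_width:   # the result data can include this copy
            lines.append(c)             # keep the copied, translated lane line
        # otherwise the copied, translated lane line is discarded
    return sorted(lines)
```

For a two-line detection on a wide road this yields four lines; on a road too narrow for the copies, the result is unchanged.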
Further, the optimization unit 15 is also configured to determine a preset lane line template data as the lane line template data for the next lane line detection processing after the current lane line detection result data is discarded.
With the lane line detection apparatus provided by the embodiment of the present application, relatively accurate positioning reference information for the lane lines and the vehicle can be obtained by using the previous lane line detection result data; by projecting the previous lane line detection result data onto the current lane line image data and fitting the current lane line detection result data, relatively accurate positioning information for the current lane lines and the vehicle can be obtained, thereby solving the problem in the prior art that lane lines cannot be detected with sufficient accuracy.
Based on the same inventive concept, the embodiment of the application also provides a lane line detection device.
As shown in fig. 13, the lane line detection apparatus provided in the embodiment of the present application includes a processor 131 and at least one memory 132, where the at least one memory stores at least one machine executable instruction, and the processor executes the at least one machine executable instruction to perform:
acquiring current perception data of a driving environment of a vehicle; the current perception data comprises current frame image data and current positioning data;
acquiring lane line template data; the lane line template data is lane line detection result data obtained by last lane line detection processing;
extracting current lane line image data according to the perception data;
determining current lane line detection result data according to the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane lines.
The lane line template data and the lane line detection result data are 3D spatial data from a top-down view. The perception data further includes at least one of the following: map data of the current driving environment and LiDAR data. The positioning data includes GPS positioning data and/or inertial navigation positioning data.
In some embodiments, the processor 131 executes the at least one machine executable instruction to extract the lane line image data from the current frame image data using an object recognition method or a semantic segmentation method.
The processor 131 executes the at least one machine executable instruction to determine the current lane line detection result data according to the lane line image data and the lane line template data as follows: the lane line template data is mapped into the lane line image data, and the current lane line detection result data is obtained by fitting according to the mapping result. The processing may specifically include: inputting the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; the loss function is a function expressing the positional relationship of the lane lines between the lane line template data and the lane line image data, and the cost value is the distance between the lane lines in the lane line template data and the lane lines in the lane line image data. While the difference between two consecutive cost values is greater than a predetermined threshold, the positions of the lane lines in the lane line template data are iteratively modified; once the difference between two consecutive cost values is less than or equal to the predetermined threshold, the iterative processing ends and the current lane line detection result data is obtained. In some application scenarios, the processor 131 may execute the at least one machine executable instruction to iteratively modify the positions of the lane lines in the lane line template data using a gradient descent algorithm.
Before determining the current lane line detection result data according to the lane line image data and the lane line template data, the processor 131 executes the at least one machine executable instruction to adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints; the prior knowledge or predetermined constraints include physical metric parameters or data representations of the road structure.
The processor executing the at least one machine executable instruction further performs: and determining the current lane line detection result data as lane line template data for next lane line detection processing.
In other embodiments, the processor 131 executes the at least one machine executable instruction to further: check the current lane line detection result data; if the check passes, optimize and adjust the current lane line detection result data to obtain the lane line template data for the next lane line detection processing; if the check fails, discard the current lane line detection result data.
The processor 131 executes the at least one machine executable instruction to check the current lane line detection result data as follows: the confidence of the current lane line detection result data is determined according to a pre-trained confidence model; the check succeeds if the obtained confidence meets a predetermined check condition, and fails if it does not.
The processor 131 further executes the at least one machine executable instruction to pre-train the confidence model: a deep neural network is trained in advance according to historical lane line detection result data and lane line ground-truth data to obtain the confidence model; the confidence model represents the correspondence between lane line detection result data and confidence.
The processor 131 executes the at least one machine executable instruction to optimize and adjust the current lane line detection result data as follows: the lane lines in the current lane line detection result data are expanded, and the lane lines in the lane line template data are adjusted according to the current perception data, prior knowledge and/or predetermined constraints to obtain the lane line template data for the next lane line detection processing; the prior knowledge or predetermined constraints include physical metric parameters or data representations of the road structure.
The processor 131 executes the at least one machine executable instruction to expand the lane lines in the current lane line detection result data as follows: according to the lane line structure in the lane line detection result data, the edge lane lines in the lane line detection result data are copied and translated; if the lane line detection result data can include a copied and translated lane line, that lane line is kept and the new lane line detection result data is saved; if it cannot, the copied and translated lane line is discarded.
After the current lane line detection result data is discarded, the processor 131 executes the at least one machine executable instruction to determine a preset lane line template data as the lane line template data for the next lane line detection processing.
With the lane line detection apparatus provided by the embodiment of the present application, relatively accurate positioning reference information for the lane lines and the vehicle can be obtained by using the previous lane line detection result data; by projecting the previous lane line detection result data onto the current lane line image data and fitting the current lane line detection result data, relatively accurate positioning information for the current lane lines and the vehicle can be obtained, thereby solving the problem in the prior art that lane lines cannot be detected with sufficient accuracy.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (39)

1.一种车道线检测方法,其特征在于,包括:1. a lane line detection method, is characterized in that, comprises: 车道线检测装置获取车辆的驾驶环境的当前感知数据;其中,当前感知数据包括当前帧图像数据和当前定位数据;The lane line detection device obtains current perception data of the driving environment of the vehicle; wherein, the current perception data includes current frame image data and current positioning data; 获取车道线模板数据;其中,车道线模板数据是上一帧车道线检测处理得到的车道线检测结果数据;Obtaining lane line template data; wherein, the lane line template data is the lane line detection result data obtained by the lane line detection processing of the previous frame; 根据感知数据提取得到当前的车道线图像数据;Extract the current lane line image data according to the perception data; 根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据;其中,当前的车道线检测结果数据中包括表达车辆和车道线的相对位置关系的数据;According to the lane line image data and the lane line template data, it is determined to obtain the current lane line detection result data; wherein, the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane line; 根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据,包括:According to the lane line image data and the lane line template data, it is determined to obtain the current lane line detection result data, including: 将车道线模板数据映射到车道线图像数据中,根据映射结果拟合得到当前的车道线检测结果数据;Map the lane line template data to the lane line image data, and fit the current lane line detection result data according to the mapping result; 其中,将车道线模板数据映射到车道线图像数据中,根据映射结果拟合得到当前的车道线检测结果数据,包括:Among them, the lane line template data is mapped to the lane line image data, and the current lane line detection result data is obtained by fitting according to the mapping result, including: 将车道线图像数据和车道线模板数据输入到一个预定的损失函数中,该损失函数输出一个代价值;其中,该损失函数是一个表达车道线模板数据与车道线图像数据之间的车道线的位置关系的函数,该代价值为车道线模板数据中的车道线与车道线图像数据中的车道线之间的距离;The lane line image data and the lane line template data are input into a predetermined loss function, and the loss function outputs a cost value; wherein, the loss function is an expression of the lane line between 
the lane line template data and the lane line image data. The function of the position relationship, the cost value is the distance between the lane line in the lane line template data and the lane line in the lane line image data; 在相邻两次代价值的差值大于一个预定阈值的情况下,迭代修改车道线模板数据中的车道线的位置;在相邻两次代价值的差值小于或等于该预定阈值的情况下,结束迭代处理,并得到当前的车道线检测结果数据。If the difference between the two adjacent cost values is greater than a predetermined threshold, iteratively revise the position of the lane line in the lane line template data; if the difference between the two adjacent cost values is less than or equal to the predetermined threshold, end Iterative processing, and get the current lane line detection result data. 2.根据权利要求1所述的方法,其特征在于,迭代修改车道线模板数据中的车道线的位置,包括:2. The method according to claim 1, wherein the iteratively modifying the position of the lane line in the lane line template data comprises: 采用梯度下降算法迭代修改车道线模板数据中的车道线的位置。The gradient descent algorithm is used to iteratively modify the position of the lane lines in the lane line template data. 3.根据权利要求1所述的方法,其特征在于,在根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据之前,所述方法还包括:3. The method according to claim 1, wherein, before determining to obtain the current lane line detection result data according to the lane line image data and the lane line template data, the method further comprises: 根据当前感知数据、先验知识和/或预定的约束条件对车道线模板数据中的车道线进行调整;Adjust the lane lines in the lane line template data according to current perception data, prior knowledge and/or predetermined constraints; 其中,先验知识或者预定的约束条件包括关于道路结构的物理度量参数或数据表达。Wherein, the prior knowledge or predetermined constraints include physical measurement parameters or data representations about the road structure. 4.根据权利要求1所述的方法,其特征在于,还包括:4. 
The method of claim 1, further comprising: 对当前的车道线检测结果数据进行检验;Check the current lane line detection result data; 在检验通过的情况下,对当前的车道线检测结果数据进行优化调整,得到用于下一次车道线检测处理的车道线模板数据;在检验失败的情况下,抛弃当前的车道线检测结果数据。In the case of passing the test, the current lane line detection result data is optimized and adjusted to obtain the lane line template data for the next lane line detection processing; in the case of failure of the test, the current lane line detection result data is discarded. 5.根据权利要求4所述的方法,其特征在于,对当前的车道线检测结果数据进行检验,包括:5. The method according to claim 4, wherein the current lane line detection result data is checked, comprising: 根据预先训练得到的置信度模型,确定得到当前的车道线检测结果数据的置信度;According to the confidence model obtained by pre-training, determine the confidence of the current lane line detection result data; 在确定得到的置信度符合预定的检验条件的情况下,检验成功;在确定得到的置信度不符合预定的检验条件的情况下,检验失败。If it is determined that the obtained confidence level meets the predetermined inspection condition, the inspection is successful; if it is determined that the obtained confidence level does not meet the predetermined inspection condition, the inspection fails. 6.根据权利要求5所述的方法,其特征在于,所述方法还包括预先训练得到置信度模型,包括:6. The method according to claim 5, wherein the method further comprises pre-training to obtain a confidence model, comprising: 预先根据历史的车道线检测结果数据和车道线的真实数据,训练深度神经网络得到置信度模型;置信度模型用于表示车道线检测结果数据与置信度之间的对应关系。In advance, according to the historical lane line detection result data and the real data of the lane line, the deep neural network is trained to obtain the confidence model; the confidence model is used to represent the correspondence between the lane line detection result data and the confidence. 7.根据权利要求4所述的方法,其特征在于,对当前的车道线检测结果数据进行优化调整,包括:7. 
The method according to claim 4, wherein optimizing and adjusting the current lane line detection result data, comprising: 对当前的车道线检测结果数据中的车道线进行扩展;Extend the lane lines in the current lane line detection result data; 根据当前感知数据、先验知识和/或预定的约束条件对车道线模板数据中的车道线进行调整,得到用于下一次车道线检测处理的车道线模板数据;其中,先验知识或者预定的约束条件包括关于道路结构的物理度量参数或数据表达。Adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints to obtain lane line template data for the next lane line detection processing; Constraints include physical metrics or data representations about the road structure. 8.根据权利要求7所述的方法,其特征在于,对当前的车道线检测结果数据中的车道线进行扩展,包括:8. The method according to claim 7, wherein extending the lane lines in the current lane line detection result data, comprising: 根据车道线检测结果数据中的车道线结构,对车道线检测结果数据中的边缘车道线进行复制平移;Copy and translate the edge lane lines in the lane line detection result data according to the lane line structure in the lane line detection result data; 在车道线检测结果数据中能够包括复制平移的车道线的情况下,保留复制平移的车道线,并保存新的车道线检测结果数据;In the case that the copied and translated lane lines can be included in the lane line detection result data, keep the copied and translated lane lines, and save the new lane line detection result data; 在车道线检测结果数据中无法包括复制平移的车道线的情况下,放弃复制平移的车道线。In the case that the copied and translated lane lines cannot be included in the lane line detection result data, the copied and translated lane lines are discarded. 9.根据权利要求4所述的方法,其特征在于,在抛弃当前的车道线检测结果数据后,将一个预设的车道线模板数据确定为用于下一次车道线检测处理的车道线模板数据。9 . The method according to claim 4 , wherein after discarding the current lane line detection result data, a preset lane line template data is determined as the lane line template data for the next lane line detection processing. 10 . . 10.根据权利要求1所述的方法,其特征在于,所述方法还包括:10. 
The method of claim 1, wherein the method further comprises: 将当前的车道线检测结果数据确定为用于下一次车道线检测处理的车道线模板数据。Determine the current lane line detection result data as lane line template data for the next lane line detection processing. 11.根据权利要求1所述的方法,其特征在于,车道线模板数据和车道线检测结果数据为俯视角度的3D空间数据。11 . The method according to claim 1 , wherein the lane line template data and the lane line detection result data are 3D space data from a top view angle. 12 . 12.根据权利要求1所述的方法,其特征在于,从当前帧图像数据中提取出车道线图像数据,包括:12. The method according to claim 1, wherein extracting lane line image data from the current frame image data comprises: 根据物体识别的方法或者语义分割的方法从当前帧图像数据中提取出车道线图像数据。The lane line image data is extracted from the current frame image data according to the method of object recognition or the method of semantic segmentation. 13.根据权利要求1所述的方法,其特征在于,感知数据中还包括至少以下之一:当前驾驶环境的地图数据、激光雷达(LIDAR)数据;13. The method according to claim 1, wherein the perception data further comprises at least one of the following: map data of the current driving environment, and lidar (LIDAR) data; 定位数据包括GPS定位数据和/或惯性导航定位数据。The positioning data includes GPS positioning data and/or inertial navigation positioning data. 14.一种车道线检测装置,其特征在于,包括:14. 
A lane line detection device, comprising: 获取单元,用于获取车辆的驾驶环境的当前感知数据,其中,当前感知数据包括当前帧图像数据和定位数据;获取车道线模板数据,其中,车道线模板数据是上一帧车道线检测处理得到的车道线检测结果数据;an acquiring unit, used for acquiring current perception data of the driving environment of the vehicle, wherein the current perception data includes current frame image data and positioning data; acquiring lane line template data, wherein the lane line template data is obtained by processing the lane lines of the previous frame The lane line detection result data; 提取单元,用于根据感知数据提取得到当前的车道线图像数据;an extraction unit, used for extracting the current lane line image data according to the perception data; 确定单元,用于根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据;其中,当前的车道线检测结果数据中包括表达车辆和车道线的相对位置关系的数据;a determining unit, configured to determine and obtain the current lane line detection result data according to the lane line image data and the lane line template data; wherein, the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane line; 确定单元根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据,包括:The determining unit determines and obtains the current lane line detection result data according to the lane line image data and the lane line template data, including: 将车道线模板数据映射到车道线图像数据中,根据映射结果拟合得到当前的车道线检测结果数据;Map the lane line template data to the lane line image data, and fit the current lane line detection result data according to the mapping result; 其中,确定单元将车道线模板数据映射到车道线图像数据中,根据映射结果拟合得到当前的车道线检测结果数据,包括:The determining unit maps the lane line template data to the lane line image data, and obtains the current lane line detection result data by fitting according to the mapping result, including: 将车道线图像数据和车道线模板数据输入到一个预定的损失函数中,该损失函数输出一个代价值;其中,该损失函数是一个表达车道线模板数据与车道线图像数据之间的车道线的位置关系的函数,该代价值为车道线模板数据中的车道线与车道线图像数据中的车道线之间的距离;The lane line image data and the lane line template data are input into a predetermined loss function, and the loss function outputs a cost value; wherein, the loss function is an 
expression of the lane line between the lane line template data and the lane line image data. The function of the position relationship, the cost value is the distance between the lane line in the lane line template data and the lane line in the lane line image data; 在相邻两次代价值的差值大于一个预定阈值的情况下,迭代修改车道线模板数据中的车道线的位置;在相邻两次代价值的差值小于或等于该预定阈值的情况下,结束迭代处理,并得到当前的车道线检测结果数据。If the difference between the two adjacent cost values is greater than a predetermined threshold, iteratively revise the position of the lane line in the lane line template data; if the difference between the two adjacent cost values is less than or equal to the predetermined threshold, end Iterative processing, and get the current lane line detection result data. 15.根据权利要求14所述的装置,其特征在于,确定单元迭代修改车道线模板数据中的车道线的位置,包括:15. The apparatus according to claim 14, wherein the determining unit iteratively modifies the position of the lane line in the lane line template data, comprising: 采用梯度下降算法迭代修改车道线模板数据中的车道线的位置。The gradient descent algorithm is used to iteratively modify the position of the lane lines in the lane line template data. 16.根据权利要求14所述的装置,其特征在于,确定模块在根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据之前,还用于:16. The device according to claim 14, wherein the determining module is further used for: before determining to obtain the current lane line detection result data according to the lane line image data and the lane line template data: 根据当前感知数据、先验知识和/或预定的约束条件对车道线模板数据中的车道线进行调整;Adjust the lane lines in the lane line template data according to current perception data, prior knowledge and/or predetermined constraints; 其中,先验知识或者预定的约束条件包括关于道路结构的物体度量参数或数据表达。Wherein, the prior knowledge or the predetermined constraint conditions include object metric parameters or data representations about the road structure. 17.根据权利要求14所述的装置,其特征在于,所述装置还包括:17. 
The apparatus of claim 14, wherein the apparatus further comprises: 检验单元,用于对当前的车道线检测结果数据进行检验;The inspection unit is used to inspect the current lane line detection result data; 优化单元,在检验单元检验通过的情况下,对当前的车道线检测结果数据进行优化调整,得到用于下一次车道线检测处理的车道线模板数据;在检验失败的情况下,抛弃当前的车道线检测结果数据。The optimization unit, in the case of passing the inspection of the inspection unit, optimizes and adjusts the current lane line detection result data to obtain the lane line template data for the next lane line detection processing; in the case of failure of the inspection, discards the current lane Line detection result data. 18.根据权利要求17所述的装置,其特征在于,检验单元对当前的车道线检测结果数据进行检验,包括:18. The device according to claim 17, wherein the checking unit checks the current lane line detection result data, comprising: 根据预先训练得到的置信度模型,确定得到当前的车道线检测结果数据的置信度;According to the confidence model obtained by pre-training, determine the confidence of the current lane line detection result data; 在确定得到的置信度符合预定的检验条件的情况下,检验成功;在确定得到的置信度不符合预定的检验条件的情况下,检验失败。If it is determined that the obtained confidence level meets the predetermined inspection condition, the inspection is successful; if it is determined that the obtained confidence level does not meet the predetermined inspection condition, the inspection fails. 19.根据权利要求18所述的装置,其特征在于,所述装置还包括:19. The apparatus of claim 18, wherein the apparatus further comprises: 预训练单元,用于预先根据历史的车道线检测结果数据和车道线的真实数据,训练深度神经网络得到置信度模型;置信度模型用于表示车道线检测结果数据与置信度之间的对应关系。The pre-training unit is used to train the deep neural network to obtain the confidence model according to the historical lane line detection result data and the real data of the lane line in advance; the confidence model is used to represent the correspondence between the lane line detection result data and the confidence level . 20.根据权利要求17所述的装置,其特征在于,优化单元对当前的车道线检测结果数据进行优化调整,包括:20. 
The device according to claim 17, wherein the optimization unit optimizes and adjusts the current lane line detection result data, comprising: 对当前的车道线检测结果数据中的车道线进行扩展;Extend the lane lines in the current lane line detection result data; 根据当前感知数据、先验知识和/或预定的约束条件对车道线模板数据中的车道线进行调整,得到用于下一次车道线检测处理的车道线模板数据;其中,先验知识或者预定的约束条件包括关于道路结构的物理度量参数或者数据表达。Adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints to obtain lane line template data for the next lane line detection processing; Constraints include physical metrics or data representations about the road structure. 21.根据权利要求20所述的装置,其特征在于,优化单元对当前的车道线检测结果数据中的车道线进行扩展,包括:21. The device according to claim 20, wherein the optimization unit expands the lane lines in the current lane line detection result data, comprising: 根据车道线检测结果数据中的车道线结构,对车道线检测结果数据中的边缘车道线进行复制平移;Copy and translate the edge lane lines in the lane line detection result data according to the lane line structure in the lane line detection result data; 在车道线检测结果数据中能够包括复制平移的车道线的情况下,保留复制平移的车道线,并保存新的车道线检测结果数据;In the case that the copied and translated lane lines can be included in the lane line detection result data, keep the copied and translated lane lines, and save the new lane line detection result data; 在车道线检测结果数据中无法包括复制平移的车道线的情况下,放弃复制平移的车道线。In the case that the copied and translated lane lines cannot be included in the lane line detection result data, the copied and translated lane lines are discarded. 22.根据权利要求17所述的装置,其特征在于,优化单元在抛弃当前的车道线检测结果数据后,还用于将一个预设的车道线模板数据确定为用于下一次车道线检测处理的车道线模板数据。22. The device according to claim 17, characterized in that, after discarding the current lane line detection result data, the optimization unit is also used to determine a preset lane line template data to be used for the next lane line detection processing lane line template data. 23.根据权利要求14所述的装置,其特征在于,确定单元还用于将当前的车道线检测结果数据确定为用于下一次车道线检测处理的车道线模板数据。23. 
The apparatus according to claim 14, wherein the determining unit is further configured to determine the current lane line detection result data as lane line template data for the next lane line detection processing. 24.根据权利要求14所述的装置,其特征在于,提取单元从当前帧图像数据中提取出车道线图像数据,包括:24. The device according to claim 14, wherein the extraction unit extracts the lane line image data from the current frame image data, comprising: 根据物体识别的方法或者语义分割从当前帧图像数据中提取出车道线图像数据。The lane line image data is extracted from the current frame image data according to the method of object recognition or semantic segmentation. 25.根据权利要求14所述的装置,其特征在于,车道线模板数据和车道线检测结果数据为俯视角度的3D空间数据。25 . The device according to claim 14 , wherein the lane line template data and the lane line detection result data are 3D space data from a top view angle. 26 . 26.根据权利要求14所述的装置,其特征在于,感知数据中还包括至少以下之一:当前驾驶环境的地图数据、激光雷达(LIDAR)数据;26. The device according to claim 14, wherein the perception data further comprises at least one of the following: map data of the current driving environment, and lidar (LIDAR) data; 定位数据包括GPS定位数据和/或惯性导航定位数据。The positioning data includes GPS positioning data and/or inertial navigation positioning data. 27.一种车道线检测装置,其特征在于,包括一个处理器和至少一个存储器,至少一个存储器中存储至少一条机器可执行指令,处理器执行至少一条机器可执行指令以执行:27. 
A lane line detection device, characterized by comprising a processor and at least one memory, wherein at least one machine-executable instruction is stored in the at least one memory, and the processor executes the at least one machine-executable instruction to execute: 获取车辆的驾驶环境的当前感知数据;其中,当前感知数据包括当前帧图像数据和当前定位数据;Obtain current perception data of the driving environment of the vehicle; wherein, the current perception data includes current frame image data and current positioning data; 获取车道线模板数据;其中,车道线模板数据是上一帧车道线检测处理得到的车道线检测结果数据;Obtaining lane line template data; wherein, the lane line template data is the lane line detection result data obtained by the lane line detection processing of the previous frame; 根据感知数据提取得到当前的车道线图像数据;Extract the current lane line image data according to the perception data; 根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据;其中,当前的车道线检测结果数据中包括表达车辆和车道线的相对位置关系的数据;According to the lane line image data and the lane line template data, it is determined to obtain the current lane line detection result data; wherein, the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane line; 处理器执行至少一条机器可执行指令执行根据车道线图像数据和车道线模板数据,确定得到当前的车道线检测结果数据,包括:The processor executes at least one machine-executable instruction to determine and obtain the current lane line detection result data according to the lane line image data and the lane line template data, including: 将车道线模板数据映射到车道线图像数据中,根据映射结果拟合得到当前的车道线检测结果数据;Map the lane line template data to the lane line image data, and fit the current lane line detection result data according to the mapping result; 其中,处理器执行至少一条机器可执行指令执行将车道线模板数据映射到车道线图像数据中,根据映射结果拟合得到当前的车道线检测结果数据,包括:The processor executes at least one machine-executable instruction to map the lane line template data into the lane line image data, and obtains the current lane line detection result data by fitting according to the mapping result, including: 
inputting the lane line image data and the lane line template data into a predetermined loss function that outputs a cost value, wherein the loss function expresses the positional relationship between the lane lines in the lane line template data and the lane lines in the lane line image data, and the cost value is the distance between the lane lines in the lane line template data and the lane lines in the lane line image data; and

when the difference between two adjacent cost values is greater than a predetermined threshold, iteratively modifying the positions of the lane lines in the lane line template data; when the difference between two adjacent cost values is less than or equal to the predetermined threshold, ending the iterative processing and obtaining the current lane line detection result data.

28. The apparatus according to claim 27, wherein the processor executes the at least one machine-executable instruction to iteratively modify the positions of the lane lines in the lane line template data by: iteratively modifying the positions of the lane lines in the lane line template data using a gradient descent algorithm.

29.
The apparatus according to claim 27, wherein the processor executes the at least one machine-executable instruction to further perform, before determining the current lane line detection result data according to the lane line image data and the lane line template data: adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints, wherein the prior knowledge or the predetermined constraints comprise physical metric parameters or data representations of the road structure.

30. The apparatus according to claim 27, wherein the processor executes the at least one machine-executable instruction to further perform: checking the current lane line detection result data; when the check passes, optimizing and adjusting the current lane line detection result data to obtain lane line template data for the next lane line detection processing; and when the check fails, discarding the current lane line detection result data.

31. The apparatus according to claim 30, wherein the processor executes the at least one machine-executable instruction to check the current lane line detection result data by: determining the confidence of the current lane line detection result data according to a pre-trained confidence model; when the determined confidence meets a predetermined check condition, the check passes; and when the determined confidence does not meet the predetermined check condition, the check fails.
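The iterative fitting described above (a predetermined loss function scores the template against the extracted lane line image data, and the template is revised by gradient descent until the change in cost between two adjacent iterations falls below a threshold) can be sketched as follows. This is a minimal illustration, not the patented implementation: the quadratic point-to-line cost, the single-line `[slope, intercept]` parameterization, the normalized image coordinates, and the names `cost` and `fit_template` are all assumptions made for illustration.

```python
import numpy as np

def cost(template, lane_pixels):
    """Mean squared distance from extracted lane-line pixels to the template line.

    template:    (2,) array [m, b] describing one lane line as x = m*y + b
    lane_pixels: (N, 2) array of (x, y) points labelled as lane line,
                 with coordinates assumed normalized to roughly [0, 1].
    """
    m, b = template
    predicted_x = m * lane_pixels[:, 1] + b
    return np.mean((lane_pixels[:, 0] - predicted_x) ** 2)

def fit_template(template, lane_pixels, lr=0.5, threshold=1e-9, max_iter=5000):
    """Gradient-descent fit; stops when the change between two adjacent
    cost values is at most `threshold` (the stopping rule of claim 27)."""
    prev = cost(template, lane_pixels)
    cur = prev
    for _ in range(max_iter):
        m, b = template
        residual = (m * lane_pixels[:, 1] + b) - lane_pixels[:, 0]
        # Analytic gradient of the mean-squared cost w.r.t. [m, b].
        grad_m = 2.0 * np.mean(residual * lane_pixels[:, 1])
        grad_b = 2.0 * np.mean(residual)
        template = template - lr * np.array([grad_m, grad_b])
        cur = cost(template, lane_pixels)
        if abs(prev - cur) <= threshold:
            break
        prev = cur
    return template, cur
```

In this sketch the "mapping" of the template into the image reduces to evaluating the template line at the pixels' y-coordinates; the patent leaves the exact form of the loss function and the lane-line representation open.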
32. The apparatus according to claim 31, wherein the processor executes the at least one machine-executable instruction to further perform pre-training to obtain the confidence model by: training a deep neural network on historical lane line detection result data and ground-truth lane line data to obtain the confidence model, wherein the confidence model represents the correspondence between lane line detection result data and confidence.

33. The apparatus according to claim 30, wherein the processor executes the at least one machine-executable instruction to optimize and adjust the current lane line detection result data by: extending the lane lines in the current lane line detection result data; and adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraints to obtain the lane line template data for the next lane line detection processing, wherein the prior knowledge or the predetermined constraints comprise physical metric parameters or data representations of the road structure.

34.
The apparatus according to claim 33, wherein the processor executes the at least one machine-executable instruction to extend the lane lines in the current lane line detection result data by: copying and translating the edge lane lines in the lane line detection result data according to the lane line structure in the lane line detection result data; when the copied and translated lane lines can be included in the lane line detection result data, retaining the copied and translated lane lines and saving the new lane line detection result data; and when the copied and translated lane lines cannot be included in the lane line detection result data, discarding the copied and translated lane lines.

35. The apparatus according to claim 30, wherein the processor executes the at least one machine-executable instruction to further perform, after discarding the current lane line detection result data: determining preset lane line template data as the lane line template data for the next lane line detection processing.

36. The apparatus according to claim 27, wherein the processor executes the at least one machine-executable instruction to further perform: determining the current lane line detection result data as the lane line template data for the next lane line detection processing.

37. The apparatus according to claim 27, wherein the lane line template data and the lane line detection result data are 3D spatial data from a top-view angle.

38.
The apparatus according to claim 27, wherein the processor executes the at least one machine-executable instruction to extract the lane line image data from the current frame image data by: extracting the lane line image data from the current frame image data by object recognition or semantic segmentation.

39. The apparatus according to claim 27, wherein the perception data further comprises at least one of: map data of the current driving environment and lidar (LIDAR) data; and the positioning data comprises GPS positioning data and/or inertial navigation positioning data.
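The per-frame template bookkeeping in the claims above (check the current result with a confidence model; a passing result becomes the template for the next frame, while a failing result is discarded in favor of a preset template) reduces to a small decision function. The sketch below is illustrative only: the names `next_template` and `confidence_of`, the callable stand-in for the pre-trained confidence model, and the fixed threshold are assumptions, and the optimization/extension step applied to a passing result is omitted for brevity.

```python
def next_template(detection_result, confidence_of, preset_template,
                  min_confidence=0.5):
    """Return the lane line template to use for the next frame.

    detection_result: current frame's lane line detection result data.
    confidence_of:    callable standing in for the pre-trained confidence
                      model (result -> score), assumed here to map into [0, 1].
    preset_template:  fallback template used when the check fails.
    """
    if confidence_of(detection_result) >= min_confidence:
        # Check passes: reuse the result as the next frame's template.
        return detection_result
    # Check fails: discard the result and fall back to the preset template.
    return preset_template
```

A usage example: `next_template(result, model.score, preset)` would feed each frame's output back into the next detection pass only when the confidence model vouches for it.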
CN201810688772.7A 2017-08-22 2018-06-28 A kind of lane line detection method and device Active CN109426800B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/683,463 US10373003B2 (en) 2017-08-22 2017-08-22 Deep module and fitting module system and method for motion-based lane detection with multiple sensors
US15/683,494 US10482769B2 (en) 2017-08-22 2017-08-22 Post-processing module system and method for motioned-based lane detection with multiple sensors
US US15/683,494 2017-08-22
US US15/683,463 2017-08-22

Publications (2)

Publication Number Publication Date
CN109426800A CN109426800A (en) 2019-03-05
CN109426800B true CN109426800B (en) 2021-08-13

Family

ID=65514491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810688772.7A Active CN109426800B (en) 2017-08-22 2018-06-28 A kind of lane line detection method and device

Country Status (1)

Country Link
CN (1) CN109426800B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111295666A (en) * 2019-04-29 2020-06-16 深圳市大疆创新科技有限公司 Lane line detection method, device, control equipment and storage medium
CN110595490B (en) * 2019-09-24 2021-12-14 百度在线网络技术(北京)有限公司 Preprocessing method, device, equipment and medium for lane line perception data
CN112154449B (en) * 2019-09-26 2025-01-07 深圳市卓驭科技有限公司 Lane line fusion method, lane line fusion device, vehicle and storage medium
CN111439259B (en) * 2020-03-23 2020-11-27 成都睿芯行科技有限公司 Agricultural garden scene lane deviation early warning control method and system based on end-to-end convolutional neural network
CN111898540B (en) * 2020-07-30 2024-07-09 平安科技(深圳)有限公司 Lane line detection method, lane line detection device, computer equipment and computer readable storage medium
CN112180923A (en) * 2020-09-23 2021-01-05 深圳裹动智驾科技有限公司 Automatic driving method, intelligent control equipment and automatic driving vehicle
CN112699747B (en) * 2020-12-21 2024-07-26 阿波罗智联(北京)科技有限公司 Method and device for determining vehicle state, road side equipment and cloud control platform
CN113167885B (en) * 2021-03-03 2022-05-31 华为技术有限公司 Lane line detection method and lane line detection device
CN113175937B (en) * 2021-06-29 2021-09-28 天津天瞳威势电子科技有限公司 A method and device for evaluating lane line perception results

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9286524B1 (en) * 2015-04-15 2016-03-15 Toyota Motor Engineering & Manufacturing North America, Inc. Multi-task deep convolutional neural networks for efficient and robust traffic lane detection
WO2016130719A2 (en) * 2015-02-10 2016-08-18 Amnon Shashua Sparse map for autonomous vehicle navigation
US9443320B1 (en) * 2015-05-18 2016-09-13 Xerox Corporation Multi-object tracking with generic object proposals
CN106611147A (en) * 2015-10-15 2017-05-03 腾讯科技(深圳)有限公司 Vehicle tracking method and device
CN106845385A (en) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 The method and apparatus of video frequency object tracking

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9429943B2 (en) * 2012-03-05 2016-08-30 Florida A&M University Artificial intelligence valet systems and methods
US20150112765A1 (en) * 2013-10-22 2015-04-23 LinkedIn Corporation Systems and methods for determining recruiting intent
CN104700072B (en) * 2015-02-06 2018-01-19 中国科学院合肥物质科学研究院 Recognition methods based on lane line historical frames
CN105046235B (en) * 2015-08-03 2018-09-07 百度在线网络技术(北京)有限公司 The identification modeling method and device of lane line, recognition methods and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene;Jun Li et al.;《IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS》;20170331;第28卷(第3期);第690-703页 *
A real-time lane line detection algorithm based on inter-frame association;Li Chao et al.;《Computer Science》;20170228;vol. 44(no. 2);pp. 318-321 *

Also Published As

Publication number Publication date
CN109426800A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
CN109426800B (en) A kind of lane line detection method and device
US11176701B2 (en) Position estimation system and position estimation method
EP3607272B1 (en) Automated image labeling for vehicle based on maps
EP3732657B1 (en) Vehicle localization
KR102483649B1 (en) Vehicle localization method and vehicle localization apparatus
US20250054320A1 (en) Camera initialization for lane detection and distance estimation using single-view geometry
CN109767637B (en) Method and device for identifying and processing countdown signal lamp
EP2458336B1 (en) Method and system for reporting errors in a geographic database
JP7645258B2 (en) Map data updates
US20180045516A1 (en) Information processing device and vehicle position detecting method
US10718628B2 (en) Host vehicle position estimation device
CN103770704A (en) System and method for recognizing parking space line markings for vehicle
JPWO2019111976A1 (en) Object detection device, prediction model creation device, object detection method and program
CN114543819B (en) Vehicle positioning method, device, electronic equipment and storage medium
JP7461399B2 (en) Method and device for assisting the running operation of a motor vehicle, and motor vehicle
KR20180067199A (en) Apparatus and method for recognizing object
US11908206B2 (en) Compensation for vertical road curvature in road geometry estimation
Hui et al. Vision-HD: road change detection and registration using images and high-definition maps
CN115705720A (en) Training of 3D lane detection models for automotive applications
CN114821530A (en) Deep learning-based lane line detection method and system
JP2021018823A (en) Method and system for improving detection capability of driving support system based on machine learning
CN114821539A (en) Lane line detection method, device, equipment and medium based on neural network
CN109325962B (en) Information processing method, device, equipment and computer readable storage medium
CN114729816A (en) Method for detecting traffic map changes by means of a classifier
CN114972494B (en) Map construction method and device for memorizing parking scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant