Detailed Description
To enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Aiming at the problem of inaccurate positioning in prior-art lane line detection schemes, embodiments of the present application provide a lane line detection method and a lane line detection apparatus.
In the lane line detection scheme provided in the embodiments of the present application, the lane line detection apparatus acquires current perception data of the driving environment of the vehicle, extracts lane line image data from the current perception data, obtains the lane line detection result data produced by the previous lane line detection process (i.e., the lane line template data), and determines the current lane line detection result data from the current lane line image data and the previous result data. Because the previous lane line detection result data contains relatively accurate lane line positioning information, it provides positioning reference information for the current detection process. Compared with the prior art, which performs lane line detection using only the currently acquired perception data, this scheme detects lane lines more accurately and determines more accurate positioning information, thereby solving the problem of inaccurate positioning in prior-art lane line detection schemes.
The foregoing is the core idea of the present invention. To make the technical solutions in the embodiments of the present invention better understood, and to make the above objects, features and advantages more apparent, the technical solutions are further described in detail below with reference to the accompanying drawings.
Fig. 1 shows the processing flow of a lane line detection method provided in an embodiment of the present application. The method includes the following steps:
Step 101: the lane line detection apparatus acquires current perception data of the driving environment of the vehicle; the current perception data includes current frame image data and current positioning data.
Step 102: acquire lane line template data; the lane line template data is the lane line detection result data obtained by the previous lane line detection process.
Step 103: extract current lane line image data from the perception data.
Step 104: determine the current lane line detection result data from the lane line image data and the lane line template data; the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane lines.
Steps 102 and 103 may be performed in either order.
The above implementation is described in detail below.
In step 101 above, the current perception data of the driving environment of the vehicle may be acquired by perception devices mounted on the vehicle. For example, at least one vehicle-mounted camera acquires at least one frame of current frame image data, and a positioning device acquires the current positioning data, where the positioning device includes a Global Positioning System (GPS) receiver and/or an Inertial Measurement Unit (IMU). The perception data may further include map data of the current driving environment and laser radar (LIDAR) data. The map data may be real map data acquired in advance, or map data provided by a Simultaneous Localization and Mapping (SLAM) unit of the vehicle.
In step 102, the lane line template data is the lane line detection result data obtained by the previous lane line detection process, and includes lane line position information and data on the relative positional relationship between the vehicle and the lane lines. The lane line template data (i.e., the lane line detection result data) may be expressed as 3D spatial data from a top-view angle, for example in a vehicle coordinate system whose Y axis is the vehicle traveling direction and whose X axis is perpendicular to the traveling direction.
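For concreteness, the top-view representation described above can be sketched as follows. This is a hypothetical layout in Python; the names LaneLine and LaneTemplate and the choice of a polyline representation are assumptions of this sketch, not data structures defined by the embodiment.

```python
# Hypothetical sketch of top-view lane line template data.
# Coordinate convention from the embodiment: Y axis = vehicle traveling
# direction, X axis = perpendicular to the traveling direction.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LaneLine:
    # Polyline points (x, y) in the vehicle coordinate system, in metres.
    points: List[Tuple[float, float]]

@dataclass
class LaneTemplate:
    lanes: List[LaneLine] = field(default_factory=list)
    # Data expressing the vehicle-to-lane-line relative position, e.g. the
    # signed lateral offset of the vehicle from a reference lane line.
    lateral_offset_m: float = 0.0
```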
After the lane line template data is obtained in the previous lane line detection process, it may be stored in a storage device, which may be local storage of the lane line detection apparatus, another storage in the vehicle, or remote storage.
The lane line detection apparatus may read the lane line template data from the storage device in the current lane line detection process, or may receive the lane line template data at a predetermined processing cycle.
In the embodiments of the present application, the lane line detection result data obtained by the previous lane line detection process includes the position information of the lane lines and the relative positional relationship between the vehicle and the lane lines. Between two adjacent frames of image data, the positions of the lane lines change little and exhibit continuity and stability, and the relative positional relationship between the vehicle and the lane lines is likewise relatively stable and changes continuously; the previous result therefore provides a reliable positioning reference for the current detection.
In step 103, the current lane line image data may be extracted from the perception data in various ways.
In a first mode, semantic segmentation is applied to the current frame image data acquired by each of the at least one camera: an algorithm or model obtained by pre-training classifies and marks the pixels of the current frame image data, and the current lane line image data is extracted from the marked pixels.
The pre-trained algorithm or model may be obtained by iteratively training a neural network on real data (ground truth) of the driving environment and image data acquired by the camera.
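As a rough illustration of the first mode, the per-pixel classification step might look like the following sketch. The model is assumed to be any pre-trained segmentation network returning a per-pixel class map, and LANE_CLASS is a hypothetical label index; neither is specified by the embodiment.

```python
# Hedged sketch of semantic-segmentation-based lane pixel extraction.
import numpy as np

LANE_CLASS = 1  # hypothetical class index for lane line pixels

def extract_lane_pixels(frame: np.ndarray, model) -> np.ndarray:
    """frame: H x W x 3 current frame image; returns an H x W boolean mask.

    `model` stands in for the pre-trained algorithm or model described
    above and is assumed to return an H x W integer class map.
    """
    labels = model(frame)
    return labels == LANE_CLASS  # the current lane line image data
```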
In a second mode, the current lane line image data may also be extracted by object recognition from the current frame image data and the current positioning data.
An example of lane line image data is shown in Fig. 2a.
The above two modes are only examples; the current lane line image data may also be obtained by other processing methods, which the present application does not strictly limit.
In some embodiments of the present application, since the result of the previous lane line detection process does not necessarily fully conform to prior knowledge or common principles, the lane line template data is further adjusted before step 104 is performed, as shown in Fig. 2b:
Step 104S: adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions; the prior knowledge or predetermined constraints include physical metric parameters or data expressions regarding the road structure.
For example, the prior knowledge or constraints may include: (1) the lane lines on a road are parallel to each other; (2) a curved lane line is an arc; (3) the length of a curved lane line is less than 300 meters; (4) the distance between adjacent lane lines is between 3 and 4 meters, for example about 3.75 meters; (5) the color of a lane line differs from the color of the rest of the road. The prior knowledge or constraint conditions may also include other contents or data according to the needs of a specific application scenario; the embodiments of the present application do not strictly limit them.
Through this adjustment, the lane line template data can be brought more into line with prior knowledge or common principles, providing more accurate positioning reference information for determining the current lane line detection result data.
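As one hedged illustration of step 104S, the sketch below enforces only prior (4), the 3-to-4-metre lane spacing, on the lateral positions of the template lane lines; the function name and the snap-to-3.75 m policy are assumptions of this sketch, and the remaining priors would be enforced analogously.

```python
# Sketch: adjust template lane lines so adjacent spacing respects prior (4).
def enforce_lane_spacing(lateral_offsets, min_gap=3.0, max_gap=4.0,
                         nominal=3.75):
    """lateral_offsets: sorted X positions (metres) of template lane lines."""
    if not lateral_offsets:
        return []
    adjusted = [lateral_offsets[0]]
    for x in lateral_offsets[1:]:
        gap = x - adjusted[-1]
        if gap < min_gap or gap > max_gap:
            x = adjusted[-1] + nominal  # snap to a nominal lane width
        adjusted.append(x)
    return adjusted
```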
The processing in step 104 may specifically map the lane line template data onto the lane line image data and obtain the current lane line detection result data from the mapping result; this may be implemented in various ways.
For example, the lane line template data and the lane line image data are subjected to coordinate conversion, the converted lane line template data is projected into the converted lane line image data, and the current lane line detection result data is obtained by fitting the projection result according to a predetermined formula or algorithm. The processing of step 104 may also be implemented in other ways.
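By way of illustration, projecting converted template points into the image could be sketched as below, assuming a 3x3 ground-to-image homography H is available from camera calibration; the homography and the function name are assumptions of this sketch rather than elements recited by the embodiment.

```python
# Sketch: project top-view template points into image coordinates.
import numpy as np

def project_template(points_xy: np.ndarray, H: np.ndarray) -> np.ndarray:
    """points_xy: N x 2 top-view lane points; H: 3 x 3 homography.

    Returns N x 2 pixel coordinates for overlay on the lane line image data.
    """
    n = points_xy.shape[0]
    homogeneous = np.hstack([points_xy, np.ones((n, 1))])  # N x 3
    projected = (H @ homogeneous.T).T                      # N x 3
    return projected[:, :2] / projected[:, 2:3]            # normalise by w
```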
In addition to the foregoing implementation, an embodiment of the present application provides an implementation based on machine learning, as shown in Fig. 3a, specifically including:
Step 1041: input the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value. The loss function expresses the positional relationship between the lane lines in the lane line template data and those in the lane line image data, and the cost value is the distance between the lane lines in the template data and the lane lines in the image data.
Step 1042: while the difference between two successive cost values is greater than a predetermined threshold, iteratively modify the positions of the lane lines in the lane line template data; when the difference between two successive cost values is less than or equal to the predetermined threshold, end the iteration and obtain the current lane line detection result data.
The iterative modification of the lane line positions in the template data can be realized by a gradient descent algorithm, as sketched below.
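The iteration of steps 1041 and 1042 can be sketched as follows. The mean squared lateral residual used as the loss, and the assumption that template points and observed lane points are in one-to-one correspondence, are simplifications of this sketch; the embodiment does not prescribe a particular loss formula.

```python
# Sketch: fit template lane positions to observed lane positions by
# gradient descent, stopping when two successive cost values differ by
# no more than a predetermined threshold (step 1042).
import numpy as np

def cost(template_x: np.ndarray, observed_x: np.ndarray) -> float:
    # Distance between template lane lines and image lane lines.
    return float(np.mean((template_x - observed_x) ** 2))

def fit_template(template_x, observed_x, lr=0.1, eps=1e-4, max_iter=500):
    prev = cost(template_x, observed_x)
    for _ in range(max_iter):
        grad = 2.0 * (template_x - observed_x) / template_x.size
        template_x = template_x - lr * grad   # gradient descent update
        cur = cost(template_x, observed_x)
        if abs(prev - cur) <= eps:            # difference within threshold
            break
        prev = cur
    return template_x                         # current detection result
```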
Further, in some embodiments of the present application, the loss function may be optimized according to the continuously accumulated lane line detection result data, so as to enhance its accuracy, stability and robustness.
With the processing shown in Fig. 3a, the loss function continuously measures the distance between the lane lines in the template data and those in the image data, and the gradient descent algorithm continuously fits the template lane lines to the image lane lines, so a more accurate current lane line detection result can be obtained. The result may be 3D spatial data from a top-view angle, including data expressing the relative positional relationship between the vehicle and the lane lines, and may also include data expressing the positions of the lane lines.
Through the lane line detection processing shown in Fig. 1, the result data of the previous lane line detection provides relatively accurate positioning reference information for the lane lines and the vehicle; projecting the previous result data into the current lane line image data and fitting yields the current lane line detection result data, from which relatively accurate positioning information of the current lane lines and the vehicle can be obtained. This solves the problem that the prior art cannot perform sufficiently accurate lane line detection.
In addition, in the embodiments of the present application, the perception data may further include other data, such as map data, which provides additional positioning information for the lane line detection process and thus yields lane line detection result data of higher accuracy.
Further, in some embodiments, as shown in step 104t of Fig. 3b, the current lane line detection result data obtained by the above processing is determined as the lane line template data for the next lane line detection process.
Alternatively, in other embodiments, the lane line detection result data obtained in step 104 may be further checked and optimally adjusted, to ensure that lane line template data with more accurate positioning information is provided for the next lane line detection process.
Fig. 4 illustrates a lane line checking and optimization process that follows the method illustrated in Fig. 1, including:
Step 105: check the current lane line detection result data.
Step 106: if the check succeeds, optimize and adjust the current lane line detection result data to obtain the lane line template data for the next lane line detection process; if the check fails, discard the current lane line detection result data.
As shown in Fig. 5, step 105 includes the following steps:
Step 1051: determine the confidence of the current lane line detection result data according to a confidence model obtained by pre-training.
Specifically, the current lane line detection result data may be provided as input to the confidence model, which outputs the confidence corresponding to that data.
The confidence model is obtained by training a deep neural network in advance on historical lane line detection result data and lane line real data (ground truth), and represents the correspondence between lane line detection result data and confidence.
In training the deep neural network, the historical lane line detection result data is compared with the real data, and the historical detection results are classified or labeled according to the comparison results; for example, detection result data a, c and d are labeled as successful detections, and detection result data b and e as failed detections. A neural network is then trained on the labeled historical detection result data and the real data to obtain the confidence model. The trained confidence model reflects the success probability or failure probability (i.e., the confidence) of lane line detection result data.
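A minimal sketch of this training procedure is given below, with a small scikit-learn MLP standing in for the deep neural network of the embodiment; the 0.2 m success tolerance, the feature layout and all names are assumptions of this sketch.

```python
# Sketch: label historical detection results against ground truth, then
# train a classifier whose positive-class probability acts as confidence.
import numpy as np
from sklearn.neural_network import MLPClassifier

def label_results(errors: np.ndarray, tol: float = 0.2) -> np.ndarray:
    """errors: per-result distance to ground truth; 1 = successful."""
    return (errors <= tol).astype(int)

def train_confidence_model(features: np.ndarray, labels: np.ndarray):
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000)
    model.fit(features, labels)
    return model
```

In use, model.predict_proba(x)[:, 1] would give the success probability of a detection result, which is then compared against the predetermined check condition described in step 1052 below.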
Step 1052: if the obtained confidence meets a predetermined check condition, the check succeeds; if it does not, the check fails.
For example, the check condition may be: the check is determined to be successful if the confidence indicates a success probability greater than or equal to X%; otherwise, the check fails.
Further, in some embodiments of the present application, the confidence model may also be further trained and optimized on the continuously accumulated lane line detection result data and lane line real data; this optimization is similar to the initial training of the confidence model and is not repeated here.
As shown in Fig. 6a, step 106 includes the following processing:
Step 1061: if the check succeeds, expand the lane lines in the current lane line detection result data.
specifically, the process of expanding the lane line may include:
s1, copying and translating the lane lines at the edges according to the lane line structure in the lane line detection result data;
step S2, under the condition that the lane line detection result data can include the copied and translated lane line, keeping the copied and translated lane line and storing new lane line detection result data;
in step S3, when the duplicated and translated lane line cannot be included in the lane line detection result data, the duplicated and translated lane line is discarded.
For example, as shown in Fig. 7, the current lane line detection result data includes two lane lines, CL1 and CL2; by expansion, new lane lines EL1 and EL2 can be obtained. Fig. 8 shows the expanded lane lines displayed on the lane line image data.
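A hedged sketch of the expansion, reduced to lateral offsets in the top view, follows; the nominal 3.75 m width and the lateral limit used for the keep-or-discard test of steps S2 and S3 are assumptions of this sketch.

```python
# Sketch: copy-and-translate the edge lane lines (step S1), keeping a copy
# only if it can still be included in the detection result (steps S2/S3).
def expand_lanes(lane_offsets, lateral_limit=12.0, width=3.75):
    """lane_offsets: sorted lateral positions, e.g. [CL1, CL2] in Fig. 7."""
    expanded = list(lane_offsets)
    left = lane_offsets[0] - width    # copy of the leftmost lane line
    right = lane_offsets[-1] + width  # copy of the rightmost lane line
    if abs(left) <= lateral_limit:    # step S2: keep, e.g. EL1
        expanded.insert(0, left)
    if abs(right) <= lateral_limit:   # step S2: keep, e.g. EL2
        expanded.append(right)
    return expanded                   # out-of-range copies are dropped (S3)
```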
Step 1062: adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions, to obtain the lane line template data for the next lane line detection process.
The adjustment process may refer to step 104S described above.
Fig. 9 shows an example in which, of the expanded lane lines shown in Fig. 8, lane line EL2 is adjusted to obtain an adjusted lane line EL2'; the adjusted EL2' is closer to a straight line than EL2 was before the adjustment.
In some embodiments of the present application, both step 104S and step 1062 may be provided; in other embodiments, only one of them may be provided.
Step 1063: if the check fails, discard the current lane line detection result data.
Further, as shown in step 1064 of Fig. 6b, after the current lane line detection result data is discarded, preset lane line template data is determined as the lane line template data for the next lane line detection process. The preset template may be general lane line template data, lane line template data corresponding to a type of driving environment, or lane line template data of a specific driving environment; for example, template data applicable to all environments, template data for a highway environment, template data for urban roads, or template data of the specific road on which the vehicle is located. The preset lane line template data can be set according to the needs of the specific application scenario.
The preset lane line template data may be pre-stored locally in the lane line detection apparatus, pre-stored in the automatic driving processing device of the vehicle, or stored on a remote server. When the lane line detection apparatus needs the preset template data, it can obtain it by reading it locally or by remote request and reception.
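For illustration, the fallback selection of step 1064 might be sketched as a simple lookup; the environment keys and the default entry are hypothetical.

```python
# Sketch: choose preset lane line template data for the next detection.
PRESET_TEMPLATES = {
    "highway": None,  # template data for highway environments (placeholder)
    "urban": None,    # template data for urban roads (placeholder)
    "default": None,  # general template applicable to all environments
}

def fallback_template(environment: str):
    # Fall back to the general template when no environment-specific
    # template has been preset.
    return PRESET_TEMPLATES.get(environment) or PRESET_TEMPLATES["default"]
```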
Through the optimization and adjustment processing shown in Fig. 4, the embodiments of the present application can obtain lane line template data containing more accurate positioning information. Compared with the template data obtained by the method of Fig. 1 alone, the template data obtained by the method of Fig. 4 gives the lane line detection method provided by the embodiments of the present application higher stability and robustness.
Based on the same inventive concept, an embodiment of the present application further provides a lane line detection apparatus.
Fig. 10 is a block diagram illustrating a structure of a lane line detection apparatus according to an embodiment of the present application, where the lane line detection apparatus includes:
an acquisition unit 11, configured to acquire current perception data of the driving environment of the vehicle, where the current perception data includes current frame image data and positioning data, and to acquire lane line template data, where the lane line template data is the lane line detection result data obtained by the previous lane line detection process;
the perception data further includes at least one of: map data of the current driving environment and laser radar (LIDAR) data; the positioning data includes GPS positioning data and/or inertial navigation positioning data;
an extraction unit 12, configured to extract current lane line image data from the perception data;
a determining unit 13, configured to determine the current lane line detection result data from the lane line image data and the lane line template data, where the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane lines.
The lane line template data and the lane line detection result data are 3D spatial data from a top-view angle.
In some embodiments, the extraction unit 12 extracts the lane line image data from the current frame image data by an object recognition method or by semantic segmentation.
The determining unit 13 determines the current lane line detection result data from the lane line image data and the lane line template data by mapping the lane line template data onto the lane line image data and fitting the mapping result to obtain the current lane line detection result data. Further, the determining unit 13 inputs the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value; the loss function expresses the positional relationship between the lane lines in the template data and those in the image data, and the cost value is the distance between them. While the difference between two successive cost values is greater than a predetermined threshold, the determining unit 13 iteratively modifies the positions of the lane lines in the template data; when the difference is less than or equal to the predetermined threshold, the iteration ends and the current lane line detection result data is obtained. In some application scenarios, the determining unit 13 iteratively modifies the lane line positions using a gradient descent algorithm.
Before determining the current lane line detection result data from the lane line image data and the lane line template data, the determining unit 13 further adjusts the lane line template data: it adjusts the lane lines in the template data according to the current perception data, prior knowledge and/or predetermined constraint conditions, where the prior knowledge or predetermined constraints include physical metric parameters or data expressions regarding the road structure.
Further, the determining unit 13 is also configured to determine the current lane line detection result data as the lane line template data for the next lane line detection process.
In other embodiments, as shown in Fig. 11, the lane line detection apparatus may further include:
a checking unit 14, configured to check the current lane line detection result data;
an optimization unit 15, configured to, when the check by the checking unit 14 succeeds, optimize and adjust the current lane line detection result data to obtain the lane line template data for the next lane line detection process, and, when the check fails, discard the current lane line detection result data.
The checking unit 14 checks the current lane line detection result data by determining the confidence of the data according to a confidence model obtained by pre-training; if the obtained confidence meets a predetermined check condition, the check succeeds; otherwise, the check fails.
Further, as shown in Fig. 12, the lane line detection apparatus provided in the embodiment of the present application may further include a pre-training unit 16, configured to train a deep neural network in advance on historical lane line detection result data and lane line real data to obtain the confidence model; the confidence model represents the correspondence between lane line detection result data and confidence.
The optimization unit 15 optimizes and adjusts the current lane line detection result data by expanding the lane lines in the current lane line detection result data, and by adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions to obtain the lane line template data for the next lane line detection process; the prior knowledge or predetermined constraints include physical metric parameters or data expressions regarding the road structure.
The optimization unit 15 expands the lane lines in the current lane line detection result data by copying and translating the edge lane lines according to the lane line structure in the detection result data; if the copied and translated lane line can be included in the detection result data, it is kept and the new detection result data is stored; if it cannot be included, it is discarded.
Further, the optimization unit 15 is also configured to determine preset lane line template data as the lane line template data for the next lane line detection process after the current lane line detection result data is discarded.
With the lane line detection apparatus provided in the embodiments of the present application, the result data of the previous lane line detection provides relatively accurate positioning reference information for the lane lines and the vehicle; the previous result data is projected into the current lane line image data and fitted to obtain the current lane line detection result data, from which relatively accurate positioning information of the current lane lines and the vehicle can be obtained. This solves the problem that the prior art cannot perform sufficiently accurate lane line detection.
Based on the same inventive concept, an embodiment of the present application further provides a lane line detection apparatus.
As shown in Fig. 13, the lane line detection apparatus provided in the embodiment of the present application includes a processor 131 and at least one memory 132. The at least one memory stores at least one machine-executable instruction, and the processor executes the at least one machine-executable instruction to:
acquire current perception data of the driving environment of the vehicle, where the current perception data includes current frame image data and current positioning data;
acquire lane line template data, where the lane line template data is the lane line detection result data obtained by the previous lane line detection process;
extract current lane line image data from the perception data; and
determine the current lane line detection result data from the lane line image data and the lane line template data, where the current lane line detection result data includes data expressing the relative positional relationship between the vehicle and the lane lines.
The lane line template data and the lane line detection result data are 3D spatial data from a top-view angle. The perception data further includes at least one of: map data of the current driving environment and laser radar (LIDAR) data. The positioning data includes GPS positioning data and/or inertial navigation positioning data.
In some embodiments, the processor 131 executes the at least one machine-executable instruction to extract the lane line image data from the current frame image data by an object recognition method or a semantic segmentation method.
The processor 131 executes the at least one machine-executable instruction to determine the current lane line detection result data from the lane line image data and the lane line template data by mapping the lane line template data onto the lane line image data and fitting the mapping result to obtain the current lane line detection result data. The processing may specifically include: inputting the lane line image data and the lane line template data into a predetermined loss function, which outputs a cost value, where the loss function expresses the positional relationship between the lane lines in the template data and those in the image data, and the cost value is the distance between them; while the difference between two successive cost values is greater than a predetermined threshold, iteratively modifying the positions of the lane lines in the lane line template data; and when the difference is less than or equal to the predetermined threshold, ending the iteration and obtaining the current lane line detection result data. In some application scenarios, the processor 131 may execute the at least one machine-executable instruction to iteratively modify the lane line positions using a gradient descent algorithm.
Before determining the current lane line detection result data from the lane line image data and the lane line template data, the processor 131 executes the at least one machine-executable instruction to adjust the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions; the prior knowledge or predetermined constraints include physical metric parameters or data expressions regarding the road structure.
The processor further executes the at least one machine-executable instruction to determine the current lane line detection result data as the lane line template data for the next lane line detection process.
In other embodiments, the processor 131 further executes the at least one machine-executable instruction to: check the current lane line detection result data; if the check succeeds, optimize and adjust the current lane line detection result data to obtain the lane line template data for the next lane line detection process; and if the check fails, discard the current lane line detection result data.
The processor 131 executes the at least one machine-executable instruction to check the current lane line detection result data by determining the confidence of the data according to a confidence model obtained by pre-training; if the obtained confidence meets a predetermined check condition, the check succeeds; otherwise, the check fails.
The processor 131 further executes the at least one machine-executable instruction to perform the pre-training that obtains the confidence model: a deep neural network is trained in advance on historical lane line detection result data and lane line real data, and the resulting confidence model represents the correspondence between lane line detection result data and confidence.
The processor 131 executes the at least one machine-executable instruction to optimize and adjust the current lane line detection result data by expanding the lane lines in the current lane line detection result data, and by adjusting the lane lines in the lane line template data according to the current perception data, prior knowledge and/or predetermined constraint conditions to obtain the lane line template data for the next lane line detection process; the prior knowledge or predetermined constraints include physical metric parameters or data expressions regarding the road structure.
The processor 131 executes the at least one machine-executable instruction to expand the lane lines in the current lane line detection result data by copying and translating the edge lane lines according to the lane line structure in the detection result data; if the copied and translated lane line can be included in the detection result data, it is kept and the new detection result data is stored; if it cannot be included, it is discarded.
After the current lane line detection result data is discarded, the processor 131 executes the at least one machine-executable instruction to determine preset lane line template data as the lane line template data for the next lane line detection process.
With the lane line detection apparatus provided in the embodiments of the present application, the result data of the previous lane line detection provides relatively accurate positioning reference information for the lane lines and the vehicle; the previous result data is projected into the current lane line image data and fitted to obtain the current lane line detection result data, from which relatively accurate positioning information of the current lane lines and the vehicle can be obtained. This solves the problem that the prior art cannot perform sufficiently accurate lane line detection.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.