GB2611117A - Method and system for lane tracking for an autonomous vehicle - Google Patents
- Publication number: GB2611117A (application GB2117061.8 / GB202117061A)
- Authority: GB (United Kingdom)
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/70: Image analysis; determining position or orientation of objects or cameras
- G06V10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; pattern tracking
- G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82: Image or video recognition or understanding using neural networks
- G06V20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- G06T2207/10016: Image acquisition modality; video; image sequence
- G06T2207/20081: Special algorithmic details; training; learning
- G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30252: Subject of image; vehicle exterior; vicinity of vehicle
- G06T2207/30256: Subject of image; lane; road marking
Abstract
The application relates to the field of autonomous vehicles and discloses a method and system for training a lane tracking system and for lane tracking for an autonomous vehicle. During the training phase, the lane tracking system receives ground truth and measured values corresponding to lane boundary detection points and determines ground truth and measured clothoid points. Thereafter, co-efficient values of clothoid parameters are determined for the measured clothoid points to model lane boundaries, and Kalman filter parameters are then determined for the co-efficient values to track the lane boundaries, using Long Short-Term Memory (LSTM) networks. Further, the co-efficient values of the clothoid parameters are updated using the Kalman filter parameters; the measured clothoid point is reconstructed from the updated values and used to track the lane boundaries. Further, the training error, based on the difference between the reconstructed measured clothoid points and the corresponding ground truth set, is minimised in each cycle until it falls below a predefined threshold. The trained lane tracking system is then deployed for lane tracking in the dynamic environment.
Description
METHOD AND SYSTEM FOR LANE TRACKING FOR AN AUTONOMOUS
VEHICLE
Technical Field
[0001] The present subject matter relates generally to the field of autonomous vehicles, and more particularly, but not exclusively, to a method and a system for lane tracking for an autonomous vehicle.
Background
[0002] Nowadays, automotive industries have started to move towards autonomous vehicles. Autonomous vehicles, as used in this description and claims, are vehicles that are capable of sensing the environment around them in order to move on the roads with or without human intervention. The autonomous vehicles sense the environment with the help of sensors configured in the autonomous vehicles such as Laser, Light Detection and Ranging (LIDAR), Global Positioning System (GPS), computer vision and the like. The autonomous vehicles rely highly on lane detection and tracking on the road for navigating smoothly.
[0003] Existing lane detection and tracking techniques may use Kalman filters for tracking the lane boundaries. In particular, Kalman filters may be used to predict lane parameters and to smooth the output of a lane tracker which tracks the lane boundaries. Generally, Kalman filters are chosen for tracking lane boundaries because the Kalman filter has the capability of estimating the dynamics of state vectors, even in the presence of noisy measurements or noisy processes. Major parameters that help in determining the Kalman filter are the process noise covariance matrix (Q) and the measurement noise covariance matrix (R). The existing lane detection and tracking techniques that rely on Kalman filters for tracking lane boundaries use predefined or fixed Q and R values for determining the Kalman filters. In reality, Q and R are dynamically varying parameters that depend on the scenario, the detectors used for measurement, the kind of process used for measurement and tracking, and the like. However, the existing techniques fail to incorporate the dynamic nature of Q and R, and instead use fixed or predefined values for Q and R, which affects the accuracy of prediction performed based on the Kalman filters for lane tracking. Inaccurate lane tracking may generate incorrect steering commands and warning signals to the autonomous vehicle, which may jeopardize vehicle safety.
[0004] Additionally, since the Q and R values are fixed in the existing techniques, these techniques lack the flexibility to incorporate changes occurring in the state over time, thus restricting the predictions to only a few types or a small range of lane structures.
[0005] Therefore, there is a need for a method that can perform lane tracking using Kalman filters, with enhanced accuracy and flexibility.
[0006] The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the disclosure and should not be taken as an acknowledgement or any form of suggestion that this information forms prior art already known to a person skilled in the art.
Summary
[0001] Disclosed herein is a method of training a lane tracking system for an autonomous vehicle. The method comprises receiving, by a lane tracking system, ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system associated with the lane tracking system. Further, the method includes determining, for a ground truth clothoid point and a measured clothoid point formed using a ground truth set and a measured set, respectively, co-efficient values of clothoid parameters, to model lane boundaries of a lane. The ground truth set comprises a subset of continuous lane boundary detection points and the corresponding ground truth values, and the measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values. Thereafter, the method includes determining Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. The Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks. Upon determining the Kalman filter parameters, the method includes updating the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters. Further, the method includes reconstructing the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle. Finally, the method includes minimizing a training error based on a difference between the reconstructed measured clothoid point and the corresponding ground truth set, in each cycle, until the training error is below a predefined threshold.
[0002] Further, the present disclosure includes a lane tracking system for an autonomous vehicle. The lane tracking system comprises a processor and a memory communicatively coupled to the processor. The memory stores processor instructions, which, on execution, cause the processor to train the lane tracking system, wherein for training, the processor is configured to receive ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system associated with the lane tracking system. Further, the processor determines, for a ground truth clothoid point and a measured clothoid point formed using a ground truth set and a measured set, respectively, co-efficient values of clothoid parameters, to model lane boundaries of a lane. The ground truth set comprises a subset of continuous lane boundary detection points and the corresponding ground truth values, and the measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values. Thereafter, the processor determines Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. The Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks. Upon determining the Kalman filter parameters, the processor updates the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters. Further, the processor reconstructs the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle. Finally, the processor minimizes a training error based on a difference between the reconstructed measured clothoid point and the corresponding ground truth set, in each cycle, until the training error is below a predefined threshold.
[0003] Further, the present disclosure discloses a method of lane tracking for an autonomous vehicle. The method comprises receiving, by a lane tracking system, measured values corresponding to a plurality of lane boundary detection points, from a lane boundary detecting system associated with the lane tracking system. Thereafter, the method includes determining, for a measured clothoid point formed using a measured set, co-efficient values of clothoid parameters, to model lane boundaries of a lane. The measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values. Subsequently, the method includes determining Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. The Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks. Upon determining the Kalman filter parameters, the method includes updating the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters. Finally, the method includes reconstructing the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle.
[0004] Furthermore, the present disclosure discloses a lane tracking system for an autonomous vehicle. The lane tracking system comprises a processor and a memory communicatively coupled to the processor. The memory stores processor instructions, which, on execution, cause the processor to receive measured values corresponding to a plurality of lane boundary detection points, from a lane boundary detecting system associated with the lane tracking system. Thereafter, the processor determines, for a measured clothoid point formed using a measured set, co-efficient values of clothoid parameters, to model lane boundaries of a lane. The measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values. Subsequently, the processor determines Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. The Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks. Upon determining the Kalman filter parameters, the processor updates the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters. Finally, the processor reconstructs the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. Each reconstructed measured clothoid point enables the lane tracking system to track the lane boundaries for the autonomous vehicle.
[0007] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
Brief Description of the Accompanying Diagrams
[0008] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
[0009] FIG.1 shows an exemplary architecture for performing lane tracking for an autonomous vehicle, in accordance with some embodiments of the present disclosure.
[0010] FIG.2A shows a detailed block diagram of an exemplary lane tracking system for lane tracking for an autonomous vehicle in accordance with some embodiments of the present disclosure.
[0011] FIG.2B illustrates exemplary plurality of lane boundary detection points positioned continuously according to exemplary ground truth values in accordance with some embodiments of the present disclosure.
[0012] FIG.2C illustrates exemplary plurality of lane boundary detection points positioned continuously according to exemplary measured values, in accordance with some embodiments of the present disclosure.
[0013] FIG.2D shows exemplary lanes tracked using exemplary reconstructed clothoid points, in accordance with some embodiments of the present disclosure.
[0014] FIG.2E shows plurality of exemplary lanes tracked, that belong to a road on which the autonomous vehicle is moving, in accordance with some embodiments of the present disclosure.
[0015] FIG.3A shows a flowchart illustrating a method of training a lane tracking system for an autonomous vehicle, in accordance with some embodiments of the present disclosure.
[0016] FIG.3B shows a flowchart illustrating a method of lane tracking for an autonomous vehicle, in accordance with some embodiments of the present disclosure.
[0017] FIG.4 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
[0018] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Detailed Description
[0019] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0020] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
[0021] The terms "comprises", "comprising", "includes", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that includes a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "comprises... a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
[0022] Disclosed herein is a method and a system for lane tracking for an autonomous vehicle. In some embodiments, lane tracking involves detecting the presence of one or more lanes on the road along which the autonomous vehicle is moving, and tracking the detected lanes in order to generate one or more commands to the autonomous vehicle. As an example, the one or more commands may be steering commands, braking commands, lane shifting commands, overtaking commands, warning signals and the like, that help in movement of the autonomous vehicle. The present disclosure provides an Artificial Intelligence (AI) based method that performs lane tracking for the autonomous vehicle in a way that addresses one or more problems of the existing techniques stated in the background section of the present disclosure. Since the method is AI based, the lane tracking system disclosed in the present disclosure for tracking lanes for the autonomous vehicle requires training prior to deployment in a dynamic environment.
[0023] In some embodiments, the lane tracking system may be trained using ground truth values corresponding to a plurality of lane boundary detection points along with the measured values corresponding to the plurality of lane boundary detection points, determined using image frames corresponding to various on-road scenarios and different types of lanes. In some embodiments, ground truth values may refer to original values, or in other words, information which is known to be real and provided based on direct observation. However, measured values are values that are determined or predicted by a system, and are not based on direct observation like the ground truth values. Therefore, the measured values and ground truth values may be the same or different, depending on the accuracy of the measured values. In some embodiments, a subset of the measured values may be used to form a measured clothoid point and a subset of the ground truth values may be used to form a ground truth clothoid point. During the training phase, the lane tracking system may be trained to determine Kalman filters to update co-efficient values of clothoid parameters of the measured clothoid point and then reconstruct the measured clothoid point using the updated co-efficient values of the clothoid parameters. Thereafter, the lane tracking system may be trained to determine the training error and minimize the training error for each cycle. The training phase of the lane tracking system for tracking lanes for the autonomous vehicle based on clothoid points is explained in detail in the later part of the detailed description of the present disclosure, with suitable figures.
[0024] During the training phase, the lane tracking system may determine Kalman filter parameters using one or more Long Short-Term Memory (LSTM) techniques, which are AI based techniques. LSTMs are capable of storing memory associated with historic events and learning long-term dependencies based on the stored memory. In the present disclosure, the lane tracking system may be trained during the training phase to determine a measurement noise covariance matrix (R) and a process noise covariance matrix (Q) dynamically, using one or more same or different LSTM networks. For instance, the measurement noise covariance matrix (R) may be dynamically determined using a first LSTM network and the process noise covariance matrix (Q) may be dynamically determined using a second LSTM network. In some other embodiments, the measurement noise covariance matrix (R) and the process noise covariance matrix (Q) may be dynamically determined using the same LSTM network. In reality, Q and R vary dynamically based on scenarios, detectors used for measurement, the kind of process used for measurement and tracking, and the like. Using the LSTM networks to determine Q and R enables their determination based on historic data captured from past cycles of the autonomous vehicle. Therefore, the LSTM network may determine Q and R by analyzing changes over time, which leads to a joint evolution of Q and R. Therefore, the Q and R values determined using the LSTM network(s) are not random predefined or static values but are specific to the current scenario captured in the image frame. Since the Q and R values are determined using LSTM networks that are data driven, i.e. analyzing captured historic data from past cycles and using the analysis results to determine Q and R for the current scenario, the determined values of Q and R are accurate and robust. Moreover, since the LSTM networks are trained during the training phase using ground truth values along with the measured values, the determined values of Q and R are closer to the ground truth values, which adds to the accuracy levels of the dynamically determined Q and R values. Such dynamically determined accurate Q and R values enable determination of an accurate Kalman filter for lane tracking, which in turn results in accurate updated co-efficient values of clothoid parameters and an accurate reconstructed measured clothoid point.
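As a minimal sketch of this arrangement (assuming PyTorch, diagonal covariance matrices, and illustrative layer sizes and names, none of which are specified by this disclosure), the two LSTM networks might look like:

```python
# Illustrative sketch, not the patented implementation: two small LSTM
# networks that map a history of clothoid co-efficient vectors to diagonal
# noise covariance matrices R and Q.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseCovarianceLSTM(nn.Module):
    """Maps a sequence of state vectors to a diagonal covariance matrix."""
    def __init__(self, state_dim: int = 3, hidden_dim: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, coeff_seq):
        # coeff_seq: (batch, time, state_dim) history of [beta, c0, c1]
        out, _ = self.lstm(coeff_seq)
        diag = F.softplus(self.head(out[:, -1]))  # keep variances positive
        return torch.diag_embed(diag)             # (batch, dim, dim)

# One network per covariance, per the embodiment described above:
r_net = NoiseCovarianceLSTM()  # first LSTM network -> R
q_net = NoiseCovarianceLSTM()  # second LSTM network -> Q
```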
[0025] The training error that may occur due to the reconstructed clothoid point may be determined based on a difference between the reconstructed clothoid points and the corresponding ground truth set, i.e. the subset of the ground truth values of the corresponding lane boundary detection points, in each cycle. Such training error may be minimized until the training error is below a predefined threshold. Therefore, even the slightest training error that may negatively impact the accuracy of lane tracking is reduced by minimizing the training error during the training phase. In the present disclosure, the training error may be obtained by using the L2 norm between the clothoid points, and not by using the clothoid co-efficients as in some existing techniques. Therefore, in the present disclosure, a new way of minimizing error is followed, which involves comparing the reconstructed measured clothoid point with the ground truth values of the corresponding lane boundary detection points initially used for forming the measured clothoid point that is reconstructed. This enhances the error minimization and achieves it in fewer cycles, when compared to the conventional error minimization technique that involves comparing updated co-efficient values of the clothoid parameters of the measured clothoid point with the co-efficient values of the clothoid parameters of the ground truth clothoid point.
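For illustration, a minimal sketch of this point-wise L2 error (assuming NumPy and that both point sets are equal-length arrays of (x, y) coordinates; the function name is illustrative):

```python
import numpy as np

def training_error(reconstructed_pts, ground_truth_pts):
    """L2 norm between the reconstructed measured clothoid points and the
    ground truth set of (x, y) detection points; the comparison is made
    point-wise, not between clothoid co-efficients."""
    diff = np.asarray(reconstructed_pts) - np.asarray(ground_truth_pts)
    return float(np.linalg.norm(diff))
```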
[0026] Additionally, since Q and R essentially indicate process noise and measurement noise respectively, determination of Q and R requires data from one or more sensors configured in the autonomous vehicle. Usage of LSTM networks to determine Q and R provides flexibility to the present disclosure to include a sensor error model in the LSTM network. Such a sensor error model may provide low-level features related to the one or more sensors, such as the amount of noise involved in the measurement, the amount of noise involved in the measurement process, and the like. Such low-level features help in directly correcting the sensor errors using the sensor error model, and result in an enhancement of the accuracy of the dynamically determined values of Q and R. This in turn helps in performing lane tracking with utmost accuracy. Therefore, performing lane tracking based on the clothoid parameters using the LSTM networks and Kalman filters not only enables accurate lane tracking, but also reduces generation of incorrect steering commands and warning signals, enhancing the safety of the autonomous vehicle.
[0027] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the disclosure.
[0028] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0029] FIG.1 shows an exemplary architecture for lane tracking for an autonomous vehicle in accordance with some embodiments of the present disclosure.
[0030] The architecture 100 includes an autonomous vehicle 101, a lane tracking system 103, a lane boundary detecting system 105, a sensor 113-1 to a sensor 113-n (also referred to as one or more sensors 113), and an image capturing device 115. As an example, the autonomous vehicle 101 may be a car, a bus, a truck, a lorry and the like, which are integrated with Electronic Control Units (ECUs) and systems capable of communicating through the in-vehicle network of the autonomous vehicles. In some embodiments, the lane boundary detecting system 105 may be associated with the lane tracking system 103 via a communication network (not shown in FIG.1). The communication network may be at least one of a wired communication network and a wireless communication network. In some embodiments, both the lane boundary detecting system 105 and the lane tracking system 103 may be configured in the autonomous vehicle 101 to perform lane tracking for the autonomous vehicle 101. In some other embodiments, both the lane boundary detecting system 105 and the lane tracking system 103 may be externally associated with the ECUs of the autonomous vehicle 101 to perform lane tracking for the autonomous vehicle 101. In yet other embodiments, one of the systems may be configured in the autonomous vehicle 101 and the other system may be externally associated with the ECUs of the autonomous vehicle 101 to perform lane tracking for the autonomous vehicle 101.
[0031] In some embodiments, the autonomous vehicle 101 may be configured with the one or more sensors 113 and the image capturing device 115. The autonomous vehicle 101 may sense the environment with the help of the one or more sensors 113 such as Laser, Light Detection and Ranging (LIDAR), Global Positioning System (GPS), computer vision, and the like. Further, the image capturing device 115 may be mounted to the autonomous vehicle 101, to capture image frames of an area in front of the autonomous vehicle 101. In some embodiments, the image capturing device 115 may include, but not limited to, a Red-Green-Blue (RGB) camera, a monochrome camera, a depth camera, a 360-degree camera, a night vision camera and the like. In some embodiments, the autonomous vehicle 101 may be mounted with more than one image capturing device 115. The image capturing device(s) 115 may be mounted in an area of the autonomous vehicle 101 such that the area in front of the autonomous vehicle 101 is properly covered in the image frames. For instance, the image capturing device(s) 115 may be mounted on top of the autonomous vehicle 101, in the headlight region of the autonomous vehicle 101, on the external rear view mirrors, and the like.
[0032] In some embodiments, the lane tracking system 103 is an Artificial Intelligence (AI) based system which may be trained to perform lane tracking for the autonomous vehicle 101 prior to deployment of the lane tracking system 103 in a dynamic environment when the autonomous vehicle 101 is navigating. In some embodiments, during the training phase, the lane tracking system 103 may receive ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, from the lane boundary detecting system 105. In some embodiments, ground truth values may refer to original values, or in other words, information which is known to be real and provided based on direct observation. However, measured values are values that are determined or predicted by a system, and are not based on direct observation like the ground truth values. In some embodiments, lane boundaries may be lines that mark the limit of a lane. Each lane may have a left lane boundary and a right lane boundary enclosing the lane. In some embodiments, the plurality of lane boundary detection points may be points that indicate a boundary region of a road along which the autonomous vehicle 101 moves. In other words, the plurality of lane boundary detection points correspond to the left lane boundary and the right lane boundary of a plurality of lanes belonging to a road along which the autonomous vehicle 101 is moving. A subset of continuous lane boundary detection points and the corresponding ground truth values may be referred to as a ground truth set, and a subset of continuous lane boundary detection points and the corresponding measured values may be referred to as a measured set. Thereafter, the lane tracking system 103 may generate a ground truth clothoid point using the ground truth set and a measured clothoid point using the measured set. In some embodiments, during this training phase, the lane tracking system 103 may be trained to select the ground truth set and the measured set required for generating the ground truth clothoid point and the measured clothoid point, respectively. Clothoids are generally spiral curves whose curvature varies linearly over arc length, which allows a smooth movement of a steering wheel when the autonomous vehicle 101 is moving on road segments with different horizontal curvature. The lane tracking system 103 may thereafter be trained to determine co-efficient values of clothoid parameters for the ground truth clothoid point and the measured clothoid point, to model the lane boundaries of the lane along which the autonomous vehicle 101 would move. In some embodiments, the clothoid parameters may include, but not limited to, the initial curvature of the lane boundary (c0), the curvature rate (c1) of the lane boundary, and the heading angle (β) with respect to the autonomous vehicle's driving direction. In some embodiments, the initial curvature of the lane boundary (c0) may be defined as the first curvature angle of the lane determined in an image frame, the curvature rate (c1) of the lane boundary may be defined as the rate at which the curvature of the lane is changing in the image frame when compared to the initial curvature, and the heading angle (β) may be defined as the angle of curvature of the lane with respect to the autonomous vehicle on that lane.
Thereafter, the lane tracking system 103 may be trained to determine Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. In some embodiments, the lane tracking system 103 may determine the Kalman filter parameters using Long Short-Term Memory (LSTM) networks. Upon determining the Kalman filter parameters, the lane tracking system 103 may be trained to update the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters. Thereafter, the lane tracking system 103 may be trained to reconstruct the measured clothoid point using the corresponding updated co-efficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point enables the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101. The lane tracking system 103 may then determine a training error by computing a difference between the reconstructed clothoid point and the corresponding ground truth set. During the training phase, the lane tracking system 103 may minimize the training error determined in each cycle, until the training error is below a predefined threshold.
[0033] The lane tracking system 103 thus trained may be used in the dynamic environment when the autonomous vehicle 101 is moving on the road. In some embodiments, the lane tracking system 103 may include a processor 107, an Input/Output (I/O) interface 109 and a memory 111 as shown in FIG.1. The I/O interface 109 of the lane tracking system 103 may receive measured values corresponding to the plurality of lane boundary detection points from the lane boundary detecting system 105. The plurality of lane boundary detection points correspond to the left lane boundary and the right lane boundary of the lane along which the autonomous vehicle 101 is moving. The processor 107 may generate a measured clothoid point based on a measured set comprising a subset of continuous lane boundary detection points and the corresponding measured values. In some embodiments, the processor 107 may dynamically select the subset of continuous lane boundary detection points to form the measured set. In some embodiments, the processor 107 may determine co-efficient values of clothoid parameters for the measured clothoid point, to model lane boundaries of the lane. Thereafter, the processor 107 may determine Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid point, to track the lane boundaries of the lane, and update the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the Kalman filter parameters. Finally, the processor 107 may reconstruct the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point may enable the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101.
[0034] FIG.2A shows a detailed block diagram of a lane tracking system 103 for an autonomous vehicle, in accordance with some embodiments of the present disclosure.
[0035] In some implementations, the lane tracking system 103 may include data 203 and modules 205. As an example, the data 203 is stored in the memory 111 of the lane tracking system 103 as shown in FIG.2A. In one embodiment, the data 203 may include training data 207, clothoid points data 209, Kalman filter parametric data 211, reconstructed data 213, and other data 215. In the illustrated FIG.2A, the modules 205 are described herein in detail.
[0036] In some embodiments, the data 203 may be stored in the memory 111 in the form of various data structures. Additionally, the data 203 can be organized using data models, such as relational or hierarchical data models. The other data 215 may store data, including temporary data and temporary files, generated by the modules 205 for performing the various functions of the lane tracking system 103.
[0037] In some embodiments, the training data 207 may include data used for training the lane tracking system 103 for lane tracking of an autonomous vehicle 101. For instance, the training data 207 may include, but not limited to, ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, ground truth clothoid points and measured clothoid points generated using a ground truth set and a measured set respectively, Kalman filter parameters and co-efficient values of clothoid parameters determined for the measured clothoid points, updated co-efficient values of the clothoid parameters, reconstructed measured clothoid points, and training errors, that are used for training the lane tracking system 103.
[0038] In some embodiments, the clothoid points data 209 may include data related to the clothoid points generated in a dynamic environment, when the autonomous vehicle 101 is moving on the road. As an example, the clothoid points data 209 may include, but not limited to, a measured clothoid point, a measured set comprising a plurality of subsets of continuous lane boundary detection points and corresponding measured values used for generating the measured clothoid point, and co-efficient values of clothoid parameters determined for the measured clothoid point.
[0039] In some embodiments, the Kalman filter parametric data 211 may include, but not limited to, Kalman filter parameters determined for the co-efficient values of the clothoid parameters determined for the measured clothoid point, using Long Short-Term Memory (LSTM) networks.
[0040] In some embodiments, LSTM networks are a special kind of Recurrent Neural Network (RNN), capable of learning long-term dependencies. LSTMs are explicitly designed to avoid the long-term dependency problem. All RNNs have the form of a chain of repeating modules of neural network. In standard RNNs, this repeating module will have a very simple structure, such as a single tanh layer. LSTMs also have this chain-like structure, but the repeating module of the LSTMs has a different structure when compared to general RNNs. Instead of having a single neural network layer, there are four neural network layers interacting in a special manner. The LSTM has the ability to remove or add information to a cell state, carefully regulated by structures called gates. Gates are a way to optionally let information through. For instance, an LSTM has three of these gates to protect and control the cell state: (a) Input gate - decides what new information is going to be stored in the cell state, (b) Forget gate - decides what information is going to be thrown away from the cell state, and (c) Output gate - decides what information is going as output.
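For illustration only, a minimal NumPy sketch of a single LSTM step with the three gates described above (the parameter shapes and names are assumptions, not part of this disclosure):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the parameters of the four interacting
    layers; the input, forget and output gates regulate the cell state."""
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
    c = f * c_prev + i * np.tanh(g)               # update the cell state
    h = o * np.tanh(c)                            # output (hidden state)
    return h, c
```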
[0041] In some embodiments, the reconstructed data 213 may include, but not limited to, reconstructed measured clothoid point, and updated co-efficient values of the clothoid parameters determined for the measured clothoid point that are used for reconstructing the measured clothoid point.
[0042] In some embodiments, the data 203 stored in the memory 111 may be processed by the modules 205 of the lane tracking system 103. The modules 205 may be stored within the memory 111. In an example, the modules 205, communicatively coupled to the processor 107 of the lane tracking system 103, may also be present outside the memory 111 as shown in FIG.2A and implemented as hardware. As used herein, the term modules 205 may refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
[0043] In some embodiments, the modules 205 may include, for example, a receiving module 221, a co-efficient value determining module 223, a Kalman filter determining module 225, a reconstructing module 227, a learning module 229 and other modules 231. The other modules 231 may be used to perform various miscellaneous functionalities of the lane tracking system 103. It will be appreciated that such aforementioned modules 205 may be represented as a single module or a combination of different modules.
[0044] The lane tracking system 103 may be trained to perform lane tracking for the autonomous vehicle 101 prior to deployment of the lane tracking system 103 in a dynamic environment when the autonomous vehicle 101 is navigating.
[0045] In some embodiments, during the training phase, the receiving module 221 may receive ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system 105 associated with the lane tracking system 103. In some embodiments, the plurality of lane boundary detection points correspond to a left lane boundary and a right lane boundary of a plurality of lanes belonging to a road along which the autonomous vehicle 101 is moving. In some embodiments, the lane boundary detecting system 105 may determine the plurality of lane boundary detection points using at least one of lane data received from one or more sensors 113 configured in the autonomous vehicle 101 and an image frame of the lane received in real-time. In some embodiments, the image frame of the lane is received from an image capturing device 115 associated with the autonomous vehicle 101. In some other embodiments, the at least one of the lane data and the image frame of the lane are retrieved from a database configured to store the lane data and the image frame captured in real-time. As an example, the lane data may include, but not limited to, lane markings, lane pattern, lane color, number of lanes, and the like. FIG.2B shows an exemplary figure illustrating an exemplary plurality of lane boundary detection points 233 positioned continuously according to exemplary ground truth values. Since the ground truth values corresponding to the exemplary plurality of lane boundary detection points 233 are original values, the line indicated in white over the lanes in FIG.2B is a smooth line that superimposes properly over the lanes of the road. FIG.2C shows an exemplary figure illustrating an exemplary plurality of lane boundary detection points 235 positioned continuously according to exemplary measured values. Since the measured values corresponding to the exemplary plurality of lane boundary detection points 235 are measured by the lane tracking system 103 which is being trained, the line indicated in white over the lanes in FIG.2C is not smooth and also does not superimpose properly over the lanes of the road.
[0046] Further, in some embodiments, during the training phase, the co-efficient value determining module 223 may select a subset of continuous lane boundary detection points and the corresponding ground truth values as a ground truth set, and a subset of continuous lane boundary detection points and the corresponding measured values as a measured set. The co-efficient value determining module 223 may generate a ground truth clothoid point using the ground truth set and a measured clothoid point using the measured set. In some embodiments, a clothoid point is generally generated using a predefined number of continuous lane boundary detection points. Therefore, as the autonomous vehicle 101 is moving along a lane of the road, the ground truth sets and the measured sets will be selected continuously one after the other for generating the respective ground truth clothoid points and measured clothoid points. An exemplary ground truth set comprising "N" lane boundary detection points with ground truth values may be as shown below:

[(x0, y0), (x1, y1), (x2, y2), ..., (xN-1, yN-1)]

[0047] Similarly, an exemplary measured set comprising "M" lane boundary detection points with measured values may be as shown below:

[(x0, y0), (x1, y1), (x2, y2), ..., (xM-1, yM-1)]

[0048] Thereafter, during the training phase, the co-efficient value determining module 223 may be trained to determine co-efficient values of clothoid parameters for the ground truth clothoid point and the measured clothoid point, to model the lane boundaries of the lane along which the autonomous vehicle 101 would move. In some embodiments, the clothoid parameters may include, but not limited to, the initial curvature of the lane boundary (c0), the curvature rate (c1) of the lane boundary, and the heading angle (β) with respect to the autonomous vehicle's driving direction. In some embodiments, the initial curvature of the lane boundary (c0) may be defined as the first curvature angle of the lane determined in an image frame, the curvature rate (c1) of the lane boundary may be defined as the rate at which the curvature of the lane is changing in the image frame when compared to the initial curvature, and the heading angle (β) may be defined as the angle along which the autonomous vehicle is expected to move ahead with respect to the curvature of the lane.
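For illustration, a minimal sketch of selecting successive sets (assuming the detection point values arrive as an ordered list of (x, y) pairs; the helper name is hypothetical):

```python
def make_sets(boundary_points, n):
    """Partition continuous lane boundary detection points (x, y) into
    successive sets of n points; each set forms one clothoid point."""
    return [boundary_points[i:i + n]
            for i in range(0, len(boundary_points) - n + 1, n)]
```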
[0049] In some embodiments, since clothoid points cannot be evaluated in closed form, the co-efficient value determining module 223 may be trained to determine the ground truth clothoid point and the measured clothoid point using the below Equation 1 (a third-order approximation of the clothoid):

y = y_offset + β*x + (c0/2)*x^2 + (c1/6)*x^3     Equation 1

[0050] In the above Equation 1,
- x and y refer to lane boundary detection points of one of the ground truth set or the measured set;
- c0 refers to the initial curvature of the lane boundary;
- c1 refers to the curvature rate of the lane boundary;
- β refers to the heading angle with respect to the autonomous vehicle's driving direction; and
- y_offset refers to the initial lateral offset between the lane boundaries and the autonomous vehicle 101 (ego vehicle).
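A minimal sketch evaluating this clothoid model (assuming NumPy and the Equation 1 reconstructed above; the function name is illustrative):

```python
import numpy as np

def clothoid_y(x, y_offset, beta, c0, c1):
    """Equation 1: lateral position y of the lane boundary as a function of
    longitudinal distance x, under the third-order clothoid approximation."""
    x = np.asarray(x, dtype=float)
    return y_offset + beta * x + 0.5 * c0 * x**2 + (c1 / 6.0) * x**3
```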
[0051] In some embodiments, the co-efficient value determining module 223 may be trained to determine the co-efficient values of clothoid parameters for the ground truth clothoid point using the below Equation 2:

[β, c0, c1]^T = (A^T * A)^-1 * A^T * Δy     Equation 2

[0052] In the above Equation 2, A is the design matrix formed from the lane boundary detection points of the ground truth set, with one row [x_i, x_i^2/2, x_i^3/6] per detection point. The delta values Δy of the lane detection boundary points of the ground truth set are determined by subtracting the first detection point, i.e. Δy_i = y_i - y_0. Further,
- c0 refers to the initial curvature of the lane boundary;
- c1 refers to the curvature rate of the lane boundary; and
- β refers to the heading angle with respect to the autonomous vehicle's driving direction.
[0053] Using the above Equation 2, the co-efficient value determining module 223 may determine co-efficient values of clothoid parameters of the measured clothoid point as well. In some embodiments, the co-efficient values of clothoid parameters of the ground truth clothoid point and the measured clothoid point represent state of the boundaries of the lane along which the autonomous vehicle 101 is moving during the training phase.
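A minimal sketch of this least-squares fit (assuming NumPy and the Equation 2 reconstructed above; the function name is illustrative):

```python
import numpy as np

def fit_clothoid(xs, ys):
    """Equation 2: least-squares estimate (A^T A)^-1 A^T (delta y) of the
    co-efficient values [beta, c0, c1] from one set of detection points."""
    xs = np.asarray(xs, dtype=float)
    dy = np.asarray(ys, dtype=float) - ys[0]           # delta values
    A = np.column_stack([xs, xs**2 / 2.0, xs**3 / 6.0])
    beta, c0, c1 = np.linalg.lstsq(A, dy, rcond=None)[0]
    return beta, c0, c1
```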
[0054] Thereafter, the Kalman filter determining module 225 may be trained to determine Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. In some embodiments, the Kalman filter determining module 225 may determine the Kalman filter parameters using one or more Long Short-Term Memory (LSTM) networks. In some embodiments, the Kalman filter determining module 225 may initially provide the co-efficient values of the clothoid parameters determined for the measured clothoid point as an input to a first LSTM network. In some embodiments, the first LSTM network may be trained based on historical co-efficient values of the clothoid parameters determined for clothoid points formed using historical measured and ground truth sets. Thereafter, the Kalman filter determining module 225 may determine a measurement noise covariance matrix (R) using the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the first LSTM network. Upon determining "R", the Kalman filter determining module 225 may predict a state transition (Yp) of the co-efficient values of the clothoid parameters determined for the measured clothoid point, from one image frame to another image frame, based on the velocity of the autonomous vehicle 101 moving along the lane and the time difference between consecutive image frames. In some embodiments, the state transition may be predicted using a state transition matrix that is a function of "v" and "Δt".

[0055] In the above, "v" refers to the velocity of the autonomous vehicle, and "Δt" refers to the time difference between consecutive image frames.
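The exact matrix is not reproduced in this disclosure; for illustration only, one common form of such a transition matrix for a clothoid state [y_offset, β, c0, c1] is sketched below, and this form is an assumption:

```python
import numpy as np

def transition_matrix(v, dt):
    """Assumed state transition for the clothoid state [y_offset, beta,
    c0, c1] after the ego vehicle travels ds = v * dt between frames."""
    ds = v * dt
    return np.array([
        [1.0,  ds, ds**2 / 2.0, ds**3 / 6.0],
        [0.0, 1.0,          ds, ds**2 / 2.0],
        [0.0, 0.0,         1.0,          ds],
        [0.0, 0.0,         0.0,         1.0],
    ])
```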
[0056] Thereafter, the Kalman filter determining module 225 may determine a process noise covariance matrix (Q) using the predicted state transition as an input to a second LSTM network. In some embodiments, the second LSTM network is also trained using historical ego vehicle velocity values and time difference values. Using the determined process noise covariance matrix (Q), the Kalman filter determining module 225 may predict an error covariance (Pp) of the predicted state transition. Finally, the Kalman filter determining module 225 may determine the Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid point, based on the predicted state transition (Yp) and covariance (Pp), the determined measurement noise covariance matrix (R), and the co-efficient values of the clothoid parameters determined for the measured clothoid point. In some embodiments, broadly, a Kalman filter may include, but not limited to, a state vector (y), a state transition matrix (F), a state error covariance matrix (P), a process noise covariance matrix (Q), a Kalman gain (K), a measurement noise covariance matrix (R), and a measurement (z) (also referred to as the measured values corresponding to the plurality of lane boundary detection points) at time (t). As discussed above, the Kalman filter determining module 225 may learn Q and R using the first and second LSTM networks, respectively. The below given Equations 1-5 indicate determination of the rest of the Kalman filter parameters. Equations 1 and 2 are related to the prediction of Kalman filter parameters and Equations 3-5 are related to the update of Kalman filter parameters.
[0057] In Equation 1 below, $\hat{y}'_t$ denotes the predicted state vector at time "t", and $f(\hat{y}_{t-1})$ denotes the state transition applied to the state vector determined at time (t-1):

$$\hat{y}'_t = f(\hat{y}_{t-1}) \qquad \text{(Equation 1)}$$

[0058] In Equation 2 below, $P'_t$ denotes the predicted state error covariance matrix at time "t", $P_{t-1}$ denotes the state error covariance matrix determined at time (t-1), "F" is the matrix representation of "f", and "Q" denotes the process noise covariance matrix:

$$P'_t = F\,P_{t-1}\,F^{T} + Q \qquad \text{(Equation 2)}$$

[0059] In Equation 3 below, $P'_t$ denotes the predicted state error covariance matrix at time "t", "R" denotes the measurement noise covariance matrix, and $K_t$ denotes the Kalman gain at time "t" (an identity measurement model is assumed in this reconstruction):

$$K_t = P'_t\,(P'_t + R)^{-1} \qquad \text{(Equation 3)}$$

[0060] In Equation 4 below, $\hat{y}_t$ denotes the updated state vector at time "t", $\hat{y}'_t$ denotes the predicted state vector at time "t", $K_t$ denotes the Kalman gain at time "t", and $z_t$ is the measurement corresponding to the plurality of lane boundary detection points at time "t":

$$\hat{y}_t = \hat{y}'_t + K_t\,(z_t - \hat{y}'_t) \qquad \text{(Equation 4)}$$

[0061] In Equation 5 below, $P'_t$ denotes the predicted state error covariance matrix at time "t", $P_t$ denotes the updated state error covariance matrix at time "t", $K_t$ denotes the Kalman gain at time "t", and "I" denotes the identity matrix:

$$P_t = (I - K_t)\,P'_t \qquad \text{(Equation 5)}$$

[0062] Upon determining the Kalman filter parameters, the reconstructing module 227 may be trained to update the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters. As an example, the co-efficient values of the clothoid parameters updated based on the Kalman filter parameters may be denoted as shown below:

$$c_{0,KF},\; c_{1,KF},\; \beta_{KF}$$

[0063] In the above example, KF may refer to Kalman Filter parameters, and the clothoid parameters suffixed with "KF" indicate that the determined Kalman filter parameters are applied to the clothoid parameters for updating the co-efficient values of the clothoid parameters.
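For clarity, the following sketch applies Equations 1-5 in code. It assumes an identity measurement model and uses illustrative Q and R values in place of the LSTM outputs; all names and numbers are hypothetical.

```python
import numpy as np

def kalman_step(y_prev, P_prev, z, A, Q, R):
    """One predict/update cycle over the clothoid co-efficient state,
    following Equations 1-5; Q and R would be supplied by the second
    and first LSTM networks, respectively."""
    y_pred = A @ y_prev                          # Equation 1: predicted state
    P_pred = A @ P_prev @ A.T + Q                # Equation 2: predicted error covariance
    K = P_pred @ np.linalg.inv(P_pred + R)       # Equation 3: Kalman gain
    y_upd = y_pred + K @ (z - y_pred)            # Equation 4: updated state
    P_upd = (np.eye(len(y_prev)) - K) @ P_pred   # Equation 5: updated covariance
    return y_upd, P_upd

# Illustrative 3-state example over (c0, c1, beta)
A = np.eye(3); Q = 1e-6 * np.eye(3); R = 1e-4 * np.eye(3)
y, P = np.zeros(3), np.eye(3)
z = np.array([1e-3, 1e-5, 0.02])                 # measured co-efficient values
y, P = kalman_step(y, P, z, A, Q, R)
```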
[0064] In some embodiments, upon updating the co-efficient values of the clothoid parameters determined for the measured clothoid point, the reconstructing module 227 may be trained to reconstruct the measured clothoid point using the corresponding updated co-efficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point may enable the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101. In some embodiments, the reconstructing module 227 may add an initial lateral offset between the lane boundaries and the ego vehicle, to the reconstructed measured clothoid point. As an example, an exemplary tracked and modelled lane 237 formed based on the reconstructed clothoid points is shown in FIG.2D. In a further example, FIG.2E shows a plurality of exemplary lanes tracked on a road on which the autonomous vehicle 101 is moving. In FIG.2E, the line indicated by referral numeral 235 represents the initial measured values corresponding to the exemplary plurality of lane boundary detection points, and the line indicated by referral numeral 237 represents the exemplary tracked and modelled lane formed based on reconstructed measured clothoid points. Therefore, in the present disclosure, the lane tracking system 103 not only tracks the ego lane boundaries, that is, the left boundary and the right boundary of the lane in which the autonomous vehicle 101 is moving, but also tracks other lanes of the road, as shown in FIG.2E.
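As an illustration of this reconstruction step, a boundary could be sampled as below. The third-order clothoid approximation and all names and values are assumptions; the source does not state the reconstruction formula explicitly.

```python
import numpy as np

def reconstruct_lane_boundary(y0, beta, c0, c1, x):
    """Lateral boundary position at look-ahead distance x from the updated
    clothoid co-efficients, using the common third-order approximation
    y(x) = y0 + beta*x + (c0/2)*x^2 + (c1/6)*x^3, where y0 is the initial
    lateral offset added back to the reconstructed clothoid point."""
    return y0 + beta * x + 0.5 * c0 * x**2 + (c1 / 6.0) * x**3

x = np.linspace(0.0, 60.0, 30)   # look-ahead distances in metres
left_boundary = reconstruct_lane_boundary(-1.8, 0.01, 1e-3, 1e-5, x)
```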
[0065] Further, the learning module 229 may determine a training error by computing a difference between the reconstructed clothoid point and the corresponding ground truth set. By computing this difference, the learning module 229 is able to minimize the error, leading to accurate determination of reconstructed measured clothoid points for performing lane tracking of the autonomous vehicle 101 when deployed in a dynamic environment. During the training phase, the learning module 229 may minimize the training error determined in each cycle, until the training error is below a predefined threshold.
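A minimal sketch of this convergence criterion follows; dummy data stands in for an actual training cycle, and the threshold, error measure, names, and values are illustrative assumptions.

```python
import numpy as np

def training_error(reconstructed_points, ground_truth_points):
    """Mean distance between reconstructed measured clothoid points and
    the corresponding ground truth set (one possible error measure)."""
    diff = np.asarray(reconstructed_points) - np.asarray(ground_truth_points)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))

# Stop training once the error falls below a predefined threshold.
threshold = 1e-3
ground_truth = np.random.randn(50, 2)
for cycle in range(1000):
    # a real cycle would reconstruct points and update the LSTM weights here
    reconstructed = ground_truth + np.random.randn(50, 2) / (cycle + 1.0)
    if training_error(reconstructed, ground_truth) < threshold:
        break
```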
[0066] In some embodiments, the lane tracking system 103 thus trained may be used in the dynamic environment when the autonomous vehicle 101 is moving on the road.
[0067] In some embodiments, in the dynamic environment, the receiving module 221 may receive measured values corresponding to the plurality of lane boundary detection points from the lane boundary detecting system 105. The plurality of lane boundary detection points correspond to the left lane boundary and the right lane boundary of the lane along which the autonomous vehicle 101 is currently moving.
[0068] Thereafter, the co-efficient value determining module 223 may generate a measured clothoid point based on a measured set comprising a subset of continuous lane boundary detection points and the corresponding measured values. In some embodiments, the co-efficient value determining module 223 may dynamically select the subset of continuous lane boundary detection points to form the measured set, as the autonomous vehicle 101 is moving on the road. The co-efficient value determining module 223 may then determine co-efficient values of clothoid parameters for the measured clothoid point, to model lane boundaries of the lane.
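As an illustrative sketch only, the co-efficient values could be fitted by least squares under a third-order clothoid model; the patent does not disclose its exact fitting procedure, and all names and values here are assumptions.

```python
import numpy as np

def fit_clothoid_coefficients(xs, ys):
    """Least-squares fit of (beta, c0, c1) to a subset of continuous lane
    boundary detection points, under the third-order clothoid model
    y = beta*x + (c0/2)*x^2 + (c1/6)*x^3 (lateral offset removed)."""
    X = np.column_stack([xs, 0.5 * xs**2, xs**3 / 6.0])
    (beta, c0, c1), *_ = np.linalg.lstsq(X, ys, rcond=None)
    return beta, c0, c1

xs = np.linspace(1.0, 40.0, 12)         # longitudinal positions of the subset
ys = 0.01 * xs + 0.5e-3 * xs**2         # synthetic measured lateral values
beta, c0, c1 = fit_clothoid_coefficients(xs, ys)
```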
[0069] Further, in some embodiments, the Kalman filter determining module 225 may determine Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid point, to track the lane boundaries of the lane. In some embodiments, the Kalman filter determining module 225 may determine the Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid point, based on the predicted state transition (Yp) and covariance (Pp), the determined measurement noise covariance matrix (R), and the co-efficient values of the clothoid parameters determined for the measured clothoid point.
[0070] Thereafter, the reconstructing module 227 may update the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the Kalman filter parameters. Finally, the reconstructing module 227 may reconstruct the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point may enable the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101.
[0071] FIG.3A shows a flowchart illustrating a method of training a lane tracking system for an autonomous vehicle in accordance with some embodiments of the present disclosure.
[0072] As illustrated in FIG.3A, the method 300a includes one or more blocks illustrating a method of training a lane tracking system for an autonomous vehicle 101. The method 300a may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform functions or implement abstract data types.

[0073] The order in which the method 300a is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300a. Additionally, individual blocks may be deleted from the methods without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300a can be implemented in any suitable hardware, software, firmware, or combination thereof.

[0074] At block 301, the method 300a may include receiving, by a processor 107 of a lane tracking system 103 during a training phase, ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system 105 associated with the lane tracking system 103. In some embodiments, the plurality of lane boundary detection points are determined using at least one of lane data received from one or more sensors 113 configured in the autonomous vehicle 101 and an image frame of the lane received in real-time. In some embodiments, the image frame of the lane is received from an image capturing device 115 associated with the autonomous vehicle 101. In some other embodiments, at least one of the lane data and the image frame of the lane are retrieved from a database configured to store the lane data and the image frame captured in real-time. The database may be associated with the lane tracking system 103.
[0075] At block 303, the method 300a may include determining, by the processor 107 during the training phase, co-efficient values of clothoid parameters for ground truth clothoid point and measured clothoid point formed using a ground truth set and a measured set, respectively, to model lane boundaries of a lane. In some embodiments, the processor 107 may select a subset of continuous lane boundary detection points and the corresponding ground truth values to form the ground truth set, and a subset of continuous lane boundary detection points and the corresponding measured values to form the measured set. In some embodiments, the number of continuous lane boundary detection points selected to form the ground truth set and the measured set may be predefined. However, the selection is performed by the processor 107 in real-time. In some other embodiments, the number of continuous lane boundary detection points selected to form the ground truth set and the measured set may be decided as per requirement by the processor 107 for each image frame. In some embodiments, the clothoid parameters may include, but are not limited to, initial curvature of lane boundary (c0), curvature rate (c1) of the lane boundary, and heading angle (β) with respect to the vehicle's driving direction.
[0076] At block 305, the method 300a may include determining, by the processor 107 during the training phase, Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. In some embodiments, the processor 107 may determine the Kalman filter parameters using Long Short-Term Memory (LSTM) networks.
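A sketch of what such an LSTM-based noise-covariance estimator might look like is given below; PyTorch is used for illustration, and the architecture, dimensions, and names are assumptions, as the source does not disclose them.

```python
import torch
import torch.nn as nn

class NoiseCovarianceLSTM(nn.Module):
    """Maps a sequence of clothoid co-efficient values to a diagonal
    noise covariance matrix (R or Q, by analogy). Dimensions assume the
    three-parameter state (c0, c1, beta)."""
    def __init__(self, state_dim=3, hidden_dim=32):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, coeff_seq):                # (batch, time, state_dim)
        out, _ = self.lstm(coeff_seq)
        log_var = self.head(out[:, -1, :])       # use the last time step
        return torch.diag_embed(log_var.exp())   # positive-definite diagonal

r_net = NoiseCovarianceLSTM()
R = r_net(torch.randn(1, 10, 3))  # R from a 10-frame co-efficient history
```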
[0077] At block 307, the method 300a may include updating, by the processor 107 during the training phase, the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters.
[0078] At block 309, the method 300a includes reconstructing, by the processor 107 during the training phase, the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point enables the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101.
[0079] At block 311, the method 300a includes minimizing, by the processor 107 during the training phase, a training error based on a difference between the reconstructed measured clothoid point and the corresponding ground truth set, in each cycle, until the training error is below a predefined threshold. This way, for each cycle, the processor 107 reduces the error involved in tracking lanes using the lane tracking system 103 that is being trained. In some embodiments, the processor 107 may add an initial lateral offset between the lane boundaries and the autonomous vehicle 101, to the reconstructed measured clothoid point.
[0080] FIG.3B shows a flowchart illustrating a method of lane tracking for an autonomous vehicle in accordance with some embodiments of the present disclosure.
[0081] As illustrated in FIG.3B, the method 300b includes one or more blocks illustrating a method of lane tracking for an autonomous vehicle 101. The method 300b may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform functions or implement abstract data types.

[0082] The order in which the method 300b is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300b. Additionally, individual blocks may be deleted from the methods without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300b can be implemented in any suitable hardware, software, firmware, or combination thereof.

[0083] At block 313, the method 300b may include receiving, by a processor 107 of a lane tracking system 103 in the dynamic environment when the autonomous vehicle 101 is moving on the road, measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system 105 associated with the lane tracking system 103. In some embodiments, the plurality of lane boundary detection points are determined using at least one of lane data received from one or more sensors 113 configured in the autonomous vehicle 101 and an image frame of the lane received in real-time. In some embodiments, the processor 107 may receive the image frame of the lane from an image capturing device 115 associated with the autonomous vehicle 101. In some other embodiments, the processor 107 may retrieve at least one of the lane data and the image frame of the lane from a database configured to store the lane data and the image frame captured in real-time. The database may be associated with the lane tracking system 103.
[0084] At block 315, the method 300b may include determining, by the processor 107 in the dynamic environment, co-efficient values of clothoid parameters for a measured clothoid point formed using a measured set, to model lane boundaries of a lane. In some embodiments, the processor 107 may select a subset of continuous lane boundary detection points and the corresponding measured values to form the measured set. In some embodiments, the number of continuous lane boundary detection points selected to form the measured set may be predefined. However, the selection is performed by the processor 107 in real-time. In some other embodiments, the number of continuous lane boundary detection points selected to form the measured set may be decided as per requirement by the processor 107 for each image frame. In some embodiments, the clothoid parameters may include, but are not limited to, initial curvature of lane boundary (c0), curvature rate (c1) of the lane boundary, and heading angle (β) with respect to the vehicle's driving direction.
[0085] At block 317, the method 300b may include determining, by the processor 107 in the dynamic environment, Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane. In some embodiments, the processor 107 may determine the Kalman filter parameters using Long Short-Term Memory (LSTM) networks.
[0086] At block 319, the method 300b may include updating, by the processor 107 in the dynamic environment, the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters.
[0087] At block 321, the method 300b includes reconstructing, by the processor 107 in the dynamic environment, the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters. In some embodiments, each reconstructed measured clothoid point enables the lane tracking system 103 to track the lane boundaries for the autonomous vehicle 101.
[0088] FIG.4 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
[0089] In some embodiments, FIG.4 illustrates a block diagram of an exemplary computer system 400 for implementing embodiments consistent with the present invention. In some embodiments, the computer system 400 can be a lane tracking system 103 for lane tracking for an autonomous vehicle, as shown in FIG.4. The computer system 400 may include a central processing unit ("CPU" or "processor") 402. The processor 402 may include at least one data processor for executing program components for executing user or system-generated business processes. A user may include a person, a person using a device such as those included in this invention, or such a device itself. The processor 402 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.

[0090] The processor 402 may be disposed in communication with input devices 411 and output devices 412 via I/O interface 401. The I/O interface 401 may employ communication protocols/methods such as, without limitation, audio, analog, digital, stereo, IEEE-1394, serial bus, Universal Serial Bus (USB), infrared, PS/2, BNC, coaxial, component, composite, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System For Mobile Communications (GSM), Long-Term Evolution (LTE), WiMax, or the like), etc.

[0091] Using the I/O interface 401, the computer system 400 may communicate with the input devices 411 and the output devices 412.
[0092] In some embodiments, the processor 402 may be disposed in communication with a communication network 409 via a network interface 403. The network interface 403 may communicate with the communication network 409. The network interface 403 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), Transmission Control Protocol/Internet Protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. Using the network interface 403 and the communication network 409, the computer system 400 may communicate with a lane boundary detecting system 105, one or more sensors 113, and an image capturing device 115. In some embodiments, the lane tracking system 103 may also be associated with a database (not shown in FIG.4). The communication network 409 can be implemented as one of the different types of networks, such as an intranet or Local Area Network (LAN), a Closed Area Network (CAN), and such, within the autonomous vehicle. The communication network 409 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), CAN Protocol, Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 409 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc. The one or more sensors 113 may include, but are not limited to, a Light Detection and Ranging (LIDAR) system, a Global Positioning System (GPS), a laser, and the like. In some embodiments, the processor 402 may be disposed in communication with a memory 405 (e.g., RAM, ROM, etc., not shown in FIG.4) via a storage interface 404. The storage interface 404 may connect to memory 405 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as Serial Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fibre channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.

[0093] The memory 405 may store a collection of program or database components, including, without limitation, a user interface 406, an operating system 407, a web browser 408, etc. In some embodiments, the computer system 400 may store user/application data such as the data, variables, records, etc. as described in this invention. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase.
[0094] The operating system 407 may facilitate resource management and operation of the computer system 400. Examples of operating systems 407 include, without limitation, APPLE® MACINTOSH® OS X®, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION® (BSD), FREEBSD®, NETBSD®, OPENBSD, etc.), LINUX® distributions (e.g., RED HAT®, UBUNTU®, KUBUNTU®, etc.), IBM® OS/2®, MICROSOFT® WINDOWS® (XP®, VISTA®/7/8, 10 etc.), APPLE® IOS®, GOOGLE™ ANDROID™, BLACKBERRY® OS, or the like. The user interface 406 may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, user interfaces 406 may provide computer interaction interface elements on a display system operatively connected to the computer system 400, such as cursors, icons, checkboxes, menus, scrollers, windows, widgets, etc. Graphical User Interfaces (GUIs) may be employed, including, without limitation, Apple Macintosh® operating systems' Aqua®, IBM® OS/2®, Microsoft® Windows® (e.g., Aero, Metro, etc.), web interface libraries (e.g., ActiveX®, Java®, Javascript®, AJAX, HTML, Adobe® Flash®, etc.), or the like.
[0095] In some embodiments, the computer system 400 may implement the web browser 408 stored program components. The web browser 408 may be a hypertext viewing application, such as MICROSOFT® INTERNET EXPLORER®, GOOGLE™ CHROME™, MOZILLA® FIREFOX®, APPLE® SAFARI®, etc. Secure web browsing may be provided using Secure Hypertext Transport Protocol (HTTPS), Secure Sockets Layer (SSL), Transport Layer Security (TLS), etc. Web browsers 408 may utilize facilities such as AJAX, DHTML, ADOBE® FLASH®, JAVASCRIPT®, JAVA®, Application Programming Interfaces (APIs), etc. In some embodiments, the computer system 400 may implement a mail server stored program component. The mail server may be an Internet mail server such as Microsoft Exchange, or the like. The mail server may utilize facilities such as Active Server Pages (ASP), ACTIVEX®, ANSI® C++/C#, MICROSOFT® .NET, CGI SCRIPTS, JAVA®, JAVASCRIPT®, PERL®, PHP, PYTHON®, WEBOBJECTS®, etc. The mail server may utilize communication protocols such as Internet Message Access Protocol (IMAP), Messaging Application Programming Interface (MAPI), MICROSOFT® Exchange, Post Office Protocol (POP), Simple Mail Transfer Protocol (SMTP), or the like. In some embodiments, the computer system 400 may implement a mail client stored program component. The mail client may be a mail viewing application, such as APPLE® MAIL, MICROSOFT® ENTOURAGE®, MICROSOFT® OUTLOOK®, MOZILLA® THUNDERBIRD®, etc.

[0096] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present invention. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile memory, hard drives, Compact Disc (CD) ROMs, Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
[0097] A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be apparent that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.

[0098] The specification has described a method and a system for lane tracking for an autonomous vehicle 101. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise.
[0099] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Referral numerals
Reference Number | Description
---|---
100 | Architecture
101 | Autonomous vehicle
103 | Lane tracking system
105 | Lane boundary detecting system
107 | Processor
109 | I/O interface
111 | Memory
113 | One or more sensors
115 | Image capturing device
203 | Data
205 | Modules
207 | Training data
209 | Clothoid points data
211 | Kalman filter parametric data
213 | Reconstructed data
215 | Other data
221 | Receiving module
223 | Co-efficient value determining module
225 | Kalman filter determining module
227 | Reconstructing module
229 | Learning module
231 | Other modules
233 | Exemplary plurality of lane boundary detection points corresponding to measured values
235 | Exemplary plurality of lane boundary detection points corresponding to ground truth values
237 | Exemplary lane tracked and modelled based on reconstructed clothoid points
400 | Exemplary computer system
401 | I/O interface of the exemplary computer system
402 | Processor of the exemplary computer system
403 | Network interface
404 | Storage interface
405 | Memory of the exemplary computer system
406 | User interface
407 | Operating system
408 | Web browser
409 | Communication network
411 | Input devices
412 | Output devices
Claims (18)
CLAIMS

1. A method of training a lane tracking system (103) for an autonomous vehicle (101), the method comprising:
receiving, by a lane tracking system (103), ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system (105) associated with the lane tracking system (103);
determining, for ground truth clothoid point and measured clothoid point formed using a ground truth set and a measured set, respectively, by the lane tracking system (103), co-efficient values of clothoid parameters, to model lane boundaries of a lane, wherein the ground truth set comprises a subset of continuous lane boundary detection points and the corresponding ground truth values, and the measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values;
determining, by the lane tracking system (103), Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane, wherein the Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks;
updating, by the lane tracking system (103), the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters;
reconstructing, by the lane tracking system (103), the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track the lane boundaries for the autonomous vehicle (101); and
minimizing, by the lane tracking system (103), a training error based on a difference between the reconstructed measured clothoid point and the corresponding ground truth set, in each cycle, until the training error is below a predefined threshold.

2. The method as claimed in claim 1, wherein the plurality of lane boundary detection points correspond to a left lane boundary and a right lane boundary of the lane.

3. The method as claimed in claim 1 comprises dynamically selecting, by the lane tracking system (103), the ground truth set and the measured set required for generating the ground truth clothoid point and the measured clothoid point, respectively.

4. The method as claimed in claim 1 comprises determining the plurality of lane boundary detection points using at least one of lane data received from one or more sensors (113) configured in the vehicle and an image frame of the lane received in real-time, wherein the image frame of the lane is received from an image capturing device (115) associated with the autonomous vehicle (101).

5. The method as claimed in claim 4 comprises retrieving the at least one of the lane data and the image frame of the lane from a database configured to store the lane data and the image frame captured in real-time.

6. The method as claimed in claim 1, wherein the clothoid parameters comprise initial curvature of lane boundary (c0), curvature rate (c1) of the lane boundary, and heading angle (β) with respect to the vehicle's driving direction.

7. The method as claimed in claim 1, wherein determining the Kalman filter parameters using the one or more LSTM techniques comprises:
providing, by the lane tracking system (103), the co-efficient values of the clothoid parameters determined for the measured clothoid point as an input to a first LSTM network, wherein the first LSTM network is trained based on historical co-efficient values of the clothoid parameters determined for clothoid points formed using historical measured and ground truth sets;
determining, by the lane tracking system (103), a measurement noise covariance matrix (R) using the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the first LSTM network;
predicting, by the lane tracking system (103), a state transition (Yp) of the co-efficient values of the clothoid parameters determined for the measured clothoid point, from one image frame to another image frame, based on velocity of the autonomous vehicle (101) moving along the lane and time difference between consecutive image frames;
determining, by the lane tracking system (103), a process noise covariance matrix (Q) using the predicted state transition as an input to a second LSTM network, wherein the second LSTM network is trained using historical autonomous vehicle (101) velocity values and time difference values;
predicting, by the lane tracking system (103), an error covariance (Pp) of the predicted state transition using the determined process noise covariance matrix (Q); and
determining, by the lane tracking system (103), the Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid point, based on the predicted state transition (Yp) and covariance (Pp), the determined measurement noise covariance matrix (R) and the co-efficient values of the clothoid parameters determined for the measured clothoid point.

8. The method as claimed in claim 1 further comprises adding, by the lane tracking system (103), an initial lateral offset between the lane boundaries and the autonomous vehicle (101), to the reconstructed measured clothoid point.

9. A method of lane tracking for an autonomous vehicle (101), the method comprising:
receiving, by a lane tracking system (103), measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system (105) associated with the lane tracking system (103);
determining, for a measured clothoid point formed using a measured set, by the lane tracking system (103), co-efficient values of clothoid parameters, to model lane boundaries of a lane, wherein the measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values;
determining, by the lane tracking system (103), Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane, wherein the Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks;
updating, by the lane tracking system (103), the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters; and
reconstructing, by the lane tracking system (103), the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track the lane boundaries for the autonomous vehicle (101).

10. A lane tracking system (103) for an autonomous vehicle (101), the lane tracking system (103) comprising:
a processor (107); and
a memory (111) communicatively coupled to the processor (107), wherein the memory (111) stores processor (107) instructions, which, on execution, cause the processor (107) to train the lane tracking system (103), wherein for training, the processor (107) is configured to:
receive ground truth values corresponding to a plurality of lane boundary detection points and measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system (105) associated with the lane tracking system (103);
determine, for ground truth clothoid point and measured clothoid point formed using a ground truth set and a measured set, respectively, co-efficient values of clothoid parameters, to model lane boundaries of a lane, wherein the ground truth set comprises a subset of continuous lane boundary detection points and the corresponding ground truth values, and the measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values;
determine Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane, wherein the Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks;
update the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters;
reconstruct the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track the lane boundaries for the autonomous vehicle (101); and
minimize a training error based on a difference between the reconstructed measured clothoid point and the corresponding ground truth set, in each cycle, until the training error is below a predefined threshold.

11. The lane tracking system (103) as claimed in claim 10, wherein the plurality of lane boundary detection points correspond to a left lane boundary and a right lane boundary of the lane.

12. The lane tracking system (103) as claimed in claim 10, wherein the processor (107) selects the ground truth set and the measured set required for generating the ground truth clothoid point and the measured clothoid point, respectively, dynamically.

13. The lane tracking system (103) as claimed in claim 10, wherein the plurality of lane boundary detection points are determined using at least one of lane data received from one or more sensors (113) configured in the vehicle and an image frame of the lane received in real-time, wherein the image frame of the lane is received from an image capturing device (115) associated with the autonomous vehicle (101).

14. The lane tracking system (103) as claimed in claim 13, wherein the processor (107) retrieves at least one of the lane data and the image frame of the lane from a database configured to store the lane data and the image frame captured in real-time.

15. The lane tracking system (103) as claimed in claim 10, wherein the clothoid parameters comprise initial curvature of lane boundary (c0), curvature rate (c1) of the lane boundary, and heading angle (β) with respect to the vehicle's driving direction.

16. The lane tracking system (103) as claimed in claim 10, wherein to determine the Kalman filter parameters using the one or more LSTM techniques, the processor (107) is configured to:
provide the co-efficient values of the clothoid parameters determined for the measured clothoid point as an input to a first LSTM network, wherein the first LSTM network is trained based on historical co-efficient values of the clothoid parameters determined for clothoid points formed using historical measured and ground truth sets;
determine a measurement noise covariance matrix (R) using the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the first LSTM network;
predict a state transition (Yp) of the co-efficient values of the clothoid parameters determined for the measured clothoid point, from one image frame to another image frame, based on velocity of the autonomous vehicle (101) moving along the lane and time difference between consecutive image frames;
determine a process noise covariance matrix (Q) using the predicted state transition as an input to a second LSTM network, wherein the second LSTM network is trained using historical autonomous vehicle (101) velocity values and time difference values;
predict an error covariance (Pp) of the predicted state transition using the determined process noise covariance matrix (Q); and
determine the Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid point, based on the predicted state transition (Yp) and covariance (Pp), the determined measurement noise covariance matrix (R) and the co-efficient values of the clothoid parameters determined for the measured clothoid point.

17. The lane tracking system (103) as claimed in claim 10, wherein the processor (107) is further configured to add an initial lateral offset between the lane boundaries and the autonomous vehicle (101), to the reconstructed measured clothoid point.

18. A lane tracking system (103) for an autonomous vehicle (101), the lane tracking system (103) comprising:
a processor (107); and
a memory (111) communicatively coupled to the processor (107), wherein the memory (111) stores processor (107) instructions, which, on execution, cause the processor (107) to:
receive measured values corresponding to the plurality of lane boundary detection points, from a lane boundary detecting system (105) associated with the lane tracking system (103);
determine, for a measured clothoid point formed using a measured set, co-efficient values of clothoid parameters, to model lane boundaries of a lane, wherein the measured set comprises a subset of continuous lane boundary detection points and the corresponding measured values;
determine Kalman filter parameters for the co-efficient values of the clothoid parameters determined for the measured clothoid points, to track the lane boundaries of the lane, wherein the Kalman filter parameters are determined using Long Short-Term Memory (LSTM) networks;
update the co-efficient values of the clothoid parameters determined for the measured clothoid point, using the corresponding Kalman filter parameters; and
reconstruct the measured clothoid point, using the corresponding updated co-efficient values of the clothoid parameters, wherein each reconstructed measured clothoid point enables the lane tracking system (103) to track the lane boundaries for the autonomous vehicle (101).
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2022/076759 WO2023046975A1 (en) | 2021-09-27 | 2022-09-27 | Method and system for lane tracking for an autonomous vehicle |
EP22800098.0A EP4409527A1 (en) | 2021-09-27 | 2022-09-27 | Method and system for lane tracking for an autonomous vehicle |
JP2024518850A JP2024538583A (en) | 2021-09-27 | 2022-09-27 | Method and system for lane tracking in an autonomous vehicle |
KR1020247014220A KR20240090266A (en) | 2021-09-27 | 2022-09-27 | Lane tracking method and system for autonomous vehicles |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN202141043726 | 2021-09-27 |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202117061D0 GB202117061D0 (en) | 2022-01-12 |
GB2611117A true GB2611117A (en) | 2023-03-29 |
Family
ID=79270344
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2117061.8A Withdrawn GB2611117A (en) | 2021-09-27 | 2021-11-26 | Method and system for lane tracking for an autonomous vehicle |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117980949A (en) |
GB (1) | GB2611117A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117020244B (en) * | 2023-09-28 | 2024-01-12 | 季华实验室 | Processing state monitoring method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6718259B1 (en) * | 2002-10-02 | 2004-04-06 | Hrl Laboratories, Llc | Adaptive Kalman filter method for accurate estimation of forward path geometry of an automobile |
JP2016057750A (en) * | 2014-09-08 | 2016-04-21 | 株式会社豊田中央研究所 | Estimation device and program of own vehicle travel lane |
US20200216076A1 (en) * | 2019-01-08 | 2020-07-09 | Visteon Global Technologies, Inc. | Method for determining the location of an ego-vehicle |
-
2021
- 2021-11-26 GB GB2117061.8A patent/GB2611117A/en not_active Withdrawn
-
2022
- 2022-09-27 CN CN202280064485.3A patent/CN117980949A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6718259B1 (en) * | 2002-10-02 | 2004-04-06 | Hrl Laboratories, Llc | Adaptive Kalman filter method for accurate estimation of forward path geometry of an automobile |
JP2016057750A (en) * | 2014-09-08 | 2016-04-21 | 株式会社豊田中央研究所 | Estimation device and program of own vehicle travel lane |
US20200216076A1 (en) * | 2019-01-08 | 2020-07-09 | Visteon Global Technologies, Inc. | Method for determining the location of an ego-vehicle |
Non-Patent Citations (3)
Title |
---|
HUSEYIN COSKUN ET AL: "Long Short-Term Memory Kalman Filters:Recurrent Neural Estimators for Pose Regularization", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 6 August 2017 (2017-08-06), XP080951672 * |
LOOSE H ET AL: "Kalman Particle Filter for lane recognition on rural roads", INTELLIGENT VEHICLES SYMPOSIUM, 2009 IEEE, IEEE, PISCATAWAY, NJ, USA, 3 June 2009 (2009-06-03), pages 60 - 65, XP031489816, ISBN: 978-1-4244-3503-6 * |
TANG JIGANG ET AL: "A review of lane detection methods based on deep learning", PATTERN RECOGNITION, ELSEVIER, GB, vol. 111, 15 September 2020 (2020-09-15), XP086395682, ISSN: 0031-3203, [retrieved on 20200915], DOI: 10.1016/J.PATCOG.2020.107623 * |
Also Published As
Publication number | Publication date |
---|---|
GB202117061D0 (en) | 2022-01-12 |
CN117980949A (en) | 2024-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109606263B (en) | Method for monitoring blind spot of monitoring vehicle and blind spot monitor using the same | |
JP7032387B2 (en) | Vehicle behavior estimation system and method based on monocular video data | |
US11921212B2 (en) | Long range lidar-based speed estimation | |
US10585436B2 (en) | Method and system for real-time generation of reference navigation path for navigation of vehicle | |
US10694105B1 (en) | Method and system for handling occluded regions in image frame to generate a surround view | |
US20200225662A1 (en) | Method and system of navigating an autonomous vehicle at an intersection of roads | |
EP4042318A1 (en) | System and method of generating a video dataset with varying fatigue levels by transfer learning | |
US10837789B2 (en) | Method and system for determining an optimal path for navigation of a vehicle | |
US10324473B2 (en) | Method and system for generating a safe navigation path for navigating a driverless vehicle | |
EP3696718A1 (en) | Method and system for determining drivable road regions for safe navigation of an autonomous vehicle | |
US20190197730A1 (en) | Semiconductor device, imaging system, and program | |
KR102141646B1 (en) | Method and apparatus for detecting moving object from image recorded by unfixed camera | |
GB2611117A (en) | Method and system for lane tracking for an autonomous vehicle | |
EP3575911B1 (en) | Method and system for correcting velocity of autonomous vehicle to navigate along planned navigation path | |
US11531842B2 (en) | Invertible depth network for image reconstruction and domain transfers | |
US12067788B2 (en) | Method and system for detecting and classifying lanes | |
WO2023046975A1 (en) | Method and system for lane tracking for an autonomous vehicle | |
US11474203B2 (en) | Method and system for determining correctness of Lidar sensor data used for localizing autonomous vehicle | |
CN117269951B (en) | Target tracking method with multi-view information enhancement in air and ground | |
US20240367656A1 (en) | Method for lane detection | |
JP7360304B2 (en) | Image processing device and image processing method | |
US20250074475A1 (en) | Method and system for predicting gesture of subjects surrounding an autonomous vehicle | |
US20230105148A1 (en) | Systems and methods for landing and terrain flight assistance | |
CN113808208A (en) | Functional safety train positioning method, system, electronic device and storage medium | |
CN116772870A (en) | Lane level positioning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
COOA | Change in applicant's name or ownership of the application |
Owner name: CONTINENTAL AUTONOMOUS MOBILITY GERMANY GMBH Free format text: FORMER OWNER: CONTINENTAL AUTOMOTIVE GMBH |
|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |