CN115027482B - Fusion positioning method in intelligent driving - Google Patents
Fusion positioning method in intelligent driving
- Publication number
- CN115027482B (application CN202210761697.9A)
- Authority
- CN
- China
- Prior art keywords
- curve
- vehicle
- curvature
- point
- lane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
- B60W40/06—Road conditions
- B60W40/072—Curvature of the road
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/35—Data fusion
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/40—High definition maps
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Mathematical Physics (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
- Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
Abstract
The invention relates to a fusion positioning method in intelligent driving, which comprises the following steps: outputting a boundary fitting curve; calculating a lane line fitting curve; obtaining a course angle calculation curve in the global coordinate system; obtaining the corresponding in-lane shape point information; obtaining a lane line curve; obtaining a first discrete point set; obtaining a second discrete point set; obtaining the first curvature value corresponding to each first discrete point; obtaining a first curvature set; obtaining the second curvature value corresponding to each second discrete point; obtaining a second curvature set; outputting a reference point; correcting the longitudinal position of the vehicle; correcting the lateral position of the vehicle; and correcting the heading angle of the vehicle. The invention obtains a fitting model more accurate than the prior art, and the resulting correction points correct the abscissa of the current vehicle position more accurately; it estimates longitudinal position correction information that the prior art does not provide; and it further corrects the current heading angle of the vehicle.
Description
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a fusion positioning method in intelligent driving.
Background
High-precision positioning is one of the foundational technologies essential to intelligent autonomous driving. At present, the field has no established theory for overcoming the technical limitations of the current stage while guaranteeing the continuity, integrity and high availability of high-precision positioning. Mainstream approaches fall into two categories: visual positioning and radar sensors.
First, it should be made clear that the present-stage consensus on intelligent autonomous driving is expressed in the following four performance indexes: precision: the degree of agreement between the measured value and the true value; integrity: the ability to raise an alarm when the service is unavailable; continuity: the ability to keep the client informed that the system is operating normally; availability: the percentage of time during which positioning service meeting the indexes is provided.
In the prior art, visual positioning refers to positioning by capturing environment images with a vehicle-mounted camera and either comparing them with known map elements or computing the vehicle position recursively; it can be divided into absolute positioning and relative positioning; wherein:
Absolute positioning draws on three main sources of material. Ground markings include lane lines, zebra crossings, diversion strips, ground characters, ground icons and the like printed on the road surface by road authorities; these semantic features remain very stable as long as they are not altered by construction or worn by use. Overhead semantic objects include road signs, traffic lights and the like above the road; their positions are essentially fixed and their semantic information is unambiguous, which makes them very suitable for positioning. Street view, by contrast, is less structured than the first two sources.
Relative positioning is now dominated by vSLAM (visual simultaneous localization and mapping) and VO (visual odometry). The two terms often appear together; the former includes the latter, and in general discussion vSLAM stands for both. Its main distinguishing feature is back-end loop closure and optimization, but a normally driving vehicle rarely returns to a recently departed place within a short time, so loop closure is of limited use; in practice, visual positioning relies mainly on VO.
vSLAM and VO are based on multi-view geometry: images of the same object taken by the camera from different positions are necessarily similar yet slightly different. Image processing can find the feature points that correspond between two such images; once enough feature points are matched, the rotation and translation between the two camera poses can be recovered by solving a homography matrix or an essential matrix, the rotation and translation together forming one transform. When the frames continuously collected by the camera form a video sequence, solving the transform between every pair of consecutive frames and chaining the results yields a trajectory from the initial position to the current one. Because this trajectory is relative, SLAM by itself cannot directly complete the positioning task and must be fused with absolute positioning. Data from other sensors can be fed into the SLAM framework as additional constraints, or the relative poses from visual observation or odometry can be output as constraints to other positioning frameworks.
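As a concrete illustration of the two-frame step just described, the following minimal Python sketch recovers the relative pose between consecutive frames. OpenCV, ORB features and the specific thresholds are assumptions made here for illustration; the passage itself names no library.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Recover rotation R and (scale-ambiguous) translation t between two
    frames, following the multi-view-geometry flow described above."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Match feature descriptors between the two images
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC outlier rejection (K: camera intrinsics)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Decompose E into the relative rotation and translation
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Chaining such frame-to-frame transforms gives the relative trajectory; note that the monocular translation is recovered only up to scale, which is exactly drawback 3 listed below.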
Whichever way is chosen, the general flow of visual positioning divides into four steps: the camera acquires images; the images are pre-processed; image features or semantics are extracted; and the pose is solved using multi-view geometry and optimization. Cameras for the visual positioning task must take a variety of hardware factors into account. For example, to let the vision algorithm run at night, an infrared camera, a starlight camera or even a thermal imaging camera can be selected; to cover different fields of view, a wide-angle lens, a fisheye lens, a surround-view camera and the like can be selected. Vehicle-mounted cameras come in many mounting positions and quantity configurations; for the positioning task the configuration is mainly a single or dual front-view camera.
Prior-art visual positioning has the following drawbacks:
1. A small field of view captures only a small part of the scene and is therefore unfavourable for positioning;
2. A large field of view sees more, but on a CCD target surface of the same size each object in the image becomes much smaller, so a large field of view is unfavourable for certain visual tasks;
3. If a monocular camera is used, there is the further drawback that the scale of objects cannot be resolved.
In the prior art there are many kinds of radar sensing technology; lidar is currently the main one used for vehicle positioning. Two-dimensional lidar is commonly used for AGV or robot positioning and navigation. Its principle can be understood simply: a laser beam is shone downward from above and converted into a horizontal scan by a continuously rotating mirror; objects at different distances return the light at different times, so the outline of the surrounding environment is obtained in the scanning plane. In the autonomous driving field, however, three-dimensional lidar is used most.
The principle of three-dimensional lidar is as follows: the emitters and receivers of multiple laser beams are arranged at different angles, separated by baffles, and fired in a staggered sequence to avoid mutual interference; as the light-source and receiver assembly rotates, a multi-line scan of the surrounding environment is obtained, forming a set of points in a three-dimensional coordinate system known as a point cloud.
In the prior art, lidar positioning divides into two types, map-based and map-free; wherein:
Map-based positioning consists of two steps, mapping and localization. During mapping, point cloud frames are stacked one by one along the vehicle's driving trajectory to obtain a point cloud map; the trajectory can be output by a high-precision integrated inertial navigation system or by point cloud SLAM.
Map-free positioning resembles visual odometry: matching and combining consecutive point cloud frames builds a point cloud odometer that realizes relative positioning, for example the point cloud positioning module in the open-source software Autoware. Alternatively, planar features and corner features can be extracted from the point cloud and matched to build a point cloud feature odometer, for example the open-source algorithm LOAM.
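To make the frame-to-frame point cloud matching concrete, here is a minimal sketch of one odometry step. Open3D and the ICP settings are assumptions made here for illustration; the passage cites Autoware and LOAM but shows no code.

```python
import numpy as np
import open3d as o3d

def point_cloud_odometry_step(source, target, init=np.eye(4)):
    """One step of a point cloud odometer: align two consecutive frames
    with point-to-point ICP and return their relative 4x4 transform."""
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=1.0,  # metres; a tuning assumption
        init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation
```

Accumulating these transforms yields the relative trajectory, in the same way the visual odometer chains image-to-image transforms.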
The drawbacks of radar positioning techniques are:
1. Directly mapping raw point clouds tends to produce extremely bulky point cloud files, so raw point cloud maps are unsuitable for large-scale use;
2. Mechanical lidar is expensive and short-lived, cannot pass automotive-grade qualification, and therefore cannot truly be used in industry.
For this reason, the prior art also considers positioning by fusion.
Fusion positioning combines all positioning modes currently on the market, including GPS, base-station positioning, Wi-Fi positioning, Bluetooth positioning and sensor positioning.
Fusion positioning is realized mainly through cooperation between third-party location service providers, such as Gaode, and chip manufacturers, so that it is integrated at the hardware and system level of the autonomous vehicle.
The fusion positioning technology has the following advantages:
1. The positioning result is not affected by the surrounding environment. GPS positioning is unusable indoors, in tunnels and in similar areas, and base-station and Wi-Fi positioning are likewise limited by network signal; in such environments, fusion positioning can automatically fall back to sensor-aided positioning and thereby avoid these environmental effects;
2. Because fusion positioning integrates the positioning logic and the key positioning techniques into the chip, the positioning capability is embedded, along with the chip, into the vehicle's operation control system, which can automatically assign a positioning strategy according to the hardware conditions;
3. Because GPS positioning consumes a great deal of power, fusion positioning can, where conditions allow, position automatically in a low-power mode such as network positioning or sensors, saving considerable power;
4. The geofence function is truly realized;
5. Fusion positioning includes the vehicle motion state information obtained from the acceleration sensor, so the current motion state of the autonomous vehicle can be judged;
6. Fusion positioning can record the vehicle's track in a low-power mode, tracking the vehicle position more continuously without depending on the network or GPS signals; when the user's track is revisited, the track is therefore smoother and more power-efficient than with previous technology.
However, fusion positioning in the prior art is still imperfect. A typical fusion positioning technique, entitled "Vehicle fusion positioning system and method", is disclosed in Chinese application CN202111055356.1 with the following technical scheme:
First, the motion information of the vehicle is acquired through an inertial measurement module, a satellite navigation information receiving module and a wheel-speed acquisition module mounted on the intelligent-driving vehicle, while the lane line information ahead of the vehicle is dynamically collected by a lane line acquisition module mounted at the front of the vehicle. The collected lane line information is processed to identify the situation of the lane lines ahead of the vehicle and to calculate the relative distance between the vehicle and the lane lines; the absolute position information, or partial absolute position information, of the vehicle is calculated from the specific situation of the collected lane lines and their specific positions in a lane line map; a second fusion positioning calculation then yields a corrected fusion positioning result and the correction amount of the systematic error, completing one period of the fusion positioning process.
The technical idea of that invention is as follows: the lane line information of the current lane in the world coordinate system and the position information of the vehicle in the world coordinate system are fused and filtered, using the distances to the lane lines on both sides of the vehicle output by the visual sensor, to obtain more accurate position information.
The fusion positioning technology in the prior art has the following defects:
1. Since lidar, as described above, cannot yet be applied industrially, existing fusion positioning mostly uses visual sensors and therefore inevitably inherits the drawbacks of visual positioning listed above;
2. The Chinese application CN202111055356.1 can only correct the lateral distance of the vehicle within the lane and cannot constrain the longitudinal position; in curve-driving scenarios it cannot provide accurate position information, which degrades the control accuracy of the intelligent-driving vehicle.
Disclosure of Invention
In view of these problems, the invention provides a fusion positioning method in intelligent driving, which aims to obtain a fitting model more accurate than the prior art, with correction points that correct the abscissa of the current vehicle position more accurately; to estimate longitudinal position correction information of the vehicle that the prior art does not provide; and to further correct the current heading angle of the vehicle, raising the accuracy of fusion positioning by a further step.
In order to solve the problems, the technical scheme provided by the invention is as follows:
A fusion positioning method in intelligent driving comprises the following steps:
S100, outputting, through a visual sensor, boundary fitting curves for the two sides of the lane in which the vehicle to be positioned is currently located; the boundary fitting curves are in the vehicle coordinate system;
S200, calculating a lane line fitting curve for the lane center line from the boundary fitting curves;
S300, performing curve fitting on the in-lane shape points recorded in a high-precision map to obtain a course angle calculation curve in the global coordinate system; converting the lane center line recorded in the high-precision map into the vehicle coordinate system to obtain the corresponding lane center line shape point information; the lane center line shape point information comprises the lane center line shape points;
S400, performing curve fitting on the in-lane shape points output by the high-precision map and converted as above, obtaining a lane line curve;
S500, sampling the lane line fitting curve from the vision sensor at a manually preset sampling interval to obtain a first discrete point set; the first discrete point set comprises a plurality of sampled first discrete points;
sampling the lane line curve from the high-precision map at a sampling interval of the same value to obtain a second discrete point set; the second discrete point set comprises a plurality of sampled second discrete points;
S600, intercepting all first discrete points whose distance from the vehicle head lies within a first trusted distance range, the first trusted distance range being preset manually; then extracting, one by one, the curvature information at each first discrete point's position on the lane line fitting curve; then taking the curvature information corresponding to each first discrete point as a first curvature value; and packaging all first curvature values to obtain a first curvature set;
S700, intercepting all second discrete points whose distance from the vehicle head lies within a second trusted distance range, the second trusted distance range being preset manually; then extracting, one by one, the curvature information at each second discrete point's position on the lane line curve; then taking the curvature information corresponding to each second discrete point as a second curvature value; and packaging all second curvature values to obtain a second curvature set;
S800, finding, from the first curvature set and the second curvature set, the point on the lane line curve such that, taking this point as reference, the curvature error value of the first curvature set relative to the second curvature set is minimal; and outputting this point as the reference point;
S900, subtracting the ordinate of the point closest to the vehicle head within the first trusted distance range from the ordinate of the reference point extracted in S800 to obtain the ordinate of the current vehicle position, thereby correcting the longitudinal position of the vehicle;
S1000, intercepting, on the lane line fitting curve and the lane line curve respectively, the points whose ordinate equals the ordinate of the current vehicle position obtained in S900; outputting the point intercepted on the lane line fitting curve as the first correction point; outputting the point intercepted on the lane line curve as the second correction point;
S1100, finding the point in the first discrete point set with the minimum distance to the current vehicle position as the lateral correction point; then replacing the first correction point with the lateral correction point to obtain the abscissa of the current vehicle position, thereby correcting the lateral position of the vehicle;
S1200, calculating the angle between the lane line fitting curve and the y-axis as the first included angle; then calculating the absolute course angle information of the lane center line on the course angle calculation curve as the second included angle; then calculating the course angle of the current vehicle position from the first included angle and the second included angle, thereby correcting the course angle of the vehicle;
S1300, packaging and outputting the corrected ordinate of the current vehicle position, the corrected abscissa of the current vehicle position and the corrected course angle of the current vehicle position as the final output result of the positioning method.
Preferably, the lane line fitting curve is expressed as follows:
x = c3·y³ + c2·y² + c1·y + c0
wherein: x is the abscissa and y is the ordinate, both in the vehicle coordinate system; the vehicle coordinate system takes the front of the vehicle as the x-axis with the vehicle-head direction positive, and satisfies the right-hand rule; c3 is 6 times the curvature change rate at the intersection of the lane line fitting curve with the y-axis; c2 is 2 times the curvature at that intersection; c1 is the curvature at that intersection; and c0 is the intercept of the lane line fitting curve with the y-axis.
Preferably, the lane center line is expressed as follows:
x = c3·y³ + c2·y² + c1·y + c'0
wherein: c'0 is the intercept of the lane center line with the y-axis in the vehicle coordinate system.
Preferably, the lane line curve is expressed as follows:
x = b3·y³ + b2·y² + b1·y + b0
wherein: b3 is 6 times the curvature change rate at the intersection of the lane line curve with the y-axis in the vehicle coordinate system; b2 is 2 times the curvature at that intersection; b1 is the curvature at that intersection; b0 is the intercept of the lane line curve with the y-axis.
Preferably, the sampling interval is 20 cm.
Preferably, the curvature error value is expressed as follows:
M = (r'_p - r_1)² + (r'_{p+1} - r_2)² + ... + (r'_{p+n-1} - r_n)², with 1 ≤ p ≤ m - n + 1
wherein: M is the curvature error value; [r_1, r_2, r_3, r_4, r_5, …, r_n] is the first curvature set, whose element count is n; r'_p, r'_{p+1}, …, r'_{p+n-1} are elements of [r'_1, r'_2, r'_3, r'_4, r'_5, …, r'_m], the second curvature set, whose element count is m; and n < m.
Preferably, the first included angle is expressed as follows:
α = arctan(x'(y1))
wherein: α is the first included angle, and x'(y1), the result of differentiating the lane line fitting curve, is expressed as follows:
x'(y1) = 3c3·y1² + 2c2·y1 + c1
wherein: y1 is the ordinate of the position after the lateral and longitudinal position correction.
Preferably, the second included angle is expressed as follows:
β = arctan(x'(y2))
wherein: β is the second included angle, and x'(y2), the result of differentiating the course angle calculation curve, is expressed as follows:
x'(y2) = 3e3·y2² + 2e2·y2 + e1
wherein: y2 is the ordinate, in the global coordinate system, of the position after the lateral and longitudinal position correction; e3 is 6 times the curvature change rate at the intersection of the course angle calculation curve with the y-axis in the vehicle coordinate system; e2 is 2 times the curvature at that intersection; e1 is the curvature at that intersection.
Preferably, the heading angle of the current vehicle position is expressed as follows:
θ = α + β
Wherein: θ is the heading angle of the current vehicle position.
Preferably, the distance from the last of the first discrete points to the vehicle head does not exceed the maximum of the first trusted distance range;
the first trusted distance range is no more than 20 m from the vehicle head;
the second trusted distance range is no more than 60 m from the vehicle head.
Compared with the prior art, the invention has the following advantages:
1. The invention adopts the technical scheme that the vision sensor outputs the polynomial of the lane center line of the current lane, the high-precision map outputs the corresponding in-lane shape point information in the vehicle coordinate system, curve fitting of the in-lane shape points output by the map yields the corresponding curve equation, and equidistant sampling of the lane line fitting curve output by the vision sensor and the lane line curve output by the high-precision map yields two groups of discrete points; a fitting model more accurate than the prior art is thereby obtained, and the resulting correction points correct the abscissa of the current vehicle position more accurately;
2. On this basis, exploiting the fact that the cubic polynomial output by the camera has higher confidence closer to the vehicle, a group of points starting at distance s0 from the vehicle head position is taken and the curvature of each point extracted; the curvatures of a series of in-lane shape points can be obtained from the high-precision map; the group of curvatures output by the camera is slid across the curvatures output by the map, the minimum error is calculated, and from this minimum the starting shape point in the high-precision map is obtained, from which the longitudinal position correction information of the vehicle is calculated; the prior art does not provide this correction;
3. The invention obtains a more accurate abscissa of the current vehicle position and can calculate ordinate correction information that the prior art cannot obtain, so the current course angle of the vehicle can be further corrected on this basis, improving the accuracy of fusion positioning by a further step, which the prior art does not possess.
Drawings
FIG. 1 is a flow chart of a fusion positioning method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of lane line fitting curves, lane line curves, and trusted distance selection in a vehicle coordinate system according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a process of correcting a lateral distance after correcting a longitudinal position according to an embodiment of the present invention.
Detailed Description
The present application is further described below in conjunction with specific embodiments. It should be understood that these embodiments are intended to illustrate the application rather than limit its scope; after reading this application, modifications by those skilled in the art that are equivalent fall within the scope defined by the appended claims.
As shown in fig. 1, a fusion positioning method in intelligent driving includes the following steps:
S100, outputting boundary fitting curves of two sides of a lane where a vehicle to be positioned is currently located through a visual sensor; the boundary fit curve is located in the vehicle coordinate system.
S200, as shown in fig. 2, calculating a lane line fitting curve for the lane center line from the boundary fitting curves.
In this embodiment, the lane line fitting curve is expressed as formula (1):
x = c3·y³ + c2·y² + c1·y + c0 (1)
wherein: x is the abscissa and y is the ordinate, both in the vehicle coordinate system; the vehicle coordinate system takes the front of the vehicle as the x-axis with the vehicle-head direction positive, and satisfies the right-hand rule; c3 is 6 times the curvature change rate at the intersection of the lane line fitting curve with the y-axis; c2 is 2 times the curvature at that intersection; c1 is the curvature at that intersection; c0 is the intercept of the lane line fitting curve with the y-axis.
In this embodiment, the lane center line is obtained by translating the lane line fitting curve in the vehicle coordinate system and is expressed as formula (2):
x = c3·y³ + c2·y² + c1·y + c'0 (2)
wherein: c'0 is the intercept of the lane center line with the y-axis in the vehicle coordinate system.
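For illustration, formulas (1) and (2) translate directly into code; the short Python sketch below is reused by the later examples. The helper names are invented here, and building the center line intercept by averaging the two boundary intercepts is an assumption: the text states only that the center line shares the curve shape and differs in the intercept c'0.

```python
import numpy as np

def lane_x(y, c):
    """Formula (1)/(2): x = c3*y^3 + c2*y^2 + c1*y + c0 in the vehicle
    frame, with coefficients c = (c3, c2, c1, c0)."""
    c3, c2, c1, c0 = c
    return c3 * y**3 + c2 * y**2 + c1 * y + c0

def centerline_coeffs(c_left, c_right):
    """Center line per formula (2): same c3, c2, c1, new intercept c'0
    (here the mean of the two boundary intercepts; an assumption)."""
    c3, c2, c1, c0_left = c_left
    return (c3, c2, c1, 0.5 * (c0_left + c_right[3]))
```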
S300, performing curve fitting on the in-lane shape points recorded in the high-precision map to obtain a course angle calculation curve in the global coordinate system; converting the lane center line recorded in the high-precision map into the vehicle coordinate system to obtain the corresponding lane center line shape point information; the lane center line shape point information comprises the lane center line shape points;
S400, performing curve fitting on the in-lane shape points output by the high-precision map and converted as above, obtaining a lane line curve.
In this embodiment, the lane line curve is expressed as formula (3):
x = b3·y³ + b2·y² + b1·y + b0 (3)
wherein: b3 is 6 times the curvature change rate at the intersection of the lane line curve with the y-axis in the vehicle coordinate system; b2 is 2 times the curvature at that intersection; b1 is the curvature at that intersection; b0 is the intercept of the lane line curve with the y-axis.
S500, sampling the lane line fitting curve from the vision sensor at a manually preset sampling interval to obtain a first discrete point set; the first discrete point set comprises a plurality of sampled first discrete points.
Sampling the lane line curve from the high-precision map at a sampling interval of the same value yields a second discrete point set, which comprises a plurality of sampled second discrete points.
In this embodiment, the sampling interval is 20 cm.
It should be noted that at this point two sets of discrete points have been obtained, the first discrete point set and the second discrete point set; all of the following steps select suitable points from these discrete points to correct the position and heading angle of the vehicle, so as to obtain results that closely match the true values.
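A sketch of the S500 sampling, reusing lane_x from the sketch above. Sampling in y at the 20 cm interval stands in for equal arc-length spacing, an assumption that is adequate for gently curving lanes:

```python
def sample_points(coeffs, y_max, ds=0.2):
    """S500: sample (x, y) points along x = f(y) every ds metres in y."""
    ys = np.arange(0.0, y_max, ds)
    return [(lane_x(y, coeffs), y) for y in ys]
```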
S600, intercepting all first discrete points whose distance from the vehicle head lies within a first trusted distance range, the first trusted distance range being preset manually; then extracting, one by one, the curvature information at each first discrete point's position on the lane line fitting curve; then taking the curvature information corresponding to each first discrete point as a first curvature value; and packaging all first curvature values to obtain a first curvature set.
In this embodiment, the distance from the last of the first discrete points to the vehicle head does not exceed the maximum of the first trusted distance range.
In this embodiment, the first trusted distance range is no more than 20 m from the vehicle head.
In this embodiment, the second trusted distance range is no more than 60 m from the vehicle head.
Note that the third-order polynomial output by the vision sensor has high confidence only in the vicinity of the vehicle; therefore a set of points [x1, x2, x3, x4, x5, …, xn] is taken, starting at distance s0 from the vehicle head position.
It should further be noted that the position sn of the last point xn must lie within the higher-confidence range of the lane line output by the camera, that is, within the first trusted distance range.
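The curvature information needed in S600 and S700 follows analytically from the cubic: for a curve x = f(y), the curvature is κ(y) = f''(y) / (1 + f'(y)²)^(3/2). A minimal sketch, continuing the helpers above:

```python
def curvature(y, c):
    """Curvature of x = c3*y^3 + c2*y^2 + c1*y + c0 at ordinate y."""
    c3, c2, c1, _ = c
    dx = 3 * c3 * y**2 + 2 * c2 * y + c1   # f'(y)
    ddx = 6 * c3 * y + 2 * c2              # f''(y)
    return ddx / (1.0 + dx**2) ** 1.5

def curvature_set(coeffs, y_max, ds=0.2):
    """Curvature at every sampled ordinate (first or second curvature set)."""
    return [curvature(y, coeffs) for y in np.arange(0.0, y_max, ds)]
```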
S700, intercepting all second discrete points whose distance from the vehicle head lies within a second trusted distance range, the second trusted distance range being preset manually; then extracting, one by one, the curvature information at each second discrete point's position on the lane line curve; then taking the curvature information corresponding to each second discrete point as a second curvature value; and packaging all second curvature values to obtain a second curvature set.
S800, finding, from the first curvature set and the second curvature set, the point on the lane line curve such that, taking this point as reference, the curvature error value of the first curvature set relative to the second curvature set is minimal; this point is then output as the reference point.
In this embodiment, the curvature error value is expressed as formula (4):
M = (r'_p - r_1)² + (r'_{p+1} - r_2)² + ... + (r'_{p+n-1} - r_n)², 1 ≤ p ≤ m - n + 1 (4)
wherein: M is the curvature error value; [r_1, r_2, r_3, r_4, r_5, …, r_n] is the first curvature set, whose element count is n; r'_p, r'_{p+1}, …, r'_{p+n-1} are elements of [r'_1, r'_2, r'_3, r'_4, r'_5, …, r'_m], the second curvature set, whose element count is m; and n < m.
Note that the logic of S800 is as follows:
First, the curvature error value M is defined.
Then, evaluating formula (4) for each admissible value of p in turn necessarily finds the value of p at which M is minimal.
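The S800 search is a one-dimensional sliding-window match; a direct sketch of formula (4), assuming the two curvature sets were built as above (vision set of length n, map set of length m, n < m):

```python
def best_offset(first_set, second_set):
    """Return the 0-based offset p minimising formula (4),
    M(p) = sum_i (r'_{p+i} - r_{i+1})^2, together with the minimum M."""
    n, m = len(first_set), len(second_set)
    best_p, best_M = 0, float("inf")
    for p in range(m - n + 1):
        M = sum((second_set[p + i] - first_set[i]) ** 2 for i in range(n))
        if M < best_M:
            best_p, best_M = p, M
    return best_p, best_M
```

The map point at the returned offset serves as the reference point; the S900 correction then subtracts the ordinate of the nearest first discrete point from the ordinate of that reference point.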
S900, subtracting the ordinate of the point closest to the vehicle head within the first trusted distance range from the ordinate of the reference point extracted in S800 gives the ordinate of the current vehicle position, thereby correcting the longitudinal position of the vehicle.
The principle of S900 is as follows:
In the high-precision map, the value p corresponds to the first point of the lane line fitting curve output by the vision sensor; thus the difference between the ordinate of the map point s_p, at offset p from the vehicle head position, and the ordinate of the point s_0 at the vehicle head position is the ordinate of the current vehicle position.
As shown in fig. 3, S1000: the points whose ordinate equals the ordinate of the current vehicle position obtained in S900 are intercepted on the lane line fitting curve and the lane line curve respectively; the point intercepted on the lane line fitting curve is output as the first correction point, and the point intercepted on the lane line curve is output as the second correction point.
Note that the abscissa of the first correction point is c0 in formula (1), and the abscissa of the second correction point is b0 in formula (3).
S1100, finding the point in the first discrete point set with the minimum distance to the current vehicle position as the lateral correction point; the lateral correction point then replaces the first correction point, giving the abscissa of the current vehicle position, so that the lateral position of the vehicle is corrected.
The principle of S1100 is as follows:
A point D closest to the vehicle can be found near the point s_p; likewise, a point closest to the vehicle can be found on the lane line fitting curve and another on the lane line curve. For these two points the lateral distances to the vehicle are calculated, denoted here d1 and d2 respectively. Because confidence is higher closer to the vehicle, the offset d1 from the lane line fitting curve is more trustworthy than the offset d2 from the lane line curve, and their lateral offset difference can be used directly for the lateral distance correction: the segment on which d2 lies is shrunk to d1, yielding the lateral correction point.
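A sketch of one reading of the S1100 principle, under the stated assumption that the near-vehicle vision fit is the more trustworthy source, so its lateral offset d1 replaces the map curve's offset d2 at the corrected ordinate; the function and its use of lane_x are illustrative only:

```python
def lateral_correction(c_vision, c_map, y_corr):
    """Shrink the map-side lateral offset d2 to the vision-side offset d1
    at the corrected ordinate y_corr (one interpretation of S1100)."""
    d1 = lane_x(y_corr, c_vision)  # lateral offset from the vision fit
    d2 = lane_x(y_corr, c_map)     # lateral offset from the map curve
    return d1 - d2                 # shift applied to the vehicle abscissa
```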
S1200, calculating the angle between the lane line fitting curve and the y-axis as the first included angle; then calculating the absolute course angle information of the lane center line on the course angle calculation curve as the second included angle; then calculating the course angle of the current vehicle position from the first included angle and the second included angle, correcting the course angle of the vehicle.
In this embodiment, the first included angle is obtained by differentiating the curve polynomial fitted to the lane line output by the vision sensor, and is expressed as formula (5):
α = arctan(x'(y1)) (5)
wherein: α is the first included angle, and x'(y1), the result of differentiating the lane line fitting curve, is expressed as formula (6):
x'(y1) = 3c3·y1² + 2c2·y1 + c1 (6)
wherein: y1 is the ordinate of the position after the lateral and longitudinal position correction.
In this embodiment, the second included angle is obtained by differentiating the curve polynomial fitted to the lane line output by the high-precision map, and is expressed as formula (7):
β = arctan(x'(y2)) (7)
wherein: β is the second included angle, and x'(y2), the result of differentiating the course angle calculation curve, is expressed as formula (8):
x'(y2) = 3e3·y2² + 2e2·y2 + e1 (8)
wherein: y2 is the ordinate, in the global coordinate system, of the position after the lateral and longitudinal position correction; e3 is 6 times the curvature change rate at the intersection of the course angle calculation curve with the y-axis in the vehicle coordinate system; e2 is 2 times the curvature at that intersection; e1 is the curvature at that intersection.
In this embodiment, the heading angle of the current vehicle position is expressed as formula (9):
θ = α + β (9)
wherein: θ is the heading angle of the current vehicle position.
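Formulas (5) to (9) combine into a few lines; the derivative expressions below follow the reconstructed forms given above, which are an assumption where the published text lost the original formula images:

```python
import math

def heading_angle(c_vision, e_map, y1, y2):
    """theta = alpha + beta: alpha from the slope of the vision fit at y1
    (formulas (5)-(6)), beta from the slope of the course angle
    calculation curve at y2 (formulas (7)-(8))."""
    c3, c2, c1, _ = c_vision
    e3, e2, e1, _ = e_map
    alpha = math.atan(3 * c3 * y1**2 + 2 * c2 * y1 + c1)
    beta = math.atan(3 * e3 * y2**2 + 2 * e2 * y2 + e1)
    return alpha + beta  # formula (9)
```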
S1300, packaging and outputting the corrected ordinate of the current vehicle position, the corrected abscissa of the current vehicle position and the corrected course angle of the current vehicle position, and obtaining a final output result of the positioning method.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "includes" is intended to be inclusive in a manner similar to the term "comprising" as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or claims is intended to mean a non-exclusive "or".
The foregoing description of the embodiments has been provided to illustrate the general principles of the invention and is not meant to limit the invention to the particular embodiments shown; any modifications, equivalents, improvements and the like made within the spirit and principles of the invention are intended to be included within its scope.
Claims (9)
1. A fusion positioning method in intelligent driving, characterized by comprising the following steps:
S100, outputting, through a visual sensor, boundary fitting curves for the two sides of the lane in which the vehicle to be positioned is currently located; the boundary fitting curves are in the vehicle coordinate system;
S200, calculating a lane line fitting curve for the lane center line from the boundary fitting curves;
S300, performing curve fitting on the in-lane shape points recorded in a high-precision map to obtain a course angle calculation curve in the global coordinate system; converting the lane center line recorded in the high-precision map into the vehicle coordinate system to obtain the corresponding lane center line shape point information; the lane center line shape point information comprises the lane center line shape points;
S400, performing curve fitting on the in-lane shape points output by the high-precision map and converted as above, obtaining a lane line curve;
S500, sampling the lane line fitting curve from the vision sensor at a manually preset sampling interval to obtain a first discrete point set; the first discrete point set comprises a plurality of sampled first discrete points;
sampling the lane line curve from the high-precision map at a sampling interval of the same value to obtain a second discrete point set; the second discrete point set comprises a plurality of sampled second discrete points;
S600, intercepting all first discrete points whose distance from the vehicle head lies within a first trusted distance range, the first trusted distance range being preset manually; then extracting, one by one, the curvature information at each first discrete point's position on the lane line fitting curve; then taking the curvature information corresponding to each first discrete point as a first curvature value; and packaging all first curvature values to obtain a first curvature set;
S700, intercepting all second discrete points whose distance from the vehicle head lies within a second trusted distance range, the second trusted distance range being preset manually; then extracting, one by one, the curvature information at each second discrete point's position on the lane line curve; then taking the curvature information corresponding to each second discrete point as a second curvature value; and packaging all second curvature values to obtain a second curvature set;
S800, finding, from the first curvature set and the second curvature set, the point on the lane line curve such that, taking this point as reference, the curvature error value of the first curvature set relative to the second curvature set is minimal; and outputting this point as the reference point;
S900, subtracting the ordinate of the point closest to the vehicle head within the first trusted distance range from the ordinate of the reference point extracted in S800 to obtain the ordinate of the current vehicle position, thereby correcting the longitudinal position of the vehicle;
S1000, intercepting, on the lane line fitting curve and the lane line curve respectively, the points whose ordinate equals the ordinate of the current vehicle position obtained in S900; outputting the point intercepted on the lane line fitting curve as the first correction point; outputting the point intercepted on the lane line curve as the second correction point;
S1100, finding the point in the first discrete point set with the minimum distance to the current vehicle position as the lateral correction point; then replacing the first correction point with the lateral correction point to obtain the abscissa of the current vehicle position, thereby correcting the lateral position of the vehicle;
S1200, calculating the angle between the lane line fitting curve and the y-axis as the first included angle; then calculating the absolute course angle information of the lane center line on the course angle calculation curve as the second included angle; then calculating the course angle of the current vehicle position from the first included angle and the second included angle, thereby correcting the course angle of the vehicle;
S1300, packaging and outputting the corrected ordinate of the current vehicle position, the corrected abscissa of the current vehicle position and the corrected course angle of the current vehicle position as the final output result of the positioning method.
2. The fusion positioning method in intelligent driving according to claim 1, characterized in that: the lane line fitting curve is expressed as follows:
x = c3·y³ + c2·y² + c1·y + c0
wherein: x is the abscissa and y is the ordinate, both in the vehicle coordinate system; the vehicle coordinate system takes the front of the vehicle as the x-axis with the vehicle-head direction positive, and satisfies the right-hand rule; c3 is 6 times the curvature change rate at the intersection of the lane line fitting curve with the y-axis; c2 is 2 times the curvature at that intersection; c1 is the curvature at that intersection; and c0 is the intercept of the lane line fitting curve with the y-axis.
3. The fusion positioning method in intelligent driving according to claim 2, characterized in that: the lane center line is expressed as follows:
x = c3·y³ + c2·y² + c1·y + c'0
wherein: c'0 is the intercept of the lane center line with the y-axis in the vehicle coordinate system.
4. The fusion positioning method in intelligent driving according to claim 3, characterized in that: the lane line curve is expressed as follows:
x = b3·y³ + b2·y² + b1·y + b0
wherein: b3 is 6 times the curvature change rate at the intersection of the lane line curve with the y-axis in the vehicle coordinate system; b2 is 2 times the curvature at that intersection; b1 is the curvature at that intersection; b0 is the intercept of the lane line curve with the y-axis.
5. The fusion positioning method in intelligent driving according to claim 4, characterized in that: the sampling interval is 20 cm.
6. The fusion positioning method in intelligent driving according to claim 5, characterized in that: the first included angle is expressed as follows:
α = arctan(x'(y1))
wherein: α is the first included angle, and x'(y1), the result of differentiating the lane line fitting curve, is expressed as follows:
x'(y1) = 3c3·y1² + 2c2·y1 + c1
wherein: y1 is the ordinate of the position after the lateral and longitudinal position correction.
7. The fusion positioning method in intelligent driving according to claim 6, characterized in that: the second included angle is expressed as follows:
β = arctan(x'(y2))
wherein: β is the second included angle, and x'(y2), the result of differentiating the course angle calculation curve, is expressed as follows:
x'(y2) = 3e3·y2² + 2e2·y2 + e1
wherein: y2 is the ordinate, in the global coordinate system, of the position after the lateral and longitudinal position correction; e3 is 6 times the curvature change rate at the intersection of the course angle calculation curve with the y-axis in the vehicle coordinate system; e2 is 2 times the curvature at that intersection; e1 is the curvature at that intersection.
8. The fusion positioning method in intelligent driving according to claim 7, characterized in that: the heading angle of the current vehicle position is expressed as follows:
θ = α + β
Wherein: θ is the heading angle of the current vehicle position.
9. The fusion positioning method in intelligent driving according to claim 8, characterized in that: the distance from the last of the first discrete points to the vehicle head does not exceed the maximum of the first trusted distance range;
the first trusted distance range is no more than 20 m from the vehicle head;
the second trusted distance range is no more than 60 m from the vehicle head.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210761697.9A CN115027482B (en) | 2022-06-29 | 2022-06-29 | Fusion positioning method in intelligent driving |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210761697.9A CN115027482B (en) | 2022-06-29 | 2022-06-29 | Fusion positioning method in intelligent driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115027482A CN115027482A (en) | 2022-09-09 |
CN115027482B (en) | 2024-08-16
Family
ID=83128623
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210761697.9A Active CN115027482B (en) | 2022-06-29 | 2022-06-29 | Fusion positioning method in intelligent driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115027482B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115993137B (en) * | 2023-02-22 | 2023-06-13 | 禾多科技(北京)有限公司 | Vehicle positioning evaluation method, device, electronic device and computer readable medium |
CN115950441B (en) * | 2023-03-08 | 2023-07-07 | 智道网联科技(北京)有限公司 | Fusion positioning method and device for automatic driving vehicle and electronic equipment |
CN116630928B (en) * | 2023-07-25 | 2023-11-17 | 广汽埃安新能源汽车股份有限公司 | Lane line optimization method and device and electronic equipment |
CN118485983A (en) * | 2024-05-29 | 2024-08-13 | 东风商用车有限公司 | Lane line confidence calculation method, device, electronic device and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9283967B2 (en) * | 2014-07-16 | 2016-03-15 | GM Global Technology Operations LLC | Accurate curvature estimation algorithm for path planning of autonomous driving vehicle |
JP2016045144A (en) * | 2014-08-26 | 2016-04-04 | アルパイン株式会社 | Traveling lane detection device and driving support system |
JP6948202B2 (en) * | 2017-09-26 | 2021-10-13 | 株式会社Subaru | Vehicle travel control device |
CN109017780B (en) * | 2018-04-12 | 2020-05-05 | 深圳市布谷鸟科技有限公司 | Intelligent driving control method for vehicle |
KR102442230B1 (en) * | 2018-09-30 | 2022-09-13 | 그레이트 월 모터 컴퍼니 리미티드 | Construction method and application of driving coordinate system |
CN111516673B (en) * | 2020-04-30 | 2022-08-09 | 重庆长安汽车股份有限公司 | Lane line fusion system and method based on intelligent camera and high-precision map positioning |
CN113682313B (en) * | 2021-08-11 | 2023-08-22 | 中汽创智科技有限公司 | Lane line determining method, determining device and storage medium |
CN113602267B (en) * | 2021-08-26 | 2023-01-31 | 东风汽车有限公司东风日产乘用车公司 | Lane keeping control method, storage medium, and electronic apparatus |
CN114002725A (en) * | 2021-11-01 | 2022-02-01 | 武汉中海庭数据技术有限公司 | Lane line auxiliary positioning method and device, electronic equipment and storage medium |
- 2022-06-29: application CN202210761697.9A filed in China; granted as CN115027482B (status: active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103940434A (en) * | 2014-04-01 | 2014-07-23 | 西安交通大学 | Real-time lane line detecting system based on monocular vision and inertial navigation unit |
CN110969837A (en) * | 2018-09-30 | 2020-04-07 | 长城汽车股份有限公司 | Road information fusion system and method for automatic driving vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN115027482A (en) | 2022-09-09 |
Similar Documents
Publication | Title
---|---
CN115027482B (en) | Fusion positioning method in intelligent driving
US11675084B2 (en) | Determining yaw error from map data, lasers, and cameras
CN108802785B (en) | Vehicle self-positioning method based on high-precision vector map and monocular vision sensor
CN110057373B (en) | Method, apparatus and computer storage medium for generating high-definition semantic map
CN104573733B (en) | Fine map generation system and method based on high-definition orthophoto map
CN109143207B (en) | Laser radar internal reference precision verification method, device, equipment and medium
US10210401B2 (en) | Real time multi dimensional image fusing
EP3650814B1 (en) | Vision augmented navigation
CN111551186B (en) | Real-time vehicle positioning method and system and vehicle
CN107084727B (en) | Visual positioning system and method based on high-precision three-dimensional map
CN114705199A (en) | Lane-level fusion positioning method and system
US11692830B2 (en) | Real-time localization error correction of autonomous vehicle
CN113673386B (en) | Marking method for traffic signal lamps in a prior map
CN113312403B (en) | Map acquisition method and device, electronic equipment and storage medium
US11477371B2 (en) | Partial image generating device, storage medium storing computer program for partial image generation and partial image generating method
CN111833443 (en) | Landmark location reconstruction in autonomous machine applications
CN113390422B (en) | Automobile positioning method and device and computer storage medium
CN116699620A (en) | Vehicle-road cooperative positioning method based on laser radar
CN113822932B (en) | Device positioning method, device, nonvolatile storage medium and processor
US20240304001A1 (en) | Systems and methods for generating a heatmap corresponding to an environment of a vehicle
US20230262303A1 (en) | Methods and systems for determination of boresight error in an optical system
CN117809285A (en) | Traffic sign ranging method and system applied to port container trucks
JP7241582B2 (en) | Mobile position detection method and mobile position detection system
CN109964132A (en) | Method, apparatus and system for configuring sensors in a moving object
CN117953046A (en) | Data processing method, device, controller, vehicle and storage medium
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||