1. Introduction
With the advances of computer science and 5G technology, intelligent transportation systems, autonomous vehicles, and connected vehicles are developing rapidly. An essential function of these systems is accurate positioning: their effectiveness heavily depends on high-accuracy navigation results, and the vehicle positions must be highly reliable to prevent failures during operation. Integrated navigation technology is commonly used for these systems; it can fully exploit the complementary characteristics of different navigation subsystems and greatly improve the accuracy and reliability of the overall system. The integration of the Global Navigation Satellite System (GNSS) and the Inertial Navigation System (INS) is the traditional integrated navigation system. With the development of Light Detection And Ranging (LiDAR) technology and autonomous driving, LiDAR has become increasingly cost-effective and is now required equipment on autonomous vehicles for obstacle avoidance and positioning.
GNSS can obtain absolute positions and mainly determines the positioning accuracy of the integrated system. Besides GPS, the BeiDou Navigation Satellite System (BDS) has been on the rise in recent years and has shown great potential for positioning and timing [1]. GNSS Real Time Kinematic (RTK) is a widely applied high-accuracy kinematic positioning technology. It can weaken or eliminate systematic errors (orbital error, clock error, tropospheric delay, ionospheric delay, etc.) through the differential process and obtain centimeter-level position results from carrier phase observations with correct ambiguity resolution (AR) [2,3,4]. The key issue of RTK is AR, which is usually solved in a two-step procedure [5,6,7,8,9]: the float ambiguity is solved using the pseudorange observations, and then the integer ambiguity is fixed. The most popular method to fix the ambiguity is the Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) [10]. Although combining multiple satellite systems and multiple frequencies can improve the satellite geometry, the fix rate of AR, and the accuracy of positioning [11], when GNSS signals are blocked by high buildings or viaducts, the geometry of the observations becomes too weak to obtain fixed solutions with correct AR, and the AR fix rate and the positioning accuracy are degraded severely [12].
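As a toy illustration of this two-step procedure (hypothetical numbers, a single DD observable; a real RTK filter solves many ambiguities jointly with LAMBDA rather than by simple rounding):

```python
# Toy sketch of two-step RTK ambiguity resolution on one
# double-differenced (DD) observable. All values are hypothetical.
WAVELENGTH_L1 = 0.1903  # GPS L1 carrier wavelength in metres

def float_ambiguity(dd_phase_cycles, dd_pseudorange_m):
    """Step 1: float ambiguity from carrier phase minus code range."""
    return dd_phase_cycles - dd_pseudorange_m / WAVELENGTH_L1

def fix_ambiguity(n_float):
    """Step 2: fix to the nearest integer (LAMBDA does this jointly,
    after decorrelating the ambiguity covariance)."""
    return round(n_float)
```

With metre-level code noise the float ambiguity can be several cycles off, which is why the decorrelation and search of LAMBDA, together with a strong satellite geometry, are needed for a reliable fix.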
The Inertial Navigation System (INS) is an attractive augmentation of GNSS [13]. Besides position and velocity estimation, INS is able to provide attitude estimation. When combined with GNSS, the INS errors can be compensated using position and velocity updates from GNSS, and the INS can sustain consistent positioning even when loss of lock occurs in GNSS. The performance of INS is closely related to the quality of the inertial measurement unit (IMU), which comprises a gyroscope and an accelerometer. The IMU suffers from drift errors that grow unbounded with operation time. With the advances of sensor technology, IMUs have become increasingly cost-effective, and even an IMU based on a Micro-Electro-Mechanical System (MEMS) can achieve high performance. In a GNSS/INS integrated navigation system, a high-grade IMU can improve the attitude accuracy significantly but cannot maintain the position accuracy for a long time, so the position accuracy of the INS/GNSS integrated system depends highly on the GNSS [14].
Apart from GNSS and INS, LiDAR has also become common in intelligent transportation systems, autonomous vehicles, and connected vehicles. Many existing studies have explored the potential of LiDAR in positioning for autonomous driving [15,16,17,18,19,20,21]. According to whether the map is pre-built, LiDAR-based positioning can be divided into relative positioning and absolute positioning. The positioning error of relative positioning accumulates with the running distance. Although absolute positioning can obtain high-precision position results, it needs a prepared map, which takes time and money, and it is difficult to cover a large area in a short timeframe.
By exploiting the information coming from the above methods, integrated navigation systems have made achievements [22,23,24]. However, most existing works only consider one vehicle in applications. In recent years, connected vehicles have garnered great interest. With 5G technology and multivehicular wireless communication capabilities, navigation information can be shared among vehicles, and cooperative positioning has become an attractive topic [25,26,27,28]. The main goal of cooperative positioning is to improve all the vehicles' positioning accuracy by sharing their information and locations.
There are two major existing methods for cooperative positioning: cooperative map matching (CMM) and positioning correction by relative distance constraints among vehicles. Assuming that all vehicles travel within lanes, CMM makes the positions of a group of vehicles best fit the map so that the positions are corrected. In 2015, Rohani et al. proposed a CMM method by which vehicles could apply the road constraints of other vehicles in their own map matching process and gain a significant improvement in their positioning [29]. Shen et al. further proved the capability of CMM to mitigate the biases of position results of a group of vehicles [26]. However, the positioning accuracy and robustness of CMM depend strongly on road constraints [30].
Besides CMM, connected vehicles are capable of measuring distances between each other, which provides relative distance constraints and improves the positioning accuracy of the vehicles. With the development of Dedicated Short-Range Communication (DSRC), the DSRC device has become a required addition to connected vehicles and can measure distance without extra cost. Moreover, DSRC devices are able to identify vehicles, so it is easy to associate their received information and range measurements [28]. With these benefits, many researchers have explored DSRC-based cooperative positioning using Received Signal Strength (RSS) ranging, time-based ranging, and range-rate measurement based on Carrier Frequency Offset (CFO). Alam et al. used CFO-based range-rate measurements derived from the Doppler shift of the carrier of DSRC signals to improve GPS estimates [31]. Liu et al. further integrated CFO-based range-rate measurements of DSRC and GNSS pseudorange observations by a cubature Kalman filter for positioning improvements [27]. In addition to absolute positioning, Alam et al. also fused DSRC distance measurements and GNSS pseudorange observations with a Kalman filter to improve relative positioning between two vehicles [32]. Wang proposed a tight integration of DGPS, Ultra-Wide Band (UWB) ranging radio observations, and simulated bearing observations in a small network of vehicles or infrastructure points to improve relative positioning accuracy [33].
However, few researchers have investigated the performance of RTK AR when fusing sensor data from connected vehicles [34]. GNSS RTK is the most cost-effective way to obtain a high-precision position if the observation conditions are good. For connected vehicles, even if one vehicle's GNSS signal is partially blocked and its AR fails, the GNSS observations from multiple vehicles can be complementary. Fusing these GNSS observations can greatly improve the geometry of the GNSS observation equations, the fix rate of the ambiguities, and the positioning accuracy of all the connected vehicles. For example, in Figure 1 the red and green vehicles are moving below a viaduct; the red one can only receive satellite signals from the left half of the sky and the green one only from the right half, so the two vehicles share complementary GNSS observations.
In this work, we focus on cooperative positioning with GNSS, INS and LiDAR data from multiple vehicles to improve GNSS RTK AR and achieve precise positions for all connected vehicles. The main contributions of our work include the following:
(1) proposing a high-precision relative positioning method between vehicles based on a window Occupancy Likelihood Map (OLM) and a scan-to-map matching method by LiDAR;
(2) proposing a method for fixing the relative ambiguity between vehicles based on their relative positions;
(3) proposing a joint RTK ambiguity fixing method for connected vehicles to improve their absolute positioning based on (1) and (2).
In the following sections, we elaborate on our methods and conduct field experiments to evaluate our system.
2. Methods
In this research, GNSS RTK is treated as the main method for high-precision positioning in our navigation system, and correct RTK AR is an important requirement. Compared to traditional RTK AR, which uses GNSS observations from one rover receiver, we propose a joint RTK method that enhances AR performance by fusing the observations from connected vehicles. Firstly, the relative positions between the vehicles, calculated by the LiDAR OLM method with attitude information from the INS, build position constraints between the vehicles. Taking advantage of this precise baseline, the double differential (DD) carrier phase measurements from the common satellites of the connected vehicles are used to solve their relative DD ambiguities by a Damped LAMBDA method; these serve as ambiguity constraints. Finally, a joint RTK is applied for the final position estimations of the connected vehicles with the relative position and ambiguity constraints.
Figure 2 illustrates the main procedures of our system. More details about INS mechanization, relative positioning by LiDAR based on a window OLM, relative ambiguity resolution and joint RTK will be introduced in the following subsections.
Table 1 outlines the main processing steps of our methods in pseudocode, where M, F, K, and H denote the procedures of generating the OLM, calculating the relative position, calculating the relative DD ambiguities, and calculating the absolute positions, respectively.
Our method involves multiple coordinate systems:
(1) INS Navigation System (n-frame)
The n-frame is a local geodetic frame with the platform mass center as the coordinate origin. The X axis points towards the East, the Y axis points to the North, and the Z axis completes a right-handed coordinate system. It is also called an east-north-up (ENU) system.
(2) Earth-Centered Earth-Fixed Coordinate System (e-frame)
The e-frame rotates with the Earth. Its coordinate origin is the Earth’s centroid, the X axis points to the intersection of the equator and the prime meridian. The Z axis is the Earth’s rotation axis and points to the North Pole. Then, the Y axis is perpendicular to the X–Z plane, forming a right-handed coordinate system.
(3) World Coordinate System (w-frame)
The w-frame is used to express the GNSS positioning results. The initial GNSS position is taken as the origin. The X-Y plane is the local horizontal plane, with the X axis pointing East and the Y axis pointing North. The Z axis is perpendicular to the X-Y plane, forming a right-handed coordinate system. The w-frame is parallel to the n-frame.
(4) IMU Coordinate System (b-frame)
The b-frame is dynamic and moves with the vehicle, with the IMU barycenter as the coordinate origin. The X axis points to the right along the IMU horizontal axis, the Y axis points forward along the IMU longitudinal axis, and the Z axis is perpendicular to the X–Y plane, forming a right-handed coordinate system.
(5) LiDAR Coordinate System (l-frame)
The l-frame moves with the vehicle, with the LiDAR measurement center as the coordinate origin, the X axis pointing to the right along the LiDAR horizontal axis, the Y axis pointing forward along the LiDAR longitudinal axis, and the Z axis being perpendicular to the X–Y plane, forming a right-handed coordinate system.
(6) Map Coordinate System (m-frame)
The map coordinate system is used by LiDAR OLM. This frame is the l-frame of the point where the map starts.
More details about the reference systems can be seen in [35,36]. The b-frame and l-frame are precalibrated to achieve spatial synchronization. All the coordinate transformations in this paper are rigid body transformations. The rigid body transformation $T_i^j$ from the i-frame to the j-frame involves a translation $\boldsymbol{t}_i^j$ and a quaternion $\boldsymbol{q}_i^j$. The rigid body transformation operation is defined as [35]:

$$\boldsymbol{x}^{j} = C(\boldsymbol{q}_i^j)\,\boldsymbol{x}^{i} + \boldsymbol{t}_i^j$$

where $C(\boldsymbol{q}_i^j)$ is the direction cosine matrix of $\boldsymbol{q}_i^j$ and $\boldsymbol{x}^{i}$ is a vector in the i-frame.
2.1. INS Modelling
Firstly, the inertial system motion model is introduced based on an INS error model [5]:

$$\begin{aligned}
\delta\dot{\boldsymbol{p}} &= \delta\boldsymbol{v}\\
\delta\dot{\boldsymbol{v}} &= \boldsymbol{f}^{n}\times\boldsymbol{\phi} + C_b^{n}\,\delta\boldsymbol{f}^{b} + \delta\boldsymbol{g} - (2\boldsymbol{\omega}_{ie}^{n} + \boldsymbol{\omega}_{en}^{n})\times\delta\boldsymbol{v}\\
\dot{\boldsymbol{\phi}} &= -(\boldsymbol{\omega}_{ie}^{n} + \boldsymbol{\omega}_{en}^{n})\times\boldsymbol{\phi} - C_b^{n}\,\delta\boldsymbol{\omega}^{b}
\end{aligned}$$

where $\delta\boldsymbol{p}$, $\delta\boldsymbol{v}$, and $\boldsymbol{\phi}$ are the error vectors of position, velocity, and attitude, respectively; $\delta\boldsymbol{f}^{b}$ and $\delta\boldsymbol{\omega}^{b}$ are the error vectors of the accelerometer and gyroscope of the IMU, respectively; $\boldsymbol{f}^{n}$ is the specific force, $\delta\boldsymbol{g}$ is the gravity uncertainty, $\boldsymbol{\omega}_{ie}^{n}$ is the rotation rate of the Earth, and $\boldsymbol{\omega}_{en}^{n}$ is the craft rate.
In this research, a state vector with 15 states for this INS error model is constructed as:

$$\boldsymbol{x} = \big[\,(\delta\boldsymbol{p}^{n})^{T}\;\; (\delta\boldsymbol{v}^{n})^{T}\;\; (\boldsymbol{\phi}^{n})^{T}\;\; (\delta\boldsymbol{f}^{b})^{T}\;\; (\delta\boldsymbol{\omega}^{b})^{T}\,\big]^{T}$$

where the superscript n means the n-frame and T represents the transpose. An error propagation model works alongside the system motion model to correct the IMU errors. A first-order Taylor series expansion is applied, and the error state vector and state-space model are defined as:

$$\dot{\boldsymbol{x}}(t) = F(t)\,\boldsymbol{x}(t) + G(t)\,\boldsymbol{w}(t)$$

where t means the time step, $F(t)$ is the dynamic matrix, $G(t)$ is a design matrix, $\boldsymbol{w}(t)$ is the forcing vector of white noise, and $G(t)\boldsymbol{w}(t)$ is the driven response of the input white noise. After the error state vector is solved, the predicted position and attitude in the n-frame at time t are obtained.
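As a concrete sketch of this error-state model (the block layout is the standard one; the toy dynamic matrix below only couples position error to velocity error and is not the paper's full F):

```python
import numpy as np

# Sketch of the 15-state INS error-state layout: position, velocity and
# attitude errors plus accelerometer and gyroscope bias errors.
STATE = dict(dpos=slice(0, 3), dvel=slice(3, 6), datt=slice(6, 9),
             acc_bias=slice(9, 12), gyro_bias=slice(12, 15))

def propagate(x, F, dt):
    """First-order discretisation of x_dot = F x: Phi = I + F*dt."""
    return (np.eye(15) + F * dt) @ x

# Toy dynamic matrix: position error is driven by velocity error.
F = np.zeros((15, 15))
F[STATE["dpos"], STATE["dvel"]] = np.eye(3)
```

In the EKF that uses this model, the covariance is propagated with the same transition matrix and the bias states absorb the slowly varying IMU errors.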
2.2. Relative Positioning by INS aiding LiDAR Based on a Window Occupancy Likelihood Map
To achieve the multi-vehicle cooperative positioning solution, the relative position of the vehicles needs to be estimated. Simultaneous localization and mapping (SLAM) based on environmental perception is a widely used method for obtaining accurate three-dimensional relative position information. Existing research on SLAM mostly focuses on the relative position between epochs. Here we draw on the idea of SLAM and propose a method for relative positioning between vehicles based on scan-to-map matching of a window OLM.
Scan-to-map is a front-end matching method in SLAM. It stores historical observation information in the form of grid maps and achieves relative positioning by matching the current scan to the map. Compared with scan-to-scan methods, such as Iterative Closest Point [37], Iterative Closest Line [38] and Polar Scan Matching [39], the scan-to-map method has better robustness. Each vehicle can store a short-term window OLM, which has a small cumulative error, and then use a scan-to-map method between vehicles to obtain the relative position.
Our laser scan-to-map matching method with the OLM is based on Maximum Likelihood Estimation (MLE) and aims to find the optimal rigid-body transformation in the m-frame. More details can be seen in our previous work [40]. The MLE matches the current scan $S_t$ to a grid-based OLM $M_{t-1}$, which is generated from all the previous scans from time step 1 to t−1 and stores the likelihood value of each grid cell in the 3D space region. According to Bayes rules, assuming that the scan points in $S_t$ are independent, the sum likelihood value of $S_t$ is calculated as:

$$P(S_t \mid M_{t-1}) = \sum_{i=1}^{n} p(s_i \mid M_{t-1})$$

where $p(s_i \mid M_{t-1})$ is the probability that the scan point $s_i$ drops at that location according to the map $M_{t-1}$. Due to the dynamic movement of the platform, the l-frame varies over time. To match $S_t$ against $M_{t-1}$, a best rigid-body transformation $T$ is needed to transform $S_t$ from the current l-frame to the m-frame of the OLM, which is calculated by maximizing the likelihood value of $S_t$ according to:

$$T = \arg\max_{T'}\, P\big(T'(S_t) \mid M_{t-1}\big)$$

where $T'(S_t)$ is the set of $S_t$ transformed by the rigid-body transformation $T'$.
To find the optimal transformation, we apply a Gauss-Newton approach to the maximization problem above. Assume the rigid transformation is $\boldsymbol{\xi} = [\,\boldsymbol{t};\ \boldsymbol{\psi}\,]$, where $\boldsymbol{t}$ is the translation and $\boldsymbol{\psi}$ is the attitude angles. Given a starting estimate of $\boldsymbol{\xi}$, we want to estimate $\Delta\boldsymbol{\xi}$, which optimizes the error measurement as:

$$\sum_{i=1}^{n}\big[1 - M\big(s_i(\boldsymbol{\xi} + \Delta\boldsymbol{\xi})\big)\big]^{2} \rightarrow \min$$

where $s_i$ means the ith scan point of the n points and $s_i(\boldsymbol{\xi})$ are the coordinates of the scan points after the transformation $\boldsymbol{\xi}$, which can be expressed as:

$$s_i(\boldsymbol{\xi}) = R(\boldsymbol{\psi})\,s_i^{l} + \boldsymbol{t}$$

where $s_i^{l}$ are the coordinates of the scan point in the l-frame. Solving for $\Delta\boldsymbol{\xi}$ yields the Gauss-Newton equation for the minimization problem:

$$\Delta\boldsymbol{\xi} = H^{-1}\sum_{i=1}^{n}\left[\nabla M\big(s_i(\boldsymbol{\xi})\big)\,\frac{\partial s_i(\boldsymbol{\xi})}{\partial\boldsymbol{\xi}}\right]^{T}\big[1 - M\big(s_i(\boldsymbol{\xi})\big)\big],\qquad
H = \sum_{i=1}^{n}\left[\nabla M\,\frac{\partial s_i}{\partial\boldsymbol{\xi}}\right]^{T}\left[\nabla M\,\frac{\partial s_i}{\partial\boldsymbol{\xi}}\right]$$

where $\nabla M$ is the map gradient. The estimate is then updated as $\boldsymbol{\xi} \leftarrow \boldsymbol{\xi} + \Delta\boldsymbol{\xi}$. The INS modelling of Section 2.1 can also predict a $\boldsymbol{\xi}$. An extended Kalman filter (EKF) model is used to fuse the INS model and the LiDAR scan-to-map matching. The EKF includes two parts: state prediction based on the system model and state update based on the measurements. In our work, the INS acts as the prediction model because the output rate of the IMU is generally higher than that of the LiDAR. When LiDAR observation information is obtained, the state error vector is updated by the EKF and fed back to the INS model to estimate a final $\boldsymbol{\xi}$. Many works have proved the advantages of fusing the INS model and the LiDAR model, and more details can be seen in [41].
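A minimal 2D sketch of one such Gauss-Newton step (translation plus yaw only; `olm_value` and `olm_grad` are hypothetical map-interpolation callbacks, and the paper's method is the 3D analogue):

```python
import numpy as np

def gauss_newton_step(scan_pts, olm_value, olm_grad, xi):
    """One Gauss-Newton update of the pose xi = (tx, ty, yaw) that
    maximises the summed OLM likelihood of the transformed scan.
    olm_value(p) -> likelihood in [0, 1]; olm_grad(p) -> dM/dp (2-vector).
    These map-interpolation callbacks are assumed interfaces."""
    tx, ty, yaw = xi
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    H = np.zeros((3, 3))
    b = np.zeros(3)
    for p in scan_pts:
        q = R @ p + np.array([tx, ty])          # scan point in the m-frame
        g = olm_grad(q)                          # map gradient at q
        # Jacobian of q with respect to (tx, ty, yaw)
        J = np.array([[1, 0, -s * p[0] - c * p[1]],
                      [0, 1,  c * p[0] - s * p[1]]])
        Jg = g @ J                               # chain rule, 3-vector
        r = 1.0 - olm_value(q)                   # residual: 1 - likelihood
        H += np.outer(Jg, Jg)
        b += Jg * r
    # Small ridge keeps H invertible when the scan is degenerate.
    return xi + np.linalg.solve(H + 1e-9 * np.eye(3), b)
```

Iterating this step moves the pose toward the likelihood peak of the map; the ridge term is our own safeguard, not part of the original derivation.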
In the next step, $M_{t-1}$ is updated to $M_t$ by the matched scan using a likelihood value determination method. The new likelihood values of the 27 grid cells in the OLM space around each scan point in 3D are calculated by:

$$p = \exp\!\left(-\frac{d^{2}\big((x_c,\,y_c,\,z_c),\,s_i\big)}{2\sigma^{2}}\right) \qquad (13)$$

where $x_c$, $y_c$, and $z_c$ are the x, y and z coordinates of the center of each grid cell around the scan point, $d(\cdot)$ means distance, and $\sigma$ is the standard deviation of the measurement noise of the LiDAR sensor. The corresponding likelihood value of each grid cell is updated as:

$$p_t = \max\big(p_{t-1},\, p\big) \qquad (14)$$

where $p_{t-1}$ is the grid cell's prior likelihood value in $M_{t-1}$. After all the scan points have updated the grid cells around them as in Equations (13) and (14), the new OLM $M_t$ is generated.
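A sketch of this map update, assuming a Gaussian likelihood of the point-to-cell-centre distance and a keep-the-maximum merge with the prior value (the cell size, noise level, and merge rule here are our own assumptions):

```python
import numpy as np
from itertools import product

def update_olm(olm, scan_pts, cell=0.2, sigma=0.05):
    """Update the likelihoods of the 27 grid cells around each 3D scan
    point. olm: dict mapping integer cell index (i, j, k) -> likelihood.
    sigma models the LiDAR range-noise standard deviation (assumed)."""
    for p in scan_pts:
        base = np.floor(p / cell).astype(int)
        for d in product((-1, 0, 1), repeat=3):   # the 27 surrounding cells
            idx = tuple(base + d)
            centre = (np.array(idx) + 0.5) * cell
            dist = np.linalg.norm(p - centre)
            val = np.exp(-dist ** 2 / (2 * sigma ** 2))
            # Merge with the prior likelihood (keep-the-maximum assumption).
            olm[idx] = max(olm.get(idx, 0.0), val)
    return olm
```

A dict-of-cells map keeps memory proportional to the observed space, which suits the short window OLM used here.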
In our relative positioning, we generate a window OLM that stores the LiDAR information within a time window for each vehicle at each epoch; for example, the window OLM includes the historical scans of the last second. Our m-frame is therefore dynamic, which reduces the accumulated error and the computation. In the following, superscripts 1 and 2 denote vehicle 1 and vehicle 2, respectively, each with its own window OLM and current laser scan.
We apply the scan-to-map matching described above to the current scan of vehicle 2 and the window OLM of vehicle 1 to calculate vehicle 2's pose in the m-frame of vehicle 1. With this pose and vehicle 1's own pose, we can calculate the relative position of vehicle 1 and vehicle 2 in the m-frame of vehicle 1. This relative position is then transformed to the n-frame using the attitude of the origin of the m-frame of vehicle 1 in the n-frame. Similarly, with the current scan of vehicle 1 and the window OLM of vehicle 2, we can calculate the relative position in the n-frame from vehicle 2's side. The mean value of the two relative positions is taken as the final relative position of vehicle 1 and vehicle 2 in the n-frame; the superscript n is omitted for simplicity in the following subsections. The relative position in the w-frame is equal to that in the n-frame, because their axis directions are almost the same within a small region.
The cross matching of LiDAR data from the vehicles yields the relative position between the vehicles. In our work, the scan-to-map matching method is superior to the scan-to-scan method for this cross matching: scan-to-scan requires that the vehicles observe the same features simultaneously, which is difficult in outdoor environments. The strategy of matching one vehicle's laser scan to another vehicle's window OLM makes feature matching more robust; if the OLM and the laser scan share a common feature within the time window, the scan-to-map method can work successfully. Here we use a window OLM to ensure the efficiency and accuracy of the scan-to-map method. It is possible to store all the historical scan information in the OLM, but updating a large map and matching the laser scan against it takes time. Moreover, the OLM accumulates errors over the running distance, so a window OLM with a short time window reduces the cumulative errors and improves the accuracy of relative positioning.
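The averaging of the two cross-matched baselines might be sketched as follows (yaw-only attitudes for brevity; the sign convention for the reverse baseline is our assumption):

```python
import numpy as np

def yaw_rotation(yaw):
    """Rotation about the up axis, taking an m-frame vector to the n-frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def relative_position_n(r12_m1, yaw_m1, r21_m2, yaw_m2):
    """Average the two cross-matched baselines in the n-frame.
    r12_m1: vehicle1 -> vehicle2 in vehicle 1's m-frame; r21_m2 is the
    reverse baseline from vehicle 2's side. yaw_*: yaw of each m-frame
    origin in the n-frame (full attitude in the actual method)."""
    r12_n = yaw_rotation(yaw_m1) @ r12_m1
    r21_n = yaw_rotation(yaw_m2) @ r21_m2
    return 0.5 * (r12_n - r21_n)   # the reverse baseline points the other way
```

Averaging the forward and reverse baselines suppresses matching noise that is independent between the two scan-to-map solutions.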
2.3. Relative Ambiguity Resolution with Relative Position Constraint
After the relative position between vehicle 1 and vehicle 2 is determined, we integrate this constraint with the carrier phase measurements of GNSS by the RTK method. It is noted that in this subsection we assume that one of vehicle 1 and vehicle 2 is a moving reference receiver and the other is a rover receiver in our RTK method. An accurate absolute position of the reference receiver is unnecessary; their relative position is of more interest. The effect of the reference receiver's absolute positioning error on the baseline solution is analyzed in [33]: if the reference receiver has a five-kilometer positioning error using GPS standalone positioning, the maximum resulting relative positioning error on a 500 m baseline is about 10⁻⁸ m, which is negligible.
The carrier phase measurement is derived from counting the carrier phase cycles of the incoming carrier. In a general form, the DD pseudorange and carrier phase measurement model of RTK can be written as:

$$\begin{aligned}
\nabla\Delta P_{rs}^{ij} &= \nabla\Delta\rho_{rs}^{ij} + \nabla\Delta O_{rs}^{ij} + \nabla\Delta T_{rs}^{ij} + \nabla\Delta I_{rs}^{ij} + \varepsilon_{P}\\
\lambda\,\nabla\Delta\varphi_{rs}^{ij} &= \nabla\Delta\rho_{rs}^{ij} + \nabla\Delta O_{rs}^{ij} + \nabla\Delta T_{rs}^{ij} - \nabla\Delta I_{rs}^{ij} + \lambda\,\nabla\Delta N_{rs}^{ij} + \varepsilon_{\varphi}
\end{aligned} \qquad (19)$$

where the $\nabla\Delta$ notation refers to double differencing, $P$ and $\varphi$ are the pseudorange observable and the carrier phase measurement respectively, the subscripts r and s mean the rover receiver and the reference receiver respectively, the superscripts i and j mean the ith and jth satellites, $\lambda$ refers to the carrier wavelength, $\rho$ is the geometrical range between the satellite and the receiver, $O$ refers to the orbital error of the satellite, $T$ is the tropospheric error, $I$ is the ionospheric error, $N$ is the integer carrier ambiguity in cycles, and $\varepsilon$ represents the measurement error. DD can mitigate these spatially correlated errors: the residual DD orbital, tropospheric, and ionospheric errors are only a few millimeters over a one-kilometer baseline, so they are neglected in our work since we target short-range positioning applications. The remaining $\varepsilon$ includes the residuals of the multipath error and observation error, which are assumed to be white noise and are considered in the stochastic model. Equation (19) is simplified to:

$$\begin{aligned}
\nabla\Delta P_{rs}^{ij} &= \nabla\Delta\rho_{rs}^{ij} + \varepsilon_{P}\\
\lambda\,\nabla\Delta\varphi_{rs}^{ij} &= \nabla\Delta\rho_{rs}^{ij} + \lambda\,\nabla\Delta N_{rs}^{ij} + \varepsilon_{\varphi}
\end{aligned} \qquad (20)$$
which can be linearized as:

$$y_{rs}^{ij} = \big[(l^{j}-l^{i})\;\;(m^{j}-m^{i})\;\;(n^{j}-n^{i})\big]\begin{bmatrix}\delta x\\ \delta y\\ \delta z\end{bmatrix} + \lambda\,\nabla\Delta N_{rs}^{ij} \qquad (21)$$

where the superscript j means the reference satellite in DD and we assume that either vehicle 1 or vehicle 2 is the reference station in our method, $\delta x$, $\delta y$ and $\delta z$ are the baseline corrections in the e-frame, the symbols l, m, and n are the components of the unit vectors on the line-of-sight (LOS) from the receiver to the satellites, and the misclosure $y_{rs}^{ij}$ is expressed as:

$$y_{rs}^{ij} = \lambda\,\nabla\Delta\varphi_{rs}^{ij} - \nabla\Delta\rho_{rs}^{ij,0} \qquad (22)$$

where $\nabla\Delta\rho_{rs}^{ij,0}$ is the DD geometric range computed at the initial positions. According to the initial positions of the reference station and the rover in the e-frame, Equation (21) can be transformed to the w-frame:

$$y_{rs}^{ij} = \big[(l^{j}-l^{i})\;\;(m^{j}-m^{i})\;\;(n^{j}-n^{i})\big]\,C_{w}^{e}\,\delta\boldsymbol{b}^{w} + \lambda\,\nabla\Delta N_{rs}^{ij} \qquad (23)$$

where $\delta\boldsymbol{b}^{w}$ is the baseline correction in the w-frame. Assuming that the receivers of vehicle 1 and vehicle 2 have n common satellites with the reference receiver, we can construct n−1 equations like Equation (23).
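Forming a DD observable from raw measurements can be sketched as follows (hypothetical satellite IDs; the differencing order is receiver first, then satellite):

```python
def double_difference(obs_rover, obs_ref, sat_i, sat_ref):
    """DD observable: between-receiver then between-satellite difference.
    Receiver and satellite clock terms cancel exactly, and atmospheric
    errors largely cancel on short baselines. obs_*: dict of satellite
    ID -> raw observable (metres)."""
    sd_i = obs_rover[sat_i] - obs_ref[sat_i]          # single difference
    sd_ref = obs_rover[sat_ref] - obs_ref[sat_ref]
    return sd_i - sd_ref                              # double difference
```

Because the receiver clock bias appears identically in both single differences, it drops out of the DD observable regardless of its size.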
Through the LiDAR scan-to-map matching described in the above subsection, the accurate relative position between the connected vehicles in the w-frame is calculated, and we introduce this information into our least-squares adjustment to improve the accuracy by a Damped LAMBDA method. A prior weight matrix $P_b$ for the baseline components is used in the normal equation of least-squares adjustment:

$$\begin{bmatrix} A^{T}PA + P_b & A^{T}PB \\ B^{T}PA & B^{T}PB \end{bmatrix} \begin{bmatrix} \delta\boldsymbol{b} \\ \nabla\Delta\boldsymbol{N} \end{bmatrix} = \begin{bmatrix} A^{T}P\boldsymbol{y} + P_b\,\delta\boldsymbol{b}_0 \\ B^{T}P\boldsymbol{y} \end{bmatrix} \qquad (24)$$

where $\delta\boldsymbol{b}$ is the baseline components and $\nabla\Delta\boldsymbol{N}$ is the DD ambiguities in n−1 dimensions, A and B are the coefficient matrices for $\delta\boldsymbol{b}$ and $\nabla\Delta\boldsymbol{N}$ according to Equation (23), $\boldsymbol{y}$ includes all the DD misclosures in n−1 dimensions, $\delta\boldsymbol{b}_0$ is the baseline correction given by the LiDAR relative position, and P is the weight matrix of the DD measurements in the least-squares. Supposing that the standard deviations (STD) of the original pseudorange and carrier phase observables of each satellite are the same, which are $\sigma_P$ and $\sigma_\varphi$ respectively, then P is:

$$P = \sigma_0^{2}\,Q^{-1}, \qquad Q = 2\sigma^{2}\big(I_{n-1} + \boldsymbol{1}\,\boldsymbol{1}^{T}\big) \qquad (25)$$

where $\sigma$ means $\sigma_P$ or $\sigma_\varphi$, and $\sigma_0^{2}$ is the variance of unit weight. The components of the LiDAR relative position are calculated by scan-to-map matching, and their precision is known as $\sigma_x$, $\sigma_y$ and $\sigma_z$. $P_b$ is expressed as:

$$P_b = \sigma_0^{2}\,\mathrm{diag}\big(\sigma_x^{-2},\ \sigma_y^{-2},\ \sigma_z^{-2}\big) \qquad (26)$$

Then according to the least squares, we can get the real-valued solutions of the DD carrier phase ambiguities and their covariance matrix. Based on the float solution, the integer ambiguities are estimated afterwards by the LAMBDA method, which defines a search space from the information derived from the covariance matrix and searches for the integer ambiguities within this space. A main feature of the LAMBDA method is that it includes a decorrelation procedure to transform the original float ambiguity estimates to a new set of ambiguities while preserving their integer nature. The detailed implementation of the LAMBDA method can be seen in [10].
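The damped normal equations can be sketched as follows (shapes and names are our own; a real implementation would follow this float solution with the LAMBDA search):

```python
import numpy as np

def damped_float_solution(A, B, P, y, b0, Pb):
    """Float solution of [baseline; DD ambiguities] with a prior weight
    on the baseline from the LiDAR relative position (the 'damping').
    A: m x 3 and B: m x k design matrices; P: m x m observation weights;
    y: misclosure vector; b0: prior baseline correction; Pb: 3 x 3 prior
    weight. Returns baseline, float ambiguities, and their covariance."""
    G = np.hstack([A, B])
    N = G.T @ P @ G
    u = G.T @ P @ y
    N[:3, :3] += Pb              # damp the baseline block with the prior
    u[:3] += Pb @ b0
    x = np.linalg.solve(N, u)
    return x[:3], x[3:], np.linalg.inv(N)
```

A strong prior pins the baseline near the LiDAR value, which sharpens the float ambiguities and shrinks the LAMBDA search space.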
2.4. Joint RTK Algorithm for Connected Vehicles
In this part, both the vehicle 1 and vehicle 2 are rover receivers and one RTK reference station is chosen. DD observation equations are formed for all visible satellites for vehicle 1 and reference station and those for vehicle 2 and reference station to jointly solve their absolute position estimations. The relative position and relative DD ambiguities described in the above subsections are considered as constraints in our joint RTK.
Figure 3 illustrates the GNSS satellites shown in
Figure 2, the relative position, and the relative ambiguity between the connected vehicles. It is noted that the numbers and distribution of satellites in
Figure 2 and
Figure 3 are for illustration. In the real-world practice the numbers and distribution of satellites can be different.
Supposing that satellite 4 is our reference satellite, the DD pseudorange and carrier phase observation equations in the e-frame are formed for vehicle 1 and vehicle 2 as Equation (27), where the subscript b means the RTK reference station. Considering the relative position between the vehicles in the w-frame, Equation (27) is reformulated in the w-frame as Equation (28), in which the initial positions of vehicle 1 and vehicle 2 in the w-frame serve as the linearization points. Then, we apply constrained indirect least squares to Equation (28) with the relative ambiguity constraints of Equation (29), which state that the difference between the two vehicles' DD ambiguities equals the relative ambiguity solved in the above subsection. Equation (29) is formed into matrix shape, and Equation (24) is extended with a Lagrange multiplier accordingly. Following the rules of constrained indirect least squares, the float solutions of the absolute positions, together with their variance-covariance matrix, and the float carrier phase ambiguities for both vehicle 1 and vehicle 2 are calculated. The LAMBDA method is utilized again to fix all the ambiguities. The float position solutions are corrected by the fixed ambiguities as:

$$\check{\boldsymbol{b}} = \hat{\boldsymbol{b}} - Q_{\hat{b}\hat{N}}\,Q_{\hat{N}\hat{N}}^{-1}\,(\hat{\boldsymbol{N}} - \check{\boldsymbol{N}}) \qquad (31)$$

where $\check{\boldsymbol{b}}$ and $\check{\boldsymbol{N}}$ mean the fixed solutions of the absolute positions and ambiguities, $\hat{\boldsymbol{b}}$ and $\hat{\boldsymbol{N}}$ are the float solutions, and $Q_{\hat{b}\hat{N}}$ and $Q_{\hat{N}\hat{N}}$ are the corresponding blocks of the float variance-covariance matrix.
In order to validate the fixed ambiguity, the ratio test is used, and the success rate, which is an output of the LAMBDA method, is used to assess the probability of a correct position solution. The ratio test decides whether the integer ambiguity solution is sufficiently more likely than any other integer candidate. Let $\check{\boldsymbol{a}}$ be the integer minimizer of $\|\hat{\boldsymbol{a}}-\boldsymbol{a}\|_{Q_{\hat{a}}}^{2}$ and $\check{\boldsymbol{a}}'$ be the integer candidate that returns the second smallest value; the ratio is then computed as $\|\hat{\boldsymbol{a}}-\check{\boldsymbol{a}}'\|_{Q_{\hat{a}}}^{2} / \|\hat{\boldsymbol{a}}-\check{\boldsymbol{a}}\|_{Q_{\hat{a}}}^{2}$. Only if the ratio is sufficiently large are the fixed solutions accepted; otherwise, the fixed solutions are rejected in favor of the float solutions. The success rate of AR is determined by the strength of the underlying GNSS model; the stronger the model, the higher the success rate [42].
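The ratio test can be sketched as follows (in practice the two best candidates come directly from the LAMBDA search rather than an explicit candidate list):

```python
import numpy as np

def ratio_test(a_float, Q, candidates, threshold=3.0):
    """Accept the best integer candidate only if the runner-up's weighted
    residual is sufficiently larger. a_float: float ambiguities; Q: their
    covariance; candidates: iterable of integer tuples."""
    Qinv = np.linalg.inv(Q)

    def quad(z):
        d = a_float - np.asarray(z, dtype=float)
        return float(d @ Qinv @ d)

    best, second = sorted(candidates, key=quad)[:2]
    ratio = quad(second) / quad(best)
    # Reject the fix (fall back to the float solution) below the threshold.
    return (best if ratio >= threshold else None), ratio
```

The threshold of 3 matches the value used in the experiments of this work; other values trade fix availability against reliability.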
3. Experimental Results
To evaluate our methods, we conducted field experiments. Our vehicles are shown in Figure 4 in red frames; GNSS and LiDAR are mounted on each of the vehicles, and the equipment types we used are shown in Figure 4. Table 2 shows the parameters of the two types of IMU. For both vehicles, GNSS runs at 1 Hz and LiDAR runs at 10 Hz. The IMU of the Honeywell device in the white vehicle (denoted as "vehicle 1" below) runs at 600 Hz, and the IMU of the Novatel SPAN device in the blue vehicle (denoted as "vehicle 2" below) runs at 100 Hz. The frequency of our ambiguity resolutions and position solutions follows that of GNSS, at 1 Hz. Experiments were done in an open area (shown in Figure 4a) and below a viaduct (shown in Figure 4b). The vehicles below the viaduct can share complementary GNSS observations, which is suitable for evaluating our method. However, it is hard to obtain continually accurate reference positions below the viaduct, which makes it difficult to evaluate the performance of our integrated system. Therefore, we only analyze the geometrical distribution of satellites above the vehicles' horizon in the experiments below the viaduct, and then choose measurements from the same satellites collected in the open area to simulate the observation conditions below the viaduct; the RTK results in the open area are taken as the ground truth of the absolute positions of the two vehicles.
Figure 5 shows the distributions of satellites for the two vehicles in the open area and below the viaduct. In our work, G19 and C22 are the reference satellites for GPS and BDS, respectively. The viaduct in Figure 4 is oriented directly south to north, and the two vehicles move on both sides of the road below the viaduct. Because the viaduct blocks satellite signals from some angles of arrival, vehicle 1 in Figure 5b cannot receive satellite signals when the elevation angle is less than 60 degrees and the azimuth is from 15 degrees to 165 degrees, and vehicle 2 in Figure 5d cannot receive satellite signals when the elevation angle is less than 60 degrees and the azimuth is between 195 degrees and 345 degrees. From Figure 5b,d we can see that they have seven common satellites, including the two reference satellites, at this epoch. Over the whole duration of our experiments below the viaduct, the number of common nonreference satellites of vehicle 1 and vehicle 2 varies from five to eight; the lower the number, the more difficult it generally is to solve the relative position and ambiguities between the vehicles. Figure 5f shows all the visible satellites for vehicle 1 and vehicle 2, which combines Figure 5b and Figure 5d and makes sufficient satellites available in our joint RTK solution.
We first evaluate the performance of the LiDAR scan-to-map matching method by comparing the relative position between the vehicles calculated by this method with the ground truth. In our work, we did not use a scan-to-scan matching method, because scan-to-scan requires that the two vehicles observe the same features simultaneously, which is difficult in practice. To improve the availability and reliability of the feature matching, the environment information observed in the last second was stored in the OLM; if the two vehicles observed the same features within the last second, they could be matched successfully. It was possible to store all the historical environment information in the OLM, but for efficiency of feature matching, a window OLM was generated using the LiDAR data of the previous 1 s, which included ten laser scans as the frequency of the LiDAR sensors was 10 Hz. Moreover, the window OLM helped keep the precision of relative positioning, because the OLM accumulates errors over distance and a short time window reduces the cumulative error.
Figure 6 shows the LiDAR point clouds of the window OLM of vehicle 1 and the current laser scan of vehicle 2 at one epoch. As it is hard to visualize the three-dimensional OLM, the LiDAR point clouds of the window OLM are shown in the figure. We can see that the vehicle 1 travelled on the right side of the viaduct piers and the vehicle 2 travelled on the left side of the viaduct piers. The current laser scan of vehicle 2 could be matched successfully to the window OLM of vehicle 1, as they had common features of the viaduct piers.
Figure 7 shows the time series of the bias of relative positioning in the n-frame in the east, north, and up directions. From Figure 7 we can see that the relative positioning is highly accurate, with a bias of a few centimeters in the east, north and up directions, which is beneficial to the high-precision absolute positioning of the two connected vehicles. It is noted that in our work the precision of the vehicles' yaw angles is about 0.1 degrees and our baseline length is about 20 m, so the yaw angle error might cause a relative positioning error of about 2 cm.
Regarding the relative ambiguity resolution described in subsection C above, we compare the results of using the relative position constraint from LiDAR scan-to-map matching (denoted as "with LiDAR" in Figure 8) with those of not using this constraint (denoted as "without LiDAR" in Figure 8).
Figure 8 shows the ratio values and success rate of the fixed relative ambiguities and the number of common nonreference satellites for vehicle 1 and vehicle 2 in four cases: in the open area without LiDAR, below the viaduct without LiDAR, in the open area with LiDAR, and below the viaduct with LiDAR. From Figure 8 we can see that in the open area the vehicles share more common satellites, so the success rate values of the two open-area cases (the black and green lines in Figure 8) are high and close to 1. However, the ratio values of the open-area case without the LiDAR constraint are much smaller than those of the case with it, which means that the LiDAR constraint helps fix the ambiguities and makes AR more reliable, and that even in the open area with sufficient available satellites this constraint largely increases the ratio values. This is reasonable: without the relative position constraint, the vehicles' predicted positions are calculated from GNSS pseudorange measurements, which contain large errors, so the float solutions of the relative ambiguities may be unreliable and the ratio values small.
When the vehicles move below the viaduct, the number of their visible common satellites is halved. From the blue and red lines in Figure 8 we can see that when the relative position constraint is considered, both the success rate and the ratio values increase significantly. The mean ratio value of the case below the viaduct without the LiDAR constraint is only 0.23. In our work, the threshold of the ratio test is set to 3; therefore, in this case the relative ambiguities are not fixed correctly at most epochs. For the case with the LiDAR constraint, the mean ratio value is 14 and the ratio values of all epochs are larger than 3. In the first few epochs, when only five common nonreference satellites are available, the success rate is about 0.8 for the case with the LiDAR constraint and about 0.02 for the case without it. These results prove that our relative positioning method can largely improve AR even when the connected vehicles share only a few common satellites.
Figure 9 shows the ratio values and success rate of AR for absolute positioning by our joint RTK algorithm. Six cases are tested and analyzed in this part: joint RTK without the relative position constraint or the relative ambiguity constraint in the open area, joint RTK with the relative position constraint but without the relative ambiguity constraint in the open area, joint RTK with both constraints in the open area, and the same three settings below the viaduct instead of in the open area. From Figure 9, we can see that in the open area with sufficient visible satellites (shown in Figure 5e), the success rate increases significantly if either constraint is considered, especially in the first few epochs when the number of visible satellites is relatively small. The success rate is 1 for most epochs when both constraints are considered, slightly higher than that of the case with only the relative position constraint. The ratio values of the three open-area cases also differ clearly: the case with both constraints obtains the largest ratio values, the case with only the relative position constraint obtains intermediate values, and the case with neither constraint obtains the smallest values. The three cases below the viaduct (right columns in Figure 9) show a similar pattern. This phenomenon is reasonable. In absolute positioning, when the relative position and relative ambiguity constraints are considered, the accurate correlation among the relative ambiguities of the common satellites of the two vehicles helps establish a correlation of certain precision for the non-common satellites when forming double differences (DD), which improves the geometric conditions of the observation equation. The two constraints also improve the accuracy of the float solution. These advantages improve the precision of the fixed ambiguities.
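The tightening effect of an inter-vehicle constraint on the float solution can be illustrated with a toy weighted least-squares problem in which the LiDAR baseline enters as a high-weight pseudo-observation. This is a one-dimensional sketch under assumed values, not the paper's Equation (23) or (28).

```python
import numpy as np

def wls(A, y, W):
    """Weighted least squares: minimize (y - A x)^T W (y - A x).
    Returns the estimate and its covariance (inverse normal matrix)."""
    N = A.T @ W @ A
    return np.linalg.solve(N, A.T @ W @ y), np.linalg.inv(N)

# Toy example: estimate 1-D positions x1, x2 of two vehicles.
# GNSS-only rows observe each position independently with meter-level noise.
A_gnss = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
y_gnss = np.array([10.3, 30.4])      # noisy absolute observations (assumed)
W_gnss = np.diag([1.0, 1.0])         # low weight: meter-level noise

x_free, P_free = wls(A_gnss, y_gnss, W_gnss)

# Add the LiDAR baseline x2 - x1 = 20.0 m as a high-weight pseudo-observation.
A = np.vstack([A_gnss, [-1.0, 1.0]])
y = np.append(y_gnss, 20.0)
W = np.diag([1.0, 1.0, 1e4])         # high weight: centimeter-level noise

x_con, P_con = wls(A, y, W)

# The constrained covariance is smaller: the float solution is tightened,
# and the estimated baseline snaps to the precise LiDAR value.
print(np.trace(P_con) < np.trace(P_free))   # True
```

The same mechanism operates in the paper's joint RTK model: the precise relative position couples the two vehicles' parameters, shrinking the float covariance and thereby raising the ratio values and success rate of AR.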
Once reliably fixed, the integer ambiguities turn the carrier phase observables into precise pseudoranges, and the fixed positioning solution is then obtained. The biases between the fixed positioning solutions and the ground truth in the east, north and up directions are plotted in Figure 10 for vehicle 1 and in Figure 11 for vehicle 2. In these figures there are spikes with large biases; these spikes indicate epochs at which the ambiguities are not fixed (the ratio is less than 3) and the incorrect float solutions have large biases. In the open area, the case with neither constraint obtains the worst positioning results, while the case with only the relative position constraint and the case with both constraints have nearly equal biases in the three directions. This is because, as shown in Figure 9, both of these cases achieve a high success rate and all their ratio values are larger than 3, which means the fixed ambiguities are reliable and produce high-precision positioning solutions. For most epochs, these cases achieve centimeter-level precision. Below the viaduct, the case with neither constraint still obtains the worst positioning results. At certain epochs the case with only the relative position constraint yields worse positions than the case with both constraints, but at most epochs both cases achieve centimeter-level precision in the east and north directions, while the unconstrained case exhibits very large biases. To compare the performance of our joint RTK method with single positioning by RTK, we conducted a single positioning experiment for vehicle 1; the results are shown in Figure 12. As expected, the results of single positioning by RTK are quite similar to those of the unconstrained case in Figure 10.
Table 3 shows the statistical positioning precisions and fixed rates of these six cases and of single positioning for the two vehicles. The fixed rates in Table 3 prove that our approach largely increases the fixed rates below the viaduct, where the satellite observation conditions are worse.
4. Discussion
Connected vehicles build a network in which vehicles can share information and sensor data. Compared with a single vehicle, the observation information available from the network is greatly increased, and the positioning accuracy of the connected vehicles can be significantly improved. An important prerequisite of our approach is a precise relative position between the connected vehicles. Even though our OLM-based matching method does not require that the vehicles observe the same environmental features at the same time, the vehicles are still required to observe the same features within a time window. In future work, the geometric shapes of features in the environment, such as cylindrical and rectangular buildings, will be considered, so that even if the vehicles observe different parts of the features, the scan-to-map matching method in our approach can still work via geometric constraints.
The relative position between the vehicles must be transformed from the m-frame to the w-frame, which requires the attitude of the LiDAR sensor calculated by the INS. In our experiments, IMUs with high-grade gyroscopes were used; therefore, the attitude error accumulated slowly and could be neglected. In our work, the precision of the vehicles' yaw angles is about 0.1 degree and the baseline length is about 20 m, so the yaw angle error might cause a relative positioning error of about 2 cm. In future work, we will test our method further with a lower-grade IMU.
Our cooperative GNSS-RTK ambiguity resolution model for two connected vehicles is relatively simple, and a more complete model will be investigated in the future to address multi-vehicle networking environments. Relative positions will be calculated by our INS-aided LiDAR OLM method for multiple vehicles in the network, which will increase the computational cost. Our relative ambiguity resolution method with the relative position constraint and our joint RTK method can easily be extended to a multi-vehicle networking environment, but the number of parameters grows with the number of vehicles. Therefore, if there are large numbers of vehicles in the network, solving Equation (23) and Equation (28) will become difficult. We will also focus on this issue in the future.
5. Conclusions
In this study, we proposed a cooperative GNSS-RTK AR method that shares sensor data between connected vehicles. A LiDAR scan-to-map matching method based on a window OLM was applied to the cross matching of LiDAR data from the vehicles, from which the precise relative position between the vehicles can be calculated. With this relative position constraint, a relative ambiguity resolution is proposed for the common satellites of the vehicles. Finally, a joint RTK method is applied to fuse the GNSS measurements from the connected vehicles and their relative ambiguities. Compared with the standard RTK AR method for a single vehicle, our approach improves the performance of RTK AR in terms of ratio values, success rate and fixed rate, and improves the positioning accuracy of the vehicles. In particular, even in a highly occluded environment, our method can still obtain satisfactory results when the vehicles share complementary GNSS observations. We investigated the performance of our approach under an open sky and below a viaduct, a typical urban occlusion environment. The following conclusions can be drawn from the experimental results.
(1) The proposed cross matching method based on the window OLM can calculate a high-precision relative position between the connected vehicles, with a bias of a few centimeters in the east, north and up directions. Because this method does not require the vehicles to observe the same features simultaneously, it is more reliable and robust.
(2) This baseline constraint helps fix the relative ambiguities by improving the success rate and ratio values. The approach works well even when the connected vehicles share only a few common satellites.
(3) In absolute positioning, by taking advantage of the inter-vehicle position and ambiguity constraints, our joint RTK algorithm improves the geometric conditions of the GNSS RTK observation equation in terms of ratio values, success rate and fixed rate, so that the absolute positioning precision is improved for both vehicles. With our method, both vehicles achieved centimeter-level position accuracy with a 98.61% fixed rate in the open area, while the standard GPS+BDS RTK obtained decimeter-level accuracy with a 95.83% fixed rate. In the highly occluded environment below the viaduct, the vehicles still achieved centimeter-level accuracy with a fixed rate of 97.22%, compared with decimeter-level accuracy and a fixed rate of 35.42% for the standard GPS+BDS RTK method.