Vision-Aided Inertial Navigation for Precise Planetary Landing: Analysis and Experiments

Anastasios I. Mourikis, Nikolas Trawny, Stergios I. Roumeliotis
Dept. of Computer Science & Engineering, University of Minnesota, Minneapolis, MN 55455
Email: {mourikis|trawny|stergios}@cs.umn.edu

Andrew Johnson and Larry Matthies
Jet Propulsion Laboratory, Pasadena, CA 91125
Email: {aej|lhm}@jpl.nasa.gov

Abstract—In this paper, we present the analysis and experimental validation of a vision-aided inertial navigation algorithm for planetary landing applications. The system employs tight integration of inertial and visual feature measurements to compute accurate estimates of the lander's terrain-relative position, attitude, and velocity in real time. Two types of features are considered: mapped landmarks, i.e., features whose global 3D positions can be determined from a surface map, and opportunistic features, i.e., features that can be tracked in consecutive images, but whose 3D positions are not known. Both types of features are processed in an extended Kalman filter (EKF) estimator and are optimally fused with measurements from an inertial measurement unit (IMU). Results from a sounding rocket test, covering the dynamic profile of typical planetary landing scenarios, show estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results vastly improve current state of the art for non-vision based EDL navigation, and meet the requirements of future planetary exploration missions.

Fig. 1. Sounding Rocket Experiment (3D flight trajectory, altitude vs. longitude/latitude, with launch, apogee, drogue chute deploy, main chute deploy, and touchdown marked).

I. INTRODUCTION

Space missions involving Entry, Descent and Landing (EDL) maneuvers require high-accuracy position and attitude (pose) determination to enable precise trajectory control. On solar system bodies other than Earth, this is challenging due to the absence of artificial navigation aids such as GPS or radio-beacons. To date, robotic lander missions have used integration of acceleration and rotational velocity measurements from an inertial measurement unit (IMU), augmented by velocity and altitude information from Doppler radar. In these cases, integration of noise and biases, as well as errors in initialization, results in relatively large errors in the touchdown position estimate (e.g., for Mars Pathfinder and the Mars Exploration Rovers in the order of 100 km). However, several future planetary exploration missions will require meter-level landing accuracy, to facilitate the study of geological features of scientific interest [1]. In the absence of radiometric navigation beacons, the most attractive option for increasing the navigation accuracy during EDL is to use camera measurements. Cameras operate in almost any environment, are small, lightweight, and consume little power, and are therefore ideal sensors for space applications.

The Extended Kalman Filter (EKF)-based estimation algorithm presented in this paper combines inertial measurements from an IMU with camera observations of two types of features: (i) Mapped Landmarks (MLs), i.e., features whose global 3D coordinates can be determined from a map of the landing site [2], and (ii) Opportunistic Features (OFs), i.e., features that can be reliably detected in image sequences, but not in a map of the planet's surface [3]. We note that the term "map" refers to a co-registered pair of a digital elevation map (DEM) and satellite image, which is available a priori.

For precise vision-based EDL, observations of MLs are necessary, since they provide absolute pose information. However, due to the finite resolution of satellite imagery, MLs cannot typically be extracted during the last stages of the descent, when the lander is very close to the ground. To address this problem, in our work we employ OFs for improving the accuracy of pose estimation in these stages. Even though OFs cannot provide absolute pose information, they can be thought of as providing velocity information. Intuitively, viewing a static feature from multiple camera poses provides geometric constraints involving all these poses. The proposed EKF-based estimation algorithm (cf. Section IV-E) utilizes all the constraints from the OF and ML measurements optimally, while maintaining computational complexity only linear in the number of features. These characteristics enable high-accuracy pose estimation in real time. The results from a sounding-rocket experiment (cf. Fig. 1), which are presented in Section V, demonstrate the filter's robustness and superior accuracy compared to existing state-of-the-art EDL navigation algorithms, which do not utilize vision information.
II. RELATED WORK

Only a few recursive estimation approaches that utilize measurements of a priori known features have been proposed in the literature. In [4], a statistical (zero-acceleration) model is employed for propagating the pose estimate between ML observations. However, the use of a statistical model (rather than inertial measurements) limits the applicability of such an approach to maneuvers with slow dynamics that occur, for example, during spacecraft rendezvous and docking. In [5], [6], inertial measurements are fused with observations of artificial rectangular targets, and with heading measurements from a magnetometer. In their work, the authors use measurements both of the coordinates of a target's projection and of the area of this projection. Area measurements, though, may be difficult or unreliable when dealing with real planetary imagery, where visual features are less structured. Finally, in [7] inertial measurements are fused with bearing measurements to MLs, but the spacecraft's attitude is assumed to be perfectly known, which is not a valid assumption for EDL.

In addition to processing measurements to MLs, in this work we also utilize measurements to features whose position in the global frame is not known in advance (OFs). The standard method of treating such features is to include their positions in the state vector, and to estimate them along with the camera trajectory. This is the well-known Simultaneous Localization and Mapping (SLAM) problem, for which numerous approaches that employ vision and inertial sensing have recently been proposed (e.g., [8] and references therein). However, the need to maintain the landmark estimates in SLAM results in increased computational complexity (quadratic in the number of features for EKF-SLAM). Moreover, the main benefit of performing SLAM is the ability to achieve "loop closing" when revisiting an area. Due to the nature of EDL trajectories, loop closing is not an important consideration, and thus the quadratic computational complexity of SLAM does not appear to be justified in the context of EDL. In contrast, our algorithm attains complexity only linear in the number of OFs.

In our work, OF measurements are employed for imposing constraints between multiple camera poses. In a similar spirit, in [7], [9], a history of only the latest two camera poses is considered, and visual measurements are utilized for imposing constraints between them. Moreover, pairwise relative-pose constraints are employed for pose estimation in several approaches that maintain a history of multiple camera poses (e.g., [10] and references therein). Contrary to that, the proposed algorithm does not use the measurements for deriving relative-pose estimates. This reduces the computational burden, avoids possible correlations between the displacement measurements [11], and is more robust to non-linearities. Finally, a sliding window of poses is also maintained in the VSDF algorithm [12]. However, the VSDF is tailored for cases where no motion model is available (in EDL the IMU measurements provide such a model), and its computational complexity is at least quadratic in the number of features.

Fig. 2. ML algorithm concept: by matching templates between descent images and a visual map of the landing site, the algorithm produces measurements of the image projections of features with known 3D coordinates (i.e., MLs).

III. IMAGE PROCESSING

Several techniques can be applied for feature extraction in planetary landing applications. For example, Cheng et al. [13] propose using craters as landmarks for navigation. Craters are abundant on most bodies of interest in our solar system, and this makes them a very useful feature. However, there exist sites (e.g., in the polar regions of Mars) where craters are not present. In this case, more general feature types can be used, such as SIFT keypoints [14] and Harris corners [15]. In our work, the image processing module relies on Harris corners and normalized correlation, because (i) corner features can be extracted more efficiently than SIFT keys, since they do not require a search over image scale, (ii) Harris corners have been shown to be more robust to illumination changes than SIFT [16], and (iii) image correlation is a field-tested technology, which has already been employed in EDL applications [17].

The ML algorithm applies normalized correlation to match templates from the descent images to the map (cf. Fig. 2). Each selected template is warped prior to correlation, so as to have the same scale and orientation as the map. This enables us to reduce matching to a 2-D search, which can be carried out very efficiently. We note that the size of the search windows for correlation depends on the accuracy of the camera pose estimate. When the camera pose is very uncertain (e.g., when the first images are recorded), the search windows are very large, and directly applying correlation search for all features is computationally expensive. In that case, an FFT map-matching step is employed prior to ML matching. During this step, a rough alignment of the image with the map is obtained, by frequency-domain correlation of a single large template. Using this initial estimate, the dimensions of the search areas for ML matching are significantly reduced. For OF tracking, we perform pairwise correlation matching between images. To reduce the effects of the changing camera viewpoint, the homography that is induced by the camera motion between consecutive images is accounted for during correlation. For a detailed description of the image-processing algorithms, the reader is referred to [18].
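To make the 2-D correlation search concrete, the following Python sketch performs a plain normalized cross-correlation search of one warped template over a window of the map. It is an illustrative stand-in for the flight implementation described above, not the actual JPL code: the function name `ncc_match`, the array-based interface, and the assumption that the template has already been warped to the map's scale and orientation are all ours.

```python
import numpy as np

def ncc_match(map_img, template, predicted_tl, search_radius):
    """Exhaustively search a (2*search_radius+1)^2 window of `map_img` around
    the predicted top-left corner `predicted_tl` = (row, col) for the shift
    that maximizes normalized cross-correlation with `template`.
    Both arrays are 2-D grayscale (float)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_loc = -np.inf, None
    r0, c0 = predicted_tl
    for r in range(r0 - search_radius, r0 + search_radius + 1):
        for c in range(c0 - search_radius, c0 + search_radius + 1):
            if r < 0 or c < 0 or r + th > map_img.shape[0] or c + tw > map_img.shape[1]:
                continue  # window falls outside the map
            patch = map_img[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom == 0:
                continue  # flat patch or flat template
            score = float(np.sum(p * t) / denom)  # NCC score in [-1, 1]
            if score > best_score:
                best_score, best_loc = score, (r, c)
    return best_loc, best_score
```

In the system described above, the FFT map-matching step supplies the rough alignment, so that `search_radius` stays small and the exhaustive loop remains affordable; a frequency-domain implementation would be preferred when the search window is large.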
Algorithm 1 Vision-aided Inertial Navigation Algorithm

Propagation: For each IMU measurement received, propagate the state and covariance estimates (cf. Section IV-B).

Image acquisition: Every time a new image is recorded,
• augment the state and covariance matrix with a copy of the current camera pose estimate (cf. Section IV-A);
• the image processing module begins processing the new image.

Update: When the ML and OF measurements of a given image become available, perform an EKF update using
• all ML measurements of this image (cf. Section IV-D);
• the OFs that are no longer detected in the image sequence (cf. Section IV-E).

IV. ESTIMATOR DESCRIPTION

The proposed estimation algorithm (cf. Algorithm 1) employs an EKF to estimate the 3D pose of the body frame {B}, which is affixed on the spacecraft's IMU, with respect to a global frame of reference {G}. In this work, {G} is selected as a planet-centered, planet-fixed frame of reference that rotates with the planet. The IMU measurements are processed immediately as they become available, for propagating the EKF state and covariance estimates, as discussed in Section IV-B. On the other hand, each time an image is recorded, the current camera pose estimate is appended to the state vector, and the covariance matrix is appropriately augmented. State augmentation is necessary for two reasons: First, due to the processing delays introduced by the image processing module, the camera measurements cannot be processed immediately.¹ Second, maintaining a window of camera poses enables the processing of OF measurements (cf. Section IV-E). Therefore, at any time instant the EKF state vector comprises (i) the evolving state, X_E, which describes the current state of the spacecraft, and (ii) copies of up to N past poses of the camera. The maximum length of the camera pose history, N, is selected by pre-flight testing, and is chosen to be equal to the maximum number of images through which an OF can be tracked.

Every time the image processing module produces ML and/or OF measurements, an EKF update takes place. In our implementation, the ML measurements of the latest image are processed immediately as they become available. On the other hand, OF updates occur whenever an OF that has been tracked in a number of images is no longer detected in the latest image. At that time, all the measurements of this feature are processed using the method presented in Section IV-E. In the following sections we present the various steps of the algorithm in detail.

¹Consider that at time-step k an image is recorded, and that the image measurements become available at time-step k + d. During the time interval [k, k + d] IMU measurements are processed normally for state propagation. When, at time-step k + d, the measurements that occurred at time-step k become available, applying an EKF update is possible, because the camera pose at time-step k is included in the state vector.

A. Structure of the EKF state vector

The evolving state of the EKF is described by the vector:
\[
X_E = \begin{bmatrix} {}^B_G\bar{q}^T & b_g^T & {}^Gv_B^T & b_a^T & {}^Gp_B^T \end{bmatrix}^T \tag{1}
\]
where ^B_G q̄ is the unit quaternion [19] describing the rotation from the global frame to the body frame, ^G p_B and ^G v_B are the position and velocity of the body expressed with respect to the global frame, and finally b_g and b_a are 3 × 1 vectors that describe the biases affecting the gyroscope and accelerometer measurements, respectively. The IMU biases are modeled as random walk processes, driven by the white Gaussian noise vectors n_wg and n_wa, respectively.

Given the definition of the evolving state in Eq. (1), the error-state vector for X_E is defined accordingly, as:
\[
\tilde{X}_E = \begin{bmatrix} \delta\theta^T & \tilde{b}_g^T & {}^G\tilde{v}_B^T & \tilde{b}_a^T & {}^G\tilde{p}_B^T \end{bmatrix}^T \tag{2}
\]
For the position, velocity, and biases, the standard additive error definition is used (i.e., the error in the estimate x̂ of a quantity x is defined as x̃ = x − x̂). However, for the quaternion a different error definition is employed. In particular, if q̄̂ is the estimated value of the quaternion q̄, then the orientation error is described by the error quaternion δq̄, which is defined by the relation q̄ = δq̄ ⊗ q̄̂. In this expression, the symbol ⊗ denotes quaternion multiplication. The error quaternion is
\[
\delta\bar{q} \simeq \begin{bmatrix} \tfrac{1}{2}\delta\theta^T & 1 \end{bmatrix}^T \tag{3}
\]
Since attitude corresponds to 3 degrees of freedom, using δθ to describe the attitude errors results in a minimal representation.

Assuming that N camera poses are included in the EKF state vector at time-step k, this vector has the following form:
\[
\hat{X}_k = \begin{bmatrix} \hat{X}_E^T & {}^{C_1}_G\hat{\bar{q}}^T & {}^G\hat{p}_{C_1}^T & \cdots & {}^{C_N}_G\hat{\bar{q}}^T & {}^G\hat{p}_{C_N}^T \end{bmatrix}^T \tag{4}
\]
where ^{C_i}_G q̄̂ and ^G p̂_{C_i}, i = 1 ... N, are the estimates of the camera attitude and position, respectively. The EKF error-state vector is defined accordingly:
\[
\tilde{X}_k = \begin{bmatrix} \tilde{X}_E^T & \delta\theta_{C_1}^T & {}^G\tilde{p}_{C_1}^T & \cdots & \delta\theta_{C_N}^T & {}^G\tilde{p}_{C_N}^T \end{bmatrix}^T \tag{5}
\]
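As a concrete illustration of the multiplicative attitude error of Eqs. (2)-(3), the short Python sketch below computes δθ from a true and an estimated quaternion. It is a minimal sketch assuming the quaternion convention of [19] (vector part first, scalar last, with ⊗ defined so that C(q̄ ⊗ p̄) = C(q̄)C(p̄)); the function names are ours.

```python
import numpy as np

def quat_mul(q, p):
    """Quaternion product q (x) p, with quaternions stored as [x, y, z, w]
    (vector part first, scalar last), chosen so that C(q (x) p) = C(q) C(p)."""
    qv, q4 = q[:3], q[3]
    pv, p4 = p[:3], p[3]
    vec = q4 * pv + p4 * qv - np.cross(qv, pv)
    return np.concatenate([vec, [q4 * p4 - qv.dot(pv)]])

def quat_conj(q):
    """Inverse of a unit quaternion (conjugate)."""
    return np.array([-q[0], -q[1], -q[2], q[3]])

def attitude_error(q_true, q_est):
    """Multiplicative attitude error of Eqs. (2)-(3): delta_q = q (x) q_est^(-1),
    and for small errors delta_theta ~= 2 * (vector part of delta_q)."""
    dq = quat_mul(q_true, quat_conj(q_est))
    return 2.0 * dq[:3]
```

Because δθ has only three components, the error-state vector of Eq. (5) has dimension 15 + 6N, even though the full state of Eq. (4) contains 16 + 7N scalar entries.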
B. Propagation

To derive the filter propagation equations, we employ discretization of the continuous-time IMU system model, as outlined in the following.

1) Continuous-time system modeling: The system model describing the time evolution of the evolving state is [20]:
\[
{}^B_G\dot{\bar{q}}(t) = \tfrac{1}{2}\Omega\big(\omega(t)\big)\,{}^B_G\bar{q}(t), \qquad
\dot{b}_g(t) = n_{wg}(t), \qquad
{}^G\dot{v}_B(t) = {}^Ga(t), \qquad
\dot{b}_a(t) = n_{wa}(t), \qquad
{}^G\dot{p}_B(t) = {}^Gv_B(t) \tag{6}
\]
In these expressions ^G a is the body acceleration in the global frame, ω = [ω_x ω_y ω_z]^T is the body rotational velocity expressed in the body frame, and
\[
\Omega(\omega) = \begin{bmatrix} -\lfloor\omega\times\rfloor & \omega \\ -\omega^T & 0 \end{bmatrix}, \qquad
\lfloor\omega\times\rfloor = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}
\]
The gyroscope and accelerometer measurements, ω_m and a_m respectively, are given by:
\[
\omega_m = \omega + C({}^B_G\bar{q})\,\omega_G + b_g + n_g
\]
\[
a_m = C({}^B_G\bar{q})\big({}^Ga - {}^Gg + 2\lfloor\omega_G\times\rfloor\,{}^Gv_B + \lfloor\omega_G\times\rfloor^2\,{}^Gp_B\big) + b_a + n_a
\]
where C(·) denotes the rotational matrix corresponding to the quaternion argument, and n_g and n_a are zero-mean, white Gaussian noise processes. It is important to note that, since the frame {G} is not inertial, but rather planet-fixed, the IMU measurements incorporate the effects of the planet's rotation, ω_G. Moreover, the accelerometer measurements include the gravitational acceleration, ^G g, expressed in the local frame.

Applying the expectation operator in the state propagation equations (Eq. (6)) we obtain the equations for propagating the estimates of the evolving state:
\[
{}^B_G\dot{\hat{\bar{q}}} = \tfrac{1}{2}\Omega(\hat{\omega})\,{}^B_G\hat{\bar{q}}, \qquad
\dot{\hat{b}}_g = 0_{3\times1}, \qquad
{}^G\dot{\hat{v}}_B = C_{\hat{q}}^T\hat{a} - 2\lfloor\omega_G\times\rfloor\,{}^G\hat{v}_B - \lfloor\omega_G\times\rfloor^2\,{}^G\hat{p}_B + {}^Gg, \qquad
\dot{\hat{b}}_a = 0_{3\times1}, \qquad
{}^G\dot{\hat{p}}_B = {}^G\hat{v}_B \tag{7}
\]
where for brevity we have denoted C_q̂ = C(^B_G q̄̂), â = a_m − b̂_a, and ω̂ = ω_m − b̂_g − C_q̂ ω_G. The linearized continuous-time model for the evolving error state is given by:
\[
\dot{\tilde{X}}_E = F_E\tilde{X}_E + G_E\,n_{IMU} \tag{8}
\]
where n_IMU = [n_g^T n_wg^T n_a^T n_wa^T]^T is the system noise. The covariance matrix of n_IMU, Q_IMU, depends on the IMU noise characteristics and is computed off-line during sensor calibration. Finally, the values of the Jacobians F_E and G_E, which appear in Eq. (8), are given in [18].
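The propagation step of Eq. (7) can be summarized by the state-derivative function below, which is what the numerical integrator of the next subsection would evaluate. This is a minimal sketch assuming the [x, y, z, w] quaternion convention of [19]; all function names are ours.

```python
import numpy as np

def skew(w):
    """Cross-product matrix used in Omega(w) and in Eq. (7)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def omega_mat(w):
    """4x4 matrix Omega(w) of the quaternion kinematics, Eqs. (6)-(7)."""
    O = np.zeros((4, 4))
    O[:3, :3] = -skew(w)
    O[:3, 3] = w
    O[3, :3] = -w
    return O

def quat_to_rot(q):
    """Rotation matrix C(q) for an [x, y, z, w] unit quaternion ([19])."""
    qv, q4 = q[:3], q[3]
    return (2.0 * q4 ** 2 - 1.0) * np.eye(3) - 2.0 * q4 * skew(qv) + 2.0 * np.outer(qv, qv)

def evolving_state_derivative(q_BG, b_g, v_B, b_a, p_B, omega_m, a_m, omega_G, g_G):
    """Deterministic part of Eq. (7): derivatives of the estimated evolving
    state given the latest gyroscope/accelerometer samples."""
    C_q = quat_to_rot(q_BG)
    omega_hat = omega_m - b_g - C_q @ omega_G      # rate corrected for bias and planet rotation
    a_hat = a_m - b_a                              # specific force corrected for bias
    q_dot = 0.5 * omega_mat(omega_hat) @ q_BG
    v_dot = (C_q.T @ a_hat
             - 2.0 * skew(omega_G) @ v_B
             - skew(omega_G) @ (skew(omega_G) @ p_B)
             + g_G)
    return q_dot, np.zeros(3), v_dot, np.zeros(3), v_B   # bias derivatives are zero, p_dot = v_B
```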
2) Discrete-time implementation: The IMU samples the signals ω_m and a_m with a period T, and these measurements are used for state propagation in the EKF. Every time a new IMU measurement is received, the IMU state estimate is propagated using 5th-order Runge-Kutta numerical integration of Eqs. (7). Moreover, the covariance matrix of the EKF state has to be propagated. For this purpose, we introduce the following partitioning for the covariance matrix:
\[
P_{k|k} = \begin{bmatrix} P_{EE_{k|k}} & P_{EC_{k|k}} \\ P_{EC_{k|k}}^T & P_{CC_{k|k}} \end{bmatrix} \tag{9}
\]
where P_EE is the 15 × 15 covariance matrix of the evolving state, P_CC is the 6N × 6N covariance matrix of the camera pose estimates, and P_EC is the correlation between the errors in the evolving state and the camera pose estimates. With this notation, the covariance matrix of the propagated state is given by:
\[
P_{k+1|k} = \begin{bmatrix} P_{EE_{k+1|k}} & \Phi(t_k+T,\,t_k)\,P_{EC_{k|k}} \\ P_{EC_{k|k}}^T\,\Phi(t_k+T,\,t_k)^T & P_{CC_{k|k}} \end{bmatrix}
\]
where the covariance of the evolving state at time-step k + 1 is computed by numerical integration of the Lyapunov equation:
\[
\dot{P}_{EE} = F_E P_{EE} + P_{EE}F_E^T + G_E Q_{IMU} G_E^T \tag{10}
\]
Numerical integration is carried out for the time interval (t_k, t_k + T), with initial condition P_{EE k|k}. The state transition matrix Φ(t_k + T, t_k) is similarly computed by numerical integration of the differential equation
\[
\dot{\Phi}(t_k+\tau,\,t_k) = F_E\,\Phi(t_k+\tau,\,t_k), \qquad \tau\in[0,T] \tag{11}
\]
with initial condition Φ(t_k, t_k) = I_15.
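The partitioned covariance propagation of Eqs. (9)-(11) can be sketched as follows. For brevity the sketch integrates the Lyapunov and state-transition equations with simple forward-Euler steps and holds F_E constant over the IMU period; the flight filter would integrate them numerically alongside the Runge-Kutta state propagation, and the expressions for F_E and G_E are the ones given in [18].

```python
import numpy as np

def propagate_covariance(P, F_E, G_E, Q_imu, T, n_steps=10):
    """Propagate the EKF covariance of Eq. (9) over one IMU period T.
    P is (15+6N)x(15+6N); F_E, G_E, Q_imu describe the evolving-state
    error dynamics of Eq. (8). The camera-pose block P_CC is unchanged."""
    P_EE = P[:15, :15].copy()
    P_EC = P[:15, 15:]
    Phi = np.eye(15)                                           # Phi(t_k, t_k) = I_15
    GQG = G_E @ Q_imu @ G_E.T
    dt = T / n_steps
    for _ in range(n_steps):
        P_EE = P_EE + dt * (F_E @ P_EE + P_EE @ F_E.T + GQG)   # Lyapunov equation (10)
        Phi = Phi + dt * (F_E @ Phi)                           # Eq. (11)
    P_new = P.copy()
    P_new[:15, :15] = P_EE
    P_new[:15, 15:] = Phi @ P_EC                               # cross-correlation block
    P_new[15:, :15] = P_new[:15, 15:].T
    return P_new
```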
C. State Augmentation

When a new image is recorded, the camera pose estimate is computed from the body pose estimate as follows:
\[
{}^C_G\hat{\bar{q}} = {}^C_B\bar{q} \otimes {}^B_G\hat{\bar{q}} \tag{12}
\]
\[
{}^G\hat{p}_C = {}^G\hat{p}_B + C_{\hat{q}}^T\,{}^Bp_C \tag{13}
\]
where ^C_B q̄ is the quaternion expressing the rotation between the body and camera frames, and ^B p_C is the position of the origin of the camera frame with respect to {B}, both of which are known. This camera pose estimate is appended to the state vector, and the covariance matrix of the EKF is augmented accordingly [3].
D. Measurement Model for ML Observations

We now describe the EKF measurement model for treating visual observations of MLs. Consider that feature j, whose global coordinates are known a priori, is observed from the i-th camera pose included in the EKF state vector. In normalized image coordinates, this observation is described by the equation:
\[
z_i^{(j)} = \frac{1}{{}^{C_i}Z_j}\begin{bmatrix} {}^{C_i}X_j \\ {}^{C_i}Y_j \end{bmatrix} + n_i^{(j)} \tag{14}
\]
where n_i^(j) is the 2 × 1 image noise vector, with covariance matrix R_i^(j) = σ_im² I_2. The feature position expressed in the camera frame, ^{C_i} p_j, is given by:
\[
{}^{C_i}p_j = \begin{bmatrix} {}^{C_i}X_j \\ {}^{C_i}Y_j \\ {}^{C_i}Z_j \end{bmatrix} = C\big({}^{C_i}_G\bar{q}\big)\big({}^Gp_{\ell_j} - {}^Gp_{C_i}\big) \tag{15}
\]
where ^G p_{ℓj} is the known coordinate of the landmark in the global frame. The expected value of the measurement z_i^(j) is computed using the state estimates:
\[
\hat{z}_i^{(j)} = \frac{1}{{}^{C_i}\hat{Z}_j}\begin{bmatrix} {}^{C_i}\hat{X}_j \\ {}^{C_i}\hat{Y}_j \end{bmatrix}
\quad\text{with}\quad
\begin{bmatrix} {}^{C_i}\hat{X}_j \\ {}^{C_i}\hat{Y}_j \\ {}^{C_i}\hat{Z}_j \end{bmatrix} = C\big({}^{C_i}_G\hat{\bar{q}}\big)\big({}^Gp_{\ell_j} - {}^G\hat{p}_{C_i}\big) \tag{16}
\]
From Eqs. (14) and (16) we can compute the residual of this ML measurement, r_{ML_i}^(j) = z_i^(j) − ẑ_i^(j). By linearization of Eq. (14), r_{ML_i}^(j) is written as:
\[
r_{ML_i}^{(j)} \simeq H_{\delta\theta_i}^{(j)}\,\delta\theta_{C_i} + H_{p_i}^{(j)}\,{}^G\tilde{p}_{C_i} + n_i^{(j)} = H_{ML_i}^{(j)}\tilde{X} + n_i^{(j)} \tag{17}
\]
where
\[
H_{\delta\theta_i}^{(j)} = \frac{1}{{}^{C_i}\hat{Z}_j}\begin{bmatrix} I_2 & -\hat{z}_i^{(j)} \end{bmatrix}\Big\lfloor C\big({}^{C_i}_G\hat{\bar{q}}\big)\big({}^Gp_{\ell_j} - {}^G\hat{p}_{C_i}\big)\times\Big\rfloor, \qquad
H_{p_i}^{(j)} = -\frac{1}{{}^{C_i}\hat{Z}_j}\begin{bmatrix} I_2 & -\hat{z}_i^{(j)} \end{bmatrix} C\big({}^{C_i}_G\hat{\bar{q}}\big)
\]
\[
H_{ML_i}^{(j)} = \begin{bmatrix} 0_{2\times15} & 0_{2\times6} & \cdots & \underbrace{\begin{bmatrix} H_{\delta\theta_i}^{(j)} & H_{p_i}^{(j)} \end{bmatrix}}_{i\text{-th camera block}} & \cdots & 0_{2\times6} \end{bmatrix}
\]
The residual defined in Eq. (17) is employed for performing EKF updates, as described in Section IV-F.
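The ML measurement model of Eqs. (14)-(17) translates almost directly into code. The sketch below (our naming, reusing `quat_to_rot` and `skew` from the propagation sketch) returns the residual and the two non-zero Jacobian blocks; the full row H_ML of Eq. (17) is then assembled by placing [H_dtheta H_p] in the i-th camera block of an otherwise zero matrix.

```python
import numpy as np

def ml_residual_and_jacobians(q_CiG, p_Ci, p_landmark, z_meas):
    """ML observation, Eqs. (14)-(17): predicted normalized projection,
    residual, and Jacobian blocks w.r.t. the i-th camera attitude/position."""
    p_cam = quat_to_rot(q_CiG) @ (p_landmark - p_Ci)      # Eq. (15): feature in camera frame
    X, Y, Z = p_cam
    z_hat = np.array([X / Z, Y / Z])                      # Eq. (16)
    r = z_meas - z_hat                                    # residual of Eq. (17)
    A = np.hstack([np.eye(2), -z_hat.reshape(2, 1)]) / Z  # common factor (1/Z) [I2  -z_hat]
    H_dtheta = A @ skew(p_cam)                            # attitude block of Eq. (17)
    H_p = -A @ quat_to_rot(q_CiG)                         # position block of Eq. (17)
    return r, H_dtheta, H_p
```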
E. Measurement Model for OF Observations

We present the OF measurement model for the case of a single feature, f_j, that is observed from a set of M_j poses, S_j. Each observation of this feature is described by the measurement model of Eq. (14). Since the global coordinates of f_j are not known in advance, in order to compute the expected value of the measurements, we obtain an estimate of the position of the observed feature, ^G p̂_{ℓj}, by employing a least-squares minimization algorithm. Once this estimate has been computed, the expected value of each of the feature measurements can be evaluated, similarly to Eq. (16), with the sole difference that the estimate of the landmark position is used, instead of an a priori known value.

Linearization yields the following expression for the residual, r_i^(j) = z_i^(j) − ẑ_i^(j), of the i-th measurement:
\[
r_i^{(j)} \simeq H_{\delta\theta_i}^{(j)}\,\delta\theta_{C_i} + H_{p_i}^{(j)}\,{}^G\tilde{p}_{C_i} + H_{f_i}^{(j)}\,{}^G\tilde{p}_{\ell_j} + n_i^{(j)}
= H_{X_i}^{(j)}\tilde{X} + H_{f_i}^{(j)}\,{}^G\tilde{p}_{\ell_j} + n_i^{(j)}
\]
Note that, in contrast to the case of ML observations, the measurement residual in this case is also affected by the error in the estimate of the landmark position. In the last expression, H_{f_i}^(j) = −H_{p_i}^(j) is the Jacobian of the residual with respect to ^G p̂_{ℓj}, and
\[
H_{X_i}^{(j)} = \begin{bmatrix} 0_{2\times15} & 0_{2\times6} & \cdots & \underbrace{\begin{bmatrix} H_{\delta\theta_i}^{(j)} & H_{p_i}^{(j)} \end{bmatrix}}_{i\text{-th camera block}} & \cdots & 0_{2\times6} \end{bmatrix}
\]
By stacking the residuals corresponding to all the observations of this feature, we obtain:
\[
r^{(j)} \simeq H_X^{(j)}\tilde{X} + H_f^{(j)}\,{}^G\tilde{p}_{\ell_j} + n^{(j)} \tag{18}
\]
where r^(j), H_X^(j), H_f^(j), and n^(j) are block vectors or matrices with elements r_i^(j), H_{X_i}^(j), H_{f_i}^(j), and n_i^(j), for i ∈ S_j. Assuming that feature observations in different images are independent, the covariance matrix of the noise vector n^(j) is R^(j) = σ_im² I_{2M_j}.

It should be clear that the residual derived in Eq. (18) cannot be directly used for performing EKF updates, since the landmark position error, ^G p̃_{ℓj}, is correlated with the state errors (recall that ^G p̂_{ℓj} is computed using the state estimates and the measurements z_i^(j) in a least-squares minimization routine). To overcome this problem, we define a residual r_OF^(j), by projecting r^(j) on the left nullspace of the matrix H_f^(j). Specifically, if we let U denote the unitary matrix whose columns form the basis of the left nullspace of H_f^(j), we obtain:
\[
r_{OF}^{(j)} = U^T\big(z^{(j)} - \hat{z}^{(j)}\big) \simeq U^T H_X^{(j)}\tilde{X} + U^T n^{(j)} = H_{OF}^{(j)}\tilde{X} + n_o^{(j)} \tag{19}
\]
It is worth noting that r_OF^(j) and H_OF^(j) can be computed without explicitly evaluating U. Instead, these projections of r^(j) and H_X^(j) on the nullspace of H_f^(j) can be computed very efficiently using Givens rotations [21]. The covariance matrix of the noise vector n_o^(j) can easily be shown to be equal to σ_im² I_{2M_j−3}. The residual r_OF^(j) is independent of the errors in the feature coordinates, and thus EKF updates can be performed based on it. Eq. (19) defines a linearized constraint between all the camera poses from which the feature f_j was observed. This residual expresses all the available information that the measurements z_i^(j) provide for the M_j states, and thus the resulting EKF update is optimal, except for the inaccuracies caused by linearization.
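The projection that produces Eq. (19) can be illustrated as below. The paper computes the projection implicitly with Givens rotations [21]; the sketch uses an explicit SVD instead, purely for clarity, and assumes that the stacked H_f (of size 2M_j × 3) has full column rank.

```python
import numpy as np

def project_out_feature(r_stacked, H_X_stacked, H_f_stacked):
    """OF update, Eqs. (18)-(19): project the stacked residual and state
    Jacobian onto the left nullspace of H_f, eliminating the dependence on
    the landmark position error."""
    U_full, _, _ = np.linalg.svd(H_f_stacked)   # H_f_stacked is (2*Mj) x 3
    U = U_full[:, 3:]                           # left-nullspace basis, 2*Mj - 3 columns
    r_of = U.T @ r_stacked                      # residual of Eq. (19)
    H_of = U.T @ H_X_stacked                    # Jacobian of Eq. (19)
    return r_of, H_of
```

The resulting residual has length 2M_j − 3, which matches the noise covariance σ_im² I_{2M_j−3} stated above.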
F. EKF Updates

In the preceding sections we presented the measurement models that we employ for treating ML and OF observations. Once all the ML and OF measurements that must be processed at a given time-step are determined (as described in Algorithm 1), the corresponding residual vectors and measurement Jacobians (Eqs. (17) and (19)) are created. Stacking all these together yields the following residual vector:
\[
r = H\tilde{X} + n \tag{20}
\]
where r is a block vector with elements r_{ML_i}^(j) and r_OF^(j), H is a block matrix with elements H_{ML_i}^(j) and H_OF^(j), and n is a noise vector of dimension L (equal to the length of r), with covariance matrix R = σ_im² I_L. Once the residual, r, and the measurement Jacobian matrix, H, of Eq. (20) have been computed, the EKF update proceeds according to the standard equations [22]. In our work, we employ the QR decomposition of the matrix H to reduce the computational complexity of EKF updates [7]. At the end of the update step, the oldest camera pose is marginalized out of the EKF state, to allow for the inclusion of the next one.
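Finally, a sketch of the stacked update of Eq. (20), including a QR-based compression of the residual in the spirit of [7]. The function names are ours; applying the correction dx back to the state (additively for positions, velocities, and biases, and multiplicatively via the error quaternion of Eq. (3) for the attitude components) is omitted.

```python
import numpy as np

def ekf_update(P, r, H, sigma_im):
    """EKF update for the stacked residual of Eq. (20), with measurement
    noise R = sigma_im^2 * I. If there are more measurement rows than error
    states, the residual is first compressed through a thin QR factorization
    of H, as in [7]."""
    if H.shape[0] > H.shape[1]:
        Q1, T_H = np.linalg.qr(H)               # H = Q1 * T_H
        r = Q1.T @ r                            # compressed residual
        H = T_H
    S = H @ P @ H.T + (sigma_im ** 2) * np.eye(H.shape[0])  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                          # Kalman gain
    dx = K @ r                                              # error-state correction
    P_new = (np.eye(P.shape[0]) - K @ H) @ P                # covariance update
    return dx, P_new
```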
V. EXPERIMENTS

In order to validate the algorithm's performance in conditions as close to actual planetary landing as possible, a sounding rocket experiment was conducted in April 2006, at White Sands Missile Range (WSMR), New Mexico.

A. Hardware Description

A commercially available analog camera (Pulnix TM-9701) was added to an existing mission payload consisting of a GLN-MAC IMU and a Thales G12 GPS, onboard a Terrier Orion Sounding Rocket (cf. Fig. 1 for the experimental setup). The nadir-pointing camera provided descent imagery from parachute deployment to landing at 30 frames/s, with a resolution of 768 × 484 pixels, 8 bits/pixel, and a field of view (FOV) of 38° × 24°. A common GPS time tag from a commercial timecode generator was used to synchronize 50 Hz IMU, 10 Hz GPS, and 30 Hz image data. Images, IMU data, and GPS measurements were downlinked in real time during flight over an S-band telemetry channel and recorded on the ground.
The data collected during this experiment were processed off-line. We should note, however, that our algorithm is capable of real-time operation. The FFT correlation-based feature matching is predicted to run at 5-20 Hz in an FPGA-based implementation currently under development at JPL, and the current C++ implementation of the pose estimator runs at 20 Hz on a 2 GHz CPU, with the number of stored poses set to N = 20.

B. Experiment Profile and Relevance

The rocket reached an apogee altitude of 123 km, followed by drogue and main parachute opening at 28 km and 4.2 km altitude, respectively. After a total flight time of 805 s, 376 s of which on the parachute, the vehicle landed 78 km downrange from the launch pad. The dynamics encountered during the parachuted phase of the sounding rocket flight are similar to those during an actual Mars landing, as shown by the comparison in Table I.

TABLE I. DYNAMICS COMPARISON BETWEEN SOUNDING ROCKET AND MARS EDL.

Parameter                 Sounding Rocket         Mars Landing
Parachute deploy alt.     4200 m above ground     2000 m above ground
Vertical velocity         10 m/s at touchdown     1 m/s at touchdown
Horizontal velocity       3 m/s at touchdown      < 0.5 m/s at touchdown
Off-nadir angle           ≤ 12°                   < 20°
Off-nadir angular rate    ≤ 19°/s                 < 60°/s
Roll rate                 ≤ 360°/s                < 60°/s

Fig. 1 shows the rocket's trajectory superimposed on a 3D map of the area. A zoomed-in view of the flight path after main parachute deployment is depicted in Fig. 3. Pure integration of the IMU measurements (blue dashed line) yielded fairly accurate results until right after the deployment of the main parachute, but then quickly diverged. The opening of the parachute caused the rocket's motion to be extremely jerky for several seconds. Integrating the large acceleration measurements recorded in this period, using the attitude estimates that had error accumulated over the preceding 431 s of flight, resulted in large position errors. Note that up to this point no images were available. Once the first few images were processed, the VISINAV algorithm corrected the accumulated error, and the estimated trajectory became virtually indistinguishable from that measured by the GPS.

As shown in Table II, ML measurements were processed during two separate phases of flight, one between 3800 m and 3100 m, using images at 3 Hz, and the second between 1600 m and 230 m above ground, processing only one frame per second (1 Hz) and yielding up to 80 MLs per image (cf. Fig. 4). The artificial gap, during which the filter had to rely on open-loop IMU integration, was introduced to emulate a temporary failure of ML detection. In EDL, this could arise, for example, due to large off-nadir viewing angles caused by a pose correction maneuver during powered descent. Once images become available after this intermediate open-loop phase, a significant change in image scale has occurred. Despite this, the ML algorithm is capable of successfully matching MLs, and the EKF estimates' uncertainty drops sharply within a few seconds after resuming ML updates (cf. Figs. 5, 7 at 1600 m altitude).

The 3D ground coordinates for the MLs were obtained from USGS 1 Meter Digital Orthoimagery Quarter Quadrangles taken in 2001, combined with 1 arc second finished elevation data from the Shuttle Radar Topography Mission [23]. For feature matching in the first set, the entire 7 × 8 km map image was resampled to ∼7 m/pixel, while for the second set the map was cropped to 2 × 2 km and used at its base resolution of 1 m/pixel.

At some point during descent, the number of features within the FOV becomes too small, and the difference in resolution between camera and map too significant, to allow successful ML correspondences to be established. In this experiment, we emulated this behavior by stopping ML updates at 230 m altitude. To compensate, starting at 330 m, the filter began to perform OF updates at a frequency of 3 Hz. Even though OFs do not make the system observable (as opposed to images containing at least three MLs), they allow for precise estimation of linear and rotational velocity, resulting in very small error growth during the final 230 m of descent.

Fig. 3. Zoomed-in view of trajectory after main parachute deployment (GPS, IMU-integration, and VISINAV estimates).
Fig. 4. Number of detected MLs vs. altitude.
C. Algorithm Performance

Ground-truth for position and velocity was obtained from GPS measurements. Figs. 5 and 7 show the resulting errors and the corresponding 3σ-bounds for velocity and position in the local North-East-Down (NED) frame. Table II gives the error norms for the position and velocity estimates at the beginning and end of the different update phases.

TABLE II. CONDITIONS FOR THE DIFFERENT EKF UPDATE PHASES.

Phase                    Altitude (m)   Time (s)   Position Error (m)   Velocity Error (m/s)
Beg. 1st ML set          3800           454        2722.3               10.45
End 1st ML set           3100           514        16.9                 0.18
Beg. 2nd ML set          1600           647        78.2                 1.38
End 2nd ML set           230            781        5.1                  0.23
Beg. OFs                 330            771        3.7                  0.15
Touchdown                0              805        6.4                  0.16
Touchdown (IMU-only)     0              805        9169.5               32.70

a) First ML set: When processing the first MLs, the algorithm converges within about 5 seconds from the large error (∼2700 m) accumulated during IMU integration to within 18 m of GPS ground-truth (cf. Figs. 3, 7). During the open-loop integration phase between 3100 m and 1600 m altitude, the pose uncertainty is again increasing.

b) Second ML set and OFs: The pose estimates almost instantaneously converge close to ground truth once ML updates resume at 1600 m. ML and OF updates (the latter starting at 330 m) reduce the position uncertainty bounds to approximately ±4 m, and the velocity uncertainty to less than ±0.25 m/s along each axis (3σ). Notice that the actual error at the beginning of OF updates is smaller than at the end of ML processing ten seconds later. This can be attributed to the already decreasing number of detected MLs within the FOV at this altitude, due to the difference in resolution between satellite and camera image (cf. Fig. 4). With the discontinuation of ML processing at 230 m altitude, the pose uncertainty begins to increase even more, although still at a very low rate (cf. the zoomed-in view of the errors for the final 300 m in Figs. 6 and 8). As predicted, this is the result of the system becoming unobservable. Table II shows the final velocity and position error magnitudes at touchdown, which are approximately 6.4 m for position and 0.16 m/s for velocity.

Similar to position and velocity, the attitude uncertainty bounds were decreased to ±0.15° accuracy along each axis (3σ) during processing of ML updates, with a temporary increase to ±0.9° during open-loop IMU integration between 3100 m and 1600 m altitude. Due to the lack of ground-truth, the attitude uncertainty was determined from the EKF estimates of the covariance matrix. The figures for position and velocity errors (cf. Figs. 5-8) show that the filter estimates for these variables are consistent, indicating that the attitude estimates are also consistent. The filter attitude estimate was further verified through an independent measurement of the final attitude at touchdown using a compass.

Fig. 5. Velocity error expressed in NED frame (blue solid line) and corresponding 3σ-bounds (red dashed line). Note that x-y-z in the plots corresponds to N-E-D.
Fig. 6. Velocity error in NED frame (zoomed-in view of Fig. 5 before touchdown).
Fig. 7. Position error expressed in NED frame (blue solid line) and corresponding 3σ-bounds (red dashed line).
Fig. 8. Position error in NED frame (zoomed-in view of Fig. 7 before touchdown).
VI. CONCLUSION

In this paper, we have presented the analysis and experimental validation of a vision-aided inertial navigation algorithm for planetary landing applications. Results from a sounding rocket test, covering the dynamic profile of typical planetary EDL scenarios, showed estimation errors of magnitude 0.16 m/s in velocity and 6.4 m in position at touchdown. These results vastly improve the current state of the art for EDL navigation without vision, and meet the requirements of future planetary exploration missions [1]. The algorithm tightly couples IMU and camera measurements of mapped landmarks (MLs) and opportunistic features (OFs), in a resource-adaptive and hence real-time capable fashion. It is thus able to provide very accurate, high-bandwidth estimates for precision guidance and control. It should be pointed out that the proposed estimator is modular and easily extendable to incorporate additional sensor information (e.g., Doppler radar), as well as different image processing algorithms that provide unit vector measurements to corresponding features. Future work includes optimal selection of a subset of the most informative image features to track. This would allow efficient use of camera information in the presence of limited computational resources.
ACKNOWLEDGEMENTS

This work was supported by the University of Minnesota (DTC), the NASA Mars Technology Program (MTP-1263201), and the National Science Foundation (EIA-0324864, IIS-0643680).

REFERENCES

[1] NASA, "Solar system exploration roadmap," http://solarsystem.nasa.gov/multimedia/downloads/SSE RoadMap 2006 Report FC-A med.pdf, Sep. 2006.
[2] N. Trawny, A. I. Mourikis, S. I. Roumeliotis, A. E. Johnson, and J. Montgomery, "Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks," Journal of Field Robotics, vol. 24, no. 5, pp. 357–378, May 2007.
[3] A. I. Mourikis and S. I. Roumeliotis, "A multi-state constraint Kalman filter for vision-aided inertial navigation," in Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, Apr. 2007, pp. 3565–3572.
[4] J. L. Crassidis, R. Alonso, and J. L. Junkins, "Optimal attitude and position determination from line-of-sight measurements," The Journal of the Astronautical Sciences, vol. 48, no. 2–3, pp. 391–408, Apr.–Sept. 2000.
[5] A. Wu, E. Johnson, and A. Proctor, "Vision-aided inertial navigation for flight control," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, no. AIAA 2005-5998, San Francisco, CA, Aug. 2005.
[6] G. F. Ivey and E. Johnson, "Investigation of methods for simultaneous localization and mapping using vision sensors," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, no. AIAA 2006-6578, Keystone, CO, Aug. 2006.
[7] D. S. Bayard and P. B. Brugarolas, "An estimation algorithm for vision-based exploration of small bodies in space," in Proceedings of the 2005 American Control Conference, vol. 7, Portland, OR, Jun. 2005, pp. 4589–4595.
[8] D. Strelow, "Motion estimation from image and inertial measurements," Ph.D. dissertation, Carnegie Mellon University, Nov. 2004.
[9] S. I. Roumeliotis, A. E. Johnson, and J. F. Montgomery, "Augmenting inertial navigation with image-based motion estimation," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Washington, D.C., 2002, pp. 4326–4333.
[10] R. M. Eustice, H. Singh, and J. J. Leonard, "Exactly sparse delayed-state filters for view-based SLAM," IEEE Transactions on Robotics, vol. 22, no. 6, pp. 1100–1114, Dec. 2006.
[11] A. I. Mourikis and S. I. Roumeliotis, "On the treatment of relative-pose measurements for mobile robot localization," in Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, FL, May 2006, pp. 2277–2284.
[12] P. McLauchlan, "The variable state dimension filter," Centre for Vision, Speech and Signal Processing, University of Surrey, UK, Tech. Rep., 1999.
[13] Y. Cheng and A. Ansar, "Landmark based position estimation for pin-point landing on Mars," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, Apr. 2005, pp. 4470–4475.
[14] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[15] C. Harris and M. Stephens, "A combined corner and edge detector," in Proceedings of the 4th Alvey Vision Conference, 1988, pp. 147–151.
[16] A. Ansar, "2004 small body GN&C research report: Feature recognition algorithms," in Small Body Guidance Navigation and Control FY 2004 RTD Annual Report (Internal Document), no. D-30282 / D-30714. Pasadena, CA: Jet Propulsion Laboratory, 2004, pp. 151–171.
[17] A. Johnson, R. Willson, Y. Cheng, J. Goguen, C. Leger, M. SanMartin, and L. Matthies, "Design through operation of an image-based velocity estimation system for Mars landing," International Journal of Computer Vision, vol. 74, no. 3, pp. 319–341, Sep. 2007.
[18] A. I. Mourikis, N. Trawny, S. I. Roumeliotis, A. Johnson, A. Ansar, and L. Matthies, "Vision-aided inertial navigation for spacecraft entry, descent, and landing," Journal of Guidance, Control, and Dynamics, 2007, submitted.
[19] W. G. Breckenridge, "Quaternions proposed standard conventions," Jet Propulsion Laboratory, Pasadena, CA, Interoffice Memorandum IOM 343-79-1199, 1999.
[20] A. B. Chatfield, Fundamentals of High Accuracy Inertial Navigation. Reston, VA: American Institute of Aeronautics and Astronautics, Inc., 1997.
[21] G. Golub and C. van Loan, Matrix Computations. The Johns Hopkins University Press, London, 1996.
[22] P. S. Maybeck, Stochastic Models, Estimation and Control. New York: Academic Press, 1979, vol. 1+2.
[23] U.S. Geological Survey, "Seamless data distribution system," http://seamless.usgs.gov/index.asp, 2006.
