
A MAP Approach for Vision-based Self-localization of Mobile Robot

WANG Ke¹   WANG Wei¹   ZHUANG Yan¹
Abstract  An on-the-fly self-localization system is developed for a mobile robot operating in a 3D environment with elaborate 3D landmarks. The robot estimates its pose recursively through a MAP estimator that incorporates the information collected from odometry and a unidirectional camera. We build nonlinear models for these two sensors and maintain that the uncertainty arising from robot motion and inaccurate sensor measurements should be embedded and tracked throughout the system. We describe the uncertainty framework from a probabilistic-geometry viewpoint and use the unscented transform to propagate the uncertainty through the given nonlinear functions. Considering the limited processing power of our robot, image features are extracted only in the vicinity of the corresponding projected features, and data associations are evaluated by a statistical distance. Finally, a series of systematic experiments demonstrates the reliable and accurate performance of our system.

Key words  Vision-based self-localization, MAP estimation, multi-sensor fusion, unscented transformation, uncertainty propagation
Looking back at self-localization, many vision-based self-localization systems have been developed for mobile robots[1]. One choice is based on geometric reasoning methods; [2−3] applied the Hough transform to global localization. However, such methods are sensitive to sensor noise and may not provide an accurate pose estimate. Another framework is the image retrieval system, in which researchers have successfully localized the robot based on the appearance[4−6] of the environment or on an SVM method[7]. The drawback is that a large database has to be kept in robot memory for classification. Experiments indicate that a reliable localization system should explicitly represent and maintain the uncertainty inherent in the available information[8−9]; therefore, a localization system should be designed in a probabilistic manner. [10−14] summarize some popular probabilistic methods, in which Markov localization[15], Monte Carlo localization[16], and particle filters[17−18] maintain a probability distribution to represent the belief over the robot state. However, the Bayesian integration requires heavy computation, which may prevent its application to robot vision. Another choice is the extended Kalman filter[19−21]. This approach takes a linearization step with explicitly analytical Jacobian matrices. As is known, in some cases these matrices are difficult to obtain because of the complicated nonlinear transforms involved. Moreover, the linearization might lead to filter instability if the time step intervals are not sufficiently small[22].

In this paper, a vision-based self-localization system is developed on our SmartROB2 mobile robot. Our design focuses on the following issues.

Problem 1. How to design a vision-based self-localization system from a new viewpoint, so that it provides an accurate pose estimate with a more reasonable and efficient uncertainty manipulation?

Problem 2. Considering the run-time coordination among the subprograms of the localization task, how to make use of the resource-constrained RTOS of an embedded robot platform?

The new standpoint we emphasize is that we do not formulate self-localization as a traditional state-estimation problem but as a statistical parameter-estimation problem. That is, given a set of training classes previously constructed from 3D environmental information, there exists an unknown random vector (the robot pose) evolved by a certain dynamic function with a prior PDF. When this vector travels in the parameter space, conditional events are generated from the PDFs of the given classes. Therefore, the robot can integrate these available events, which carry the external information, to estimate the posterior probability of the unknown vector. We choose the MAP[23] (maximum a posteriori) method, which is computationally direct and has experimentally been shown to work well[24].

We consider the uncertainty inherent in the localization process and give an explicit analytical treatment using the unscented transform (UT)[25]. The UT applies the nonlinear function to a minimal set of sample points and yields more accurate statistics than linearization; therefore, information and its uncertainty can be propagated through nonlinear transforms without calculating Jacobian matrices. Because of the limited resources, the system is developed as a concurrent, hierarchical framework instead of an inefficient serial processing mode; the former fulfills on-the-fly localization when several collaborative subprograms run within the localization task. Moreover, we carefully design an effective map that uses line segments with directional information to improve feature detection and data association.

This paper is organized as follows. We first describe the map modeling, the self-localization algorithm, and the logical framework of our system (Section 1). Scene prediction based on the camera model is presented in Section 2. Section 3 details the manipulation of uncertainty in our system and how the uncertainty assists image processing and data association. Finally, the experimental results are given.

Received May 24, 2006; in revised form September 24, 2007. Supported by National Natural Science Foundation of China (60605023, 60775048) and Specialized Research Fund for the Doctoral Program of Higher Education (20060141006). 1. Research Center of Information and Control, Dalian University of Technology, Dalian 116024, P. R. China. DOI: 10.3724/SP.J.1004.2008.00159

1 Vision-based self-localization

1.1 Map modeling

As shown in Fig. 1, in the world coordinate frame the global map M_W consists of a set of 3D line segments that describe the line features of a whole soccer field. The primitive element m_W^i ∈ M_W is defined as a directional line segment, i.e.,

$$m_W^i = \{E_W^i(p_W^s, p_W^e),\; V_W^i(v_W^i, n_W^i)\}, \quad i = 1, 2, \cdots, N \qquad (1)$$

where p_W^s = [x_W^s, y_W^s, z_W^s]^T and p_W^e represent the two endpoints of m_W^i, v_W^i = [a_W^i, b_W^i, c_W^i]^T is the directional vector of m_W^i, and n_W^i describes the physical color transition and is orthogonal to v_W^i.
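For concreteness, here is a minimal sketch of how such a directional map element might be stored. The class and field names are our own illustration and are not prescribed by the paper.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MapLine3D:
    """Directional 3D line segment m_W^i of the global map M_W, as in (1).

    All quantities are expressed in the world frame W. The layout is illustrative.
    """
    p_start: np.ndarray   # endpoint p_W^s = [x, y, z]
    p_end: np.ndarray     # endpoint p_W^e = [x, y, z]
    normal: np.ndarray    # n_W^i, encodes the color transition, orthogonal to the direction

    @property
    def direction(self) -> np.ndarray:
        """Unit directional vector v_W^i = [a, b, c] of the segment."""
        v = self.p_end - self.p_start
        return v / np.linalg.norm(v)

# Example: one field line of the soccer-field map (illustrative coordinates).
goal_line = MapLine3D(p_start=np.array([0.0, -1.0, 0.0]),
                      p_end=np.array([0.0,  1.0, 0.0]),
                      normal=np.array([1.0, 0.0, 0.0]))
```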

Compared with a topological map[26] or a map defined only by 3D points[27], our map provides more abundant information and is more efficient to manipulate.

Fig. 1 Global 3D map with directional information

1.2 Self-localization algorithm

From a statistical viewpoint, the robot pose ϕ = [x, y, θ]^T is regarded as an unknown parameter. We suppose this parameter is driven by a Gaussian process

$$\varphi_{k+1|k} = f(\varphi_k, u_{k+1}) + \nu_{k+1} \qquad (2)$$

where f(ϕ, u) is the dead-reckoning function[28] used to model the differential odometer of our robot, u is the input, and ν is additive Gaussian white noise. The covariance P_{k+1|k} associated with the prior estimate ϕ_{k+1|k} is computed by the unscented transform.

We assume that ϕ obeys an a priori Gaussian PDF, which evaluates the statistical distance between ϕ and the prior estimate ϕ_{k+1|k}, i.e.,

$$\varphi \sim N(\varphi_{k+1|k}, P_{k+1|k}) \qquad (3)$$

Estimation of ϕ also depends on the information D_n = {d_1, d_2, ..., d_n}, n < N, available during process (2). Sample d_i evaluates the Mahalanobis distance between a measurement Y_j and its corresponding feature prediction Ŷ_i. Ŷ_i is determined by the camera model (denoted h below), ϕ, and the map element m_W^i:

$$\hat{Y}_i = h(\varphi, m_W^i) + \xi \qquad (4)$$

where ξ ∼ N(0, R) is additive noise. We detail (4) in Section 2.

Assume that each observation sample d_i is generated independently from a joint parametric conditional PDF p(D_n|ϕ):

$$p(D_n|\varphi) = \prod_i p(d_i|\varphi) \qquad (5)$$

The problem is then viewed as searching for the optimal parameter ϕ̂ that maximizes the posterior probability p(ϕ|D_n), i.e., the MAP estimator

$$\hat{\varphi} = \arg\max_{\varphi} p(\varphi|D_n) = \arg\max_{\varphi} p(\varphi)\, p(D_n|\varphi) \qquad (6)$$

Transforming (6) into an equivalent optimization form gives

$$\hat{\varphi} = \arg\min_{\varphi} \big( -2\ln p(\varphi) - 2\ln p(D_n|\varphi) \big) \qquad (7)$$

where the first term is

$$-2\ln p(\varphi) = (\varphi - \varphi_{k+1|k})^{\mathrm T} P_{k+1|k}^{-1} (\varphi - \varphi_{k+1|k}) + \ln\big(8\pi^3 |P_{k+1|k}|\big) \qquad (8)$$

After optimizing (7), the covariance matrix P̂ of ϕ can be estimated according to the regression analysis proposed in [29]:

$$\hat{P} = 2H^{-1} \qquad (9)$$

where H is the Hessian of the objective function. The robot pose and covariance can thus be updated by fusing the information from odometry and visual observations.

1.3 System architecture

Here, we summarize the software framework of the self-localization system on the SmartROB2 robot. The self-localization task is developed in the Oberon system and implemented on the native XO/2 RTOS[30]. The system is logically divided into three layers, and each layer is implemented as a concurrent, collaborative subtask.

1) Perception layer (PL): In this underlying layer, a serving thread is responsible for detecting the sensor states, allocating sensor buffers, and correcting online the image distortion caused by the fish-eye camera. Another thread maintains the communication with the information processing layer (IPL) and receives state feedback from the data fusion layer (DFL).

2) Information processing layer (IPL): The information received from the PL unfolds into image data, the robot pose ϕ_{k+1|k}, and its covariance P_{k+1|k}. The image processing unit (IPU) then uses the initial scene estimated from ϕ_{k+1|k} and the 3D map M_W to extract the corresponding image features.

3) Data fusion layer (DFL): As the system kernel, it manipulates the information from every layer and uses the MAP estimator to perform multi-sensor fusion for the a posteriori pose and its covariance.

Fig. 2 System frame
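Returning to the prediction step (2): the paper states that ϕ_{k+1|k} and P_{k+1|k} are obtained with the unscented transform (the sigma-point scheme is detailed in Section 3.1). A minimal sketch of that propagation through a generic differential-drive dead-reckoning model follows; the motion model, the weights (basic UT with κ = 0), and all numerical values are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def sigma_points(mean, cov):
    """Sigma points chi_0..chi_2n built from the rows/columns of +-sqrt(n*P),
    as described in Section 3.1. Weights follow the basic UT with kappa = 0
    (an assumption; the paper does not list its weight values)."""
    n = mean.size
    S = np.linalg.cholesky(n * cov)  # matrix square root of n*P
    pts = [mean] + [mean + S[:, j] for j in range(n)] + [mean - S[:, j] for j in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * n))
    w[0] = 0.0
    return np.array(pts), w

def dead_reckoning(pose, u):
    """Illustrative differential-drive odometry model f(phi, u) of (2);
    u = (delta_s, delta_theta) is the travelled distance and heading change."""
    x, y, theta = pose
    ds, dtheta = u
    return np.array([x + ds * np.cos(theta + 0.5 * dtheta),
                     y + ds * np.sin(theta + 0.5 * dtheta),
                     theta + dtheta])

def predict_pose(pose, P, u, Q):
    """Propagate the pose and covariance through (2) with the UT;
    Q is the covariance of the additive noise nu."""
    pts, w = sigma_points(pose, P)
    prop = np.array([dead_reckoning(p, u) for p in pts])
    mean = w @ prop
    diff = prop - mean
    cov = diff.T @ (w[:, None] * diff) + Q
    return mean, cov

# Example: one prediction step with illustrative numbers.
phi_k = np.array([2.0, 0.0, 0.0])
P_k = np.diag([0.05, 0.05, 0.02])
phi_pred, P_pred = predict_pose(phi_k, P_k, u=(0.10, 0.05), Q=np.diag([1e-4, 1e-4, 1e-5]))
```

The same routine, with the dead-reckoning model replaced by the feature prediction of (4), yields the predicted scene feature Ŷ_i and its covariance R_i used in Section 3.1.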



Finally, our system adopts a motion compensation[31−32] procedure to compensate the robot movement for the system-processing delay.

2 Scene prediction

When the camera parameters and ϕ are given, the robot can predict where in a captured image the map lines should be visible.

2.1 Generating model point

As shown in Fig. 3, we assume that the camera optical center O_C and the Z_R axis of the robot are collinear, α is the pan angle, and the x_C axis is parallel to the ground. The function proj_p projects an arbitrary point p_W^j = [x_W^j, y_W^j, z_W^j]^T of m_W^i to the point p_I^j in the image plane:

$$p_I^j = \mathrm{proj}_p(\varphi, p_W^j) \qquad (10)$$

This function first transforms p_W^j to p_R^j in robot coordinates using ϕ, and then to p_C^j = [x_C^j, y_C^j, z_C^j]^T in camera coordinates. If the parameters of the calibrated camera are given, the image point p_I^j can be computed. Notice that p_I^j may be expressed in terms of p_C^j if the focal length f is provided, i.e.,

$$p_{I/C}^j = \big[x_C^j f / z_C^j,\; y_C^j f / z_C^j,\; -f\big]^{\mathrm T} \qquad (11)$$

where p_{I/C}^j denotes the camera-coordinate representation of p_I^j in the image plane.

2.2 Generating model vector

Let line l_I^i be the projected image line of m_W^i. l_I^i is defined by p_I^j = [x_I^j, y_I^j, 0]^T and the vector v_I^i = [a_I^i, b_I^i, 0]^T, which can also be represented as v_{I/C}^i = [a_I^i, b_I^i, 0]^T in camera coordinates. v_W^i ∈ m_W^i is sequentially transformed to v_C^i = [a_C^i, b_C^i, c_C^i]^T and then into v_I^i. Because v_C^i lies in the interpretation plane[33] passing through O_C and l_I^i, the normal vector N_{I/C}^i of this plane equals the cross product of v_{I/C}^i and the vector defined by O_C and p_I^j, i.e.,

$$N_{I/C}^i = \big[b_I^i f,\; -a_I^i f,\; a_I^i y_I^j - b_I^i x_I^j\big] \qquad (12)$$

The following equation verifies the orthogonality of N_{I/C}^i with respect to v_C^i:

$$a_C^i b_I^i - b_C^i a_I^i + c_C^i \big(a_I^i y_I^j - b_I^i x_I^j\big)/f = 0 \qquad (13)$$

Substituting (11) into (13), we obtain the relationship between v_I^i and v_C^i:

$$a_I^i \big(b_C^i z_C^j - c_C^i y_C^j\big) = b_I^i \big(a_C^i z_C^j - c_C^i x_C^j\big) \qquad (14)$$

After some functional decompositions and variable substitutions, the projected v_I^i can be formulated as

$$\begin{bmatrix} a_I^i \\ b_I^i \end{bmatrix} = \begin{bmatrix} (y_W^j - y)\big(a_W^i \cos\alpha + c_W^i \sin\alpha\cos\theta\big) - (x_W^j - x)\big(b_W^i \cos\alpha + c_W^i \sin\alpha\sin\theta\big) + \sin\alpha\,(H - z_W^j)\big(b_W^i \cos\theta - a_W^i \sin\theta\big) \\ -c_W^i\big((x_W^j - x)\cos\theta + (y_W^j - y)\sin\theta\big) - (H - z_W^j)\big(a_W^i \cos\theta + b_W^i \sin\theta\big) \end{bmatrix} \qquad (15)$$

where H is the distance between O_C and O_R. Similarly, we can obtain the vector n_I^i corresponding to n_W^i.

2.3 Generating scene feature

The scene feature is represented in the Hough domain using the above results as

$$\hat{Y}_i = [\rho, \vartheta]^{\mathrm T} = T(p_I^j, v_I^i) \qquad (16)$$

where the function T transforms p_I^j and v_I^i into the line defined by ρ and ϑ.

Fig. 3 Scene prediction
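To make the projection chain of (10) and (11) concrete, the following sketch composes the world-to-robot transform given by ϕ with a calibrated robot-to-camera extrinsic and then applies the perspective division of (11). The extrinsic (R_rc, t_rc) and the frame conventions are assumptions for illustration; the paper only states the geometry informally (camera centered on the robot's vertical axis at height H, panned by α), and the fish-eye distortion is assumed to have been corrected beforehand, as described in Section 1.3.

```python
import numpy as np

def world_to_robot(p_w, pose):
    """Transform a world point p_W into the robot frame using the pose phi = [x, y, theta]
    (rotation about the vertical axis plus planar translation)."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[ c,  s, 0.0],
                  [-s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ (p_w - np.array([x, y, 0.0]))

def project_point(p_w, pose, R_rc, t_rc, f):
    """proj_p of (10): world -> robot -> camera, then the perspective division of (11).
    R_rc, t_rc form the calibrated robot-to-camera extrinsic (assumed known); f is the
    focal length. Returns the image-plane point p_{I/C} = [x_C f/z_C, y_C f/z_C, -f]."""
    p_r = world_to_robot(p_w, pose)
    p_c = R_rc @ p_r + t_rc
    return np.array([p_c[0] * f / p_c[2], p_c[1] * f / p_c[2], -f])
```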


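The Hough-domain feature of (16) can be sketched in the same spirit: a projected point p_I and direction v_I are converted into the normal angle ϑ and the signed distance ρ of the line from the image origin. The sign and angle conventions below are our assumption, since the paper does not spell them out.

```python
import numpy as np

def hough_feature(p_i, v_i):
    """T(p_I, v_I) of (16): convert a point on the projected line and its direction
    [a_I, b_I] into the Hough parameters (rho, theta). theta is the orientation of
    the line normal and rho the signed distance of the line from the image origin
    (conventions assumed, not taken from the paper)."""
    a, b = v_i[0], v_i[1]
    theta = np.arctan2(a, -b)                            # normal n = (-b, a), up to sign
    rho = p_i[0] * np.cos(theta) + p_i[1] * np.sin(theta)  # x cos(theta) + y sin(theta) = rho
    return np.array([rho, theta])
```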

3 Uncertainty manipulations

We describe here the uncertainty manipulation for robot motion, image processing, and data association. As shown in Fig. 3, the uncertainty of robot motion inevitably disturbs the projected line features. In this case, both the descriptions and the coordinates of the uncertainty change when the random vector is transformed by (2) and (4). We model these operations in a probabilistic-geometry manner[32] and hold the Gaussian assumption owing to the central limit theorem[33].

3.1 Uncertainty propagation

Instead of linearizing (2) and (4) with respect to ϕ, we use the unscented transform to capture the statistical properties of ϕ ∈ R^{n×1}. The uncertainty propagation related to (2) includes the following steps:

• Select the samples σ_j from the jth rows or columns of the matrices ±√(n P_k).

• Construct the set of sigma points X_k = [ϕ_k, ϕ_k + σ_1, ..., ϕ_k + σ_{2n}].

• Use (2) to transform each sigma point X_k^j into X_{k+1}^j.

• Let W_j^{(c)} be the weighting scalar for the jth point, and compute the mean $\varphi_{k+1|k} = \sum_{j=0}^{2n} W_j^{(c)} X_{k+1}^j$ and the associated covariance $P_{k+1|k} = \sum_{j=0}^{2n} W_j^{(c)} \big(X_{k+1}^j - \varphi_{k+1|k}\big)\big(X_{k+1}^j - \varphi_{k+1|k}\big)^{\mathrm T}$.

For propagating the uncertainty through (4) with respect to ϕ, we observe the following rules:

• Select the samples κ_j from the jth rows or columns of the matrices ±√(n P_{k+1|k}).

• Construct the set of sigma points X = [ϕ, ϕ + κ_1, ..., ϕ + κ_{2n}].

• Apply (4) to each X^j to obtain the transformed sample point Ŷ_i^j of the map element m_W^i.

• Let W_j^{(m)} be the weighting scalar for the jth point; then predict the scene feature $\hat{Y}_i = \sum_{j=0}^{2n} W_j^{(m)} \hat{Y}_i^j$ and its associated covariance $R_i = \sum_{j=0}^{2n} W_j^{(m)} \big(\hat{Y}_i^j - \hat{Y}_i\big)\big(\hat{Y}_i^j - \hat{Y}_i\big)^{\mathrm T}$.

3.2 Image processing and data association

We process only gray-level images to reduce the data transfer on the bus and the computation cost. In Fig. 4 (a), the visible m_W^i is projected to its corresponding feature Ŷ_i. Its covariance Σ_ρϑ with respect to ϕ is calculated by the UT. We then use a search strategy within the area determined by σ_ρ. Generally, there is a slight difference between Ŷ_i and the corresponding image edge Y_j. The search algorithm starts from a random point on the given line, and each pixel along the direction n_I^i is convolved to obtain the magnitude and direction of the gradient. Once the magnitude is above a certain threshold and the gradient is opposite to the vector n_I^i, the pixel is regarded as a candidate edge pixel. The search then switches to the gradient direction (DG) and terminates when the gradient magnitude falls below the given threshold. Real edge pixels are determined by selecting the candidate with the maximal magnitude. When two such points are found, the corresponding image edge Y_j is obtained. Fig. 4 (b) illustrates the data association results using our image processing method.

Let φ be the direction of a detected edge; a reasonable uncertainty description[34−35] for an edge pixel is given by

$$\Lambda = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix} \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix} \qquad (17)$$

where σ_1 > σ_2, σ_1 is the standard deviation along φ, and σ_2 is the standard deviation along the normal of the edge[36]. The uncertainty R_j of the extracted edge is computed by the transformation

$$R_j \leftarrow R(\Lambda_j^m, \Lambda_j^n) \qquad (18)$$

where Λ_j^m and Λ_j^n correspond to the uncertainties of the two detected edge pixels.

Fig. 4 Image processing and data association with uncertainty and local feature constraints ((a) Feature detection; (b) Data association results)

4 MAP estimation

Sample d_i is formulated as the statistical distance between Ŷ_i and Y_j as follows:

$$d_i = (\hat{Y}_i - Y_j)^{\mathrm T} (R_i + R_j)^{-1} (\hat{Y}_i - Y_j) \qquad (19)$$

where R_i is the covariance of Ŷ_i and R_j that of the corresponding image edge Y_j.

Let C_{i,j} = R_i + R_j. Then the likelihood of the independent information available to the robot is

$$p(D_n|\varphi) = \prod_{(i,j)} \frac{1}{2\pi\sqrt{|C_{i,j}|}} \exp\Big(-\frac{d_i}{2}\Big) \qquad (20)$$

and the second term of (7) becomes

$$-2\ln p(D_n|\varphi) = \sum_{(i,j)} \Big(d_i + \ln\big(4\pi^2 |C_{i,j}|\big)\Big) \qquad (21)$$

MAP estimation thus leads to an optimization problem, which may be approached with various numerical techniques. In this paper, Newton's method[37] is used to estimate the parameter ϕ by minimizing the objective function. As shown in Fig. 5, the algorithm is initialized with ϕ_{k+1|k} and the covariance P_{k+1|k}. During the iteration, the estimated ϕ generates new projected lines (dash-dotted lines in Fig. 5), which tend to approach the corresponding edges. Because the deviations between predicted and observed features are generally small, the algorithm operates in the vicinity of the optimal state and rapidly converges to the minimum of the objective function. From a systematic viewpoint, our method folds in each individual observation and fuses the multi-sensor information in a batch manner. Compared with the sequential EKF[38], this method may avoid the filter divergence caused by the order in which observations of different accuracy are integrated.
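The sketch below assembles the prior term (8) and the data term (19)−(21) into the objective of (7) and minimizes it with a plain Newton iteration. The derivatives are obtained by finite differences for brevity, and the feature-prediction function h(ϕ, m) is a placeholder for the camera model of (4) and (16), so this is an illustration of the estimator's structure rather than the authors' implementation.

```python
import numpy as np

def neg_log_posterior(phi, phi_prior, P_prior, pairs, h):
    """Objective of (7): prior term (8) plus data term (21).
    pairs is a list of (m_i, Y_j, R_i, R_j) associations; h(phi, m) predicts the
    scene feature [rho, theta] of map element m (a stand-in for (4) and (16))."""
    e = phi - phi_prior
    J = e @ np.linalg.solve(P_prior, e) + np.log(8 * np.pi**3 * np.linalg.det(P_prior))
    for m_i, Y_j, R_i, R_j in pairs:
        C = R_i + R_j
        r = h(phi, m_i) - Y_j
        d_i = r @ np.linalg.solve(C, r)                      # Mahalanobis distance (19)
        J += d_i + np.log(4 * np.pi**2 * np.linalg.det(C))   # data term of (21)
    return J

def newton_minimize(phi0, cost, steps=10, eps=1e-5):
    """Newton iteration on the objective, with gradient and Hessian obtained by
    central finite differences (the paper uses Newton's method[37]; the numerical
    derivatives are our simplification). Returns phi_hat and P_hat = 2 H^{-1}, cf. (9)."""
    phi = phi0.copy()
    n = phi.size
    for _ in range(steps):
        g = np.zeros(n)
        H = np.zeros((n, n))
        for a in range(n):
            da = np.eye(n)[a] * eps
            g[a] = (cost(phi + da) - cost(phi - da)) / (2 * eps)
            for b in range(n):
                db = np.eye(n)[b] * eps
                H[a, b] = (cost(phi + da + db) - cost(phi + da - db)
                           - cost(phi - da + db) + cost(phi - da - db)) / (4 * eps**2)
        H += 1e-9 * np.eye(n)              # mild regularization for numerical safety
        phi = phi - np.linalg.solve(H, g)  # Newton step
    return phi, 2 * np.linalg.inv(H)
```

In use, cost would be bound to the current prior and associations, for example cost = lambda p: neg_log_posterior(p, phi_pred, P_pred, pairs, h), where phi_pred, P_pred, pairs, and h come from the preceding prediction, feature extraction, and data association steps.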

Fig. 5 MAP estimation using iterative optimization

5 Experiments

5.1 Localization performance in a single run

The following experiments investigate the accuracy of our method in a single run. The analysis is based on a series of performance comparisons between the SEKF and our method.

We set up the robot at three different positions and initialize the robot state to a value adjacent to the actual position. In addition, the initial state covariance is set to a very high level, which indicates that the belief in the robot position is poor. Fig. 6 illustrates the projected model lines generated by the SEKF (white) and MAP (black), respectively. In these cases, the model lines estimated by the SEKF do not approximate the targets, whereas the estimation is substantially improved by MAP.

The accuracy of the estimated pose also depends on the number of visible features and their distances from the robot. The corresponding pose estimates for Fig. 6 are listed in Table 1. In Fig. 6 (a), the visible features are relatively far from the robot; in this case, the estimate of the x coordinate is less accurate due to the camera resolution. However, the estimates of θ and y can be compensated by the vertical goalpost. In the areas where precise robot locations are required (Figs. 6 (b) and (c)), enough features are available so that quite accurate pose estimation is possible.

Fig. 6 Accuracy comparisons between SEKF and MAP at different locations

Further data analysis of the above localization performance can be seen from Table 1. The comparisons show that our method outperforms the SEKF in most cases. Especially in the estimation of θ, MAP provides better approximations. This fact indicates that our approach can cope with the rapid scene change caused by the heading uncertainty.

Table 1 Comparisons of localization performance between MAP and SEKF

Position  Real pose (x, y, θ)      Method  Estimated pose (x, y, θ)
(a)       (0.84, -0.06, -0.087)    MAP     (0.807, -0.068, -0.062)
                                   SEKF    (0.807, -0.092, -0.120)
(b)       (2.00,  0.00, -0.035)    MAP     (1.994, -0.019, -0.048)
                                   SEKF    (2.035, -0.038, -0.063)
(c)       (3.27, -0.10,  1.428)    MAP     (3.323, -0.125,  1.428)
                                   SEKF    (3.336, -0.120,  1.419)

5.2 On-the-fly performance

In this section, we conduct an experiment to evaluate the on-the-fly localization performance. The principle is that, using our method, the robot should be aware of its state and should then be able to reach the given targets.

As shown in Fig. 7, the desired path consists of a set of given targets. We first put the robot at ϕ = [1.95, 0, 0]^T while the system state is set to ϕ_0 = [2.0, 0, 0]^T. With our method, the robot tries to locate itself and drives to the targets. We illustrate parts of the 2D trajectories generated by our method and by dead reckoning; the state covariance is represented as an error ellipse. The trajectory with black ellipses derives from our method, whereas the trajectory consisting of gray ellipses is obtained exclusively through dead reckoning. Without the incorporation of visual information, the robot is unable to recover from the wrong positions where the gray ellipses assemble. In addition, the state covariance continuously diverges, which degrades the robot's belief about its position. With our method, in contrast, the robot can correct its position and approximately reach the given targets.

Another experiment is conducted to compare the localization performance of the SEKF and MAP. We first put the robot at the starting point ϕ_0 = [0.8, 0, 0]^T and then make it move along a fixed path. At regular intervals, the robot pose and the corresponding image are recorded; these data are regarded as reference values. For the comparison, the robot starts at the real position ϕ_0 while the initial state is set to ϕ_0 = [0.85, 0.05, 0]^T. Hence, when the robot travels along the original path, the captured image is the same as the one grabbed at the corresponding reference location. Both localization algorithms are applied after 18 frames have been taken; by then, the robot motion uncertainty has accumulated to a certain degree.

Fig. 8 shows the images captured from frames 19 to 24. The resulting projected lines require several steps to approximate the corresponding edges. Apparently, the error of the y component is not corrected properly by the SEKF. The reason may be the convergence rate when the observations are fused sequentially.

Fig. 9 demonstrates the localization results of our method. From frame 20 on, the resulting predicted features are much closer to their corresponding targets than those of the SEKF.

Fig. 10 shows the deviations between the reference poses and the corresponding poses estimated by SEKF and MAP, respectively. The estimate of x by MAP is similar to that by SEKF. This may be due to the camera resolution in the depth of field: map lines far away from the robot cannot provide accurate references. However, from the estimates of y and θ, we can tell that MAP generates better estimates than the SEKF.

Fig. 7 On-the-fly self-localization, MAP vs. Dead reckoning

Fig. 8 On-the-fly self-localization using SEKF ((a)∼(f): frames 19 to 24)

Fig. 9 On-the-fly self-localization using MAP ((a)∼(f): frames 19 to 24)



Fig. 10 On-the-fly self-localization performance, MAP vs. SEKF

5.3 Discussion

From the above experimental results, we can summarize the features of our system as follows.

Accuracy: This property originates from the MAP estimator, which uses an iterative scheme to obtain the solution. If the initial pose is close to the true solution and enough environmental features are available to the robot, our method converges to the minimum quickly. In addition, the application of the unscented transform enables the robot to propagate its state accurately.

Robustness: This performance is due to the manipulation of uncertainty. We provide a reasonable error propagation scheme using the unscented transform. These properties improve the flexibility of our system when dealing with occasional errors, so the robot can recover from position disturbances to a certain degree.

6 Conclusion

In this paper, we develop a vision-based self-localization system for our mobile robot in a statistical sense. To fuse the information from the camera and odometry, we use a MAP estimator to generate a comparatively better estimate of the system state. We developed a hierarchical architecture to fulfill the on-the-fly self-localization task, which enables us to understand thoroughly how multi-threaded, real-time tasks work cooperatively on an embedded RTOS. We consider the uncertainty inherent in the sensor modeling, the system state, and the measurements. In contrast to the traditional linearization method, we manipulate the random errors efficiently by using the unscented transform, which generates an accurate mean and associated covariance without explicitly calculating the Jacobians. Furthermore, we provide a fast edge detection method and use the statistical Mahalanobis distance for data association. We plan to provide a more robust feature extraction method and a localization approach that copes with the ambiguity of data association.

References

1 Guilherme N D, Avinash C K. Vision for mobile robot navigation: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(2): 237−267
2 Iocchi L, Nardi D. Hough localization for mobile robots in polygonal environments. Robotics and Autonomous Systems, 2002, 40(1): 43−58
3 Fang Fang, Ma Xu-Dong, Dai Xian-Zhong. Mobile robot global localization based on model matching in Hough space. Robot, 2005, 27(1): 35−40 (in Chinese)
4 Jogan M, Leonardis A. Robust localization using an omnidirectional appearance-based subspace model of environment. Robotics and Autonomous Systems, 2003, 45(1): 51−72
5 Emanuele M, Takeshi M, Hiroshi I. Image-based memory for robot navigation using properties of omnidirectional images. Robotics and Autonomous Systems, 2004, 47(4): 251−267
6 Linåker F, Ishikawa M. Real-time appearance-based Monte Carlo localization. Robotics and Autonomous Systems, 2006, 54(3): 205−220
7 Li Gui-Zhi, An Cheng-Wan, Yang Guo-Sheng, Tan Min, Tu Xu-Yan. Scene recognition for mobile robot localization. Robot, 2005, 27(2): 123−127 (in Chinese)
8 Thrun S, Beetz M, Bennewitz M, Cremers A B, Dellaert F, Fox D. Probabilistic algorithms and the interactive museum tour-guide robot Minerva. International Journal of Robotics Research, 2000, 19(11): 972−999
9 Burgard W, Cremers A B, Fox D, Hähnel D, Lakemeyer G, Schulz D. Experiences with an interactive museum tour-guide robot. Artificial Intelligence, 2000, 114(1-2): 3−55
10 Li Mao-Hai, Hong Bing-Rong. Progress of probabilistic localization methods in mobile robots. Robot, 2005, 7(4): 380−384 (in Chinese)
11 Li Qun-Ming, Xiong Rong, Chu Jian. Localization approaches for indoor autonomous mobile robot: a review. Robot, 2003, 25(6): 560−567 (in Chinese)
12 Fang Zheng, Tong Guo-Feng, Xu Xin-He. Study of autonomous robot self-localization methods based on Bayesian filter theory. Control and Decision, 2006, 21(8): 841−847 (in Chinese)
13 Durrant-Whyte H F, Bailey T. Simultaneous localization and mapping: Part I. IEEE Robotics and Automation Magazine, 2006, 13(2): 99−110
14 Bailey T, Durrant-Whyte H F. Simultaneous localization and mapping (SLAM): Part II. IEEE Robotics and Automation Magazine, 2006, 13(3): 108−117
15 Adorni G, Cagnoni S, Enderle S, Kraetzschmar G K, Mordonini M, Michael P. Vision-based localization for mobile robots. Robotics and Autonomous Systems, 2001, 36(2-3): 103−119
16 Fox D, Burgard W, Thrun S. Markov localization for mobile robots in dynamic environments. Journal of Artificial Intelligence Research, 1999, 11: 391−427
17 Kwok C, Fox D, Meila M. Real-time particle filters. Proceedings of the IEEE, 2004, 92(3): 469−484
18 Kwok C, Fox D, Meila M. Adaptive real-time particle filters for robot localization. In: Proceedings of IEEE International Conference on Robotics and Automation. Taipei, Taiwan: IEEE, 2003. 2836−2841
19 Arras K O, Tomatis N, Jensen B T, Siegwart R. Multisensor on-the-fly localization: precision and reliability for applications. Robotics and Autonomous Systems, 2001, 34(2-3): 131−143
20 Pan Liang-Chen, Chen Wei-Dong. Vision-based localization of indoor mobile robot. Robot, 2006, 28(5): 504−509 (in Chinese)
21 Wang Jing-Chuan, Chen Wei-Dong, Cao Qi-Xin. Omnivision and odometer based self-localization for mobile robot. Robot, 2006, 27(1): 41−45 (in Chinese)
22 Julier S J, Uhlmann J K, Durrant-Whyte H F. A new approach for filtering nonlinear systems. In: Proceedings of the American Control Conference. Seattle, USA: IEEE, 1995. 1628−1632
23 Sergios T, Konstantinos K. Pattern Recognition (Second Edition). Beijing: Publishing House of Electronics Industry, 2003. 31−34
24 Bing M, Lakshmanan S, Hero O A. Simultaneous detection of lane and pavement boundaries using model-based multisensor fusion. IEEE Transactions on Intelligent Transportation Systems, 2000, 1(3): 135−147
25 Julier S J, Uhlmann J K, Durrant-Whyte H F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control, 2000, 45(3): 477−482
26 Neto G, Costelha H, Lima P. Topological navigation in configuration space applied to soccer robots. Lecture Notes in Computer Science, 2004, 3020: 551−558

27 Schmitt T, Hanek R, Beetz M, Buck S, Radig B. Cooperative probabilistic state estimation for vision-based autonomous mobile robots. IEEE Transactions on Robotics and Automation, 2002, 18(5): 670−684
28 Diosi A, Kleeman L. Advanced sonar and laser range finder fusion for simultaneous localization and mapping. In: Proceedings of IEEE International Conference on Intelligent Robots and Systems. Taipei, Taiwan: IEEE, 2004. 1854−1859
29 Bengtsson O, Baerveldt A J. Robot localization based on scan-matching: estimating the covariance matrix for the IDC algorithm. Robotics and Autonomous Systems, 2003, 44(1): 29−40
30 Zhuang Yan, Wang Wei, Wang Ke, Xu Xiao-Dong. Mobile robot indoor simultaneous localization and mapping using laser range finder and monocular vision. Acta Automatica Sinica, 2005, 31(6): 925−933 (in Chinese)
31 Nicola T. Hybrid, Metric-topological, Mobile Robot Navigation [Ph. D. dissertation], 2001
32 Borges G A, Aldon M B. Robustified estimation algorithms for mobile robot localization based on geometrical environment maps. Robotics and Autonomous Systems, 2003, 45(3-4): 131−159
33 Dhome M, Richetir M, Lapreste J T. Determination of the attitude of 3D objects from a single perspective view. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 11(12): 1265−1278
34 Durrant-Whyte H F. Uncertain geometry in robotics. In: Proceedings of the International Conference on Robotics and Automation. Oxford, UK: IEEE, 1987. 851−856
35 Ayache N, Faugeras O D. Maintaining representations of the environment of a mobile robot. IEEE Transactions on Robotics and Automation, 1989, 5(6): 804−819
36 Zhang Z Y, Faugeras O D. 3D Dynamic Scene Analysis: A Stereo Based Approach. Berlin: Springer-Verlag, 1992. 43−54
37 Burden R L, Faires J D. Numerical Analysis (Seventh Edition). Beijing: Higher Education Press, 2001. 611−628
38 Neira J, Tardós J D, Horn J, Schmidt G. Fusing range and intensity images for mobile robot localization. IEEE Transactions on Robotics and Automation, 1999, 15(1): 76−84

WANG Ke  Ph. D. candidate at the Research Center of Information and Control, Dalian University of Technology. His research interest covers computer vision, image processing, human-machine interaction, and estimation theory and their applications in robotics. Corresponding author of this paper. E-mail: qingshang00@163.com

WANG Wei  Professor at the Research Center of Information and Control, Dalian University of Technology. He received his Ph. D. degree in control theory and control engineering from Northeastern University in 1988. He was a post-doctoral researcher at the Norwegian University of Science and Technology (1990∼1992). His research interest covers predictive control, robotics, and intelligent control. E-mail: wangwei@dlut.edu.cn

ZHUANG Yan  Lecturer in the Department of Automation, Dalian University of Technology. He received his Ph. D. degree in control theory and control engineering from Dalian University of Technology in 2004. His research interest covers robot localization, map building, and navigation. E-mail: zhuang@dlut.edu.cn
