SLAM Simulation Part
Simulated Turtlebot and Gazebo
We downloaded a virtual machine (VMware) and connected it to MATLAB through ROS. In MATLAB, run the following to connect:
rosinit(MASTER_IP)
This connects MATLAB to the ROS master running on the VM and starts the MATLAB global node.
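A minimal sketch (the IP address below is an example; use the master address reported by the VM):
% Connect MATLAB to the ROS master running on the VM
masterIP = "192.168.178.128";   % example address; substitute your VM's IP
rosinit(masterIP)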
Understanding ROS
We read about ROS terminology and how it works; the picture below best summarizes the ROS concepts we need:
Figure 1. Source: https://wiki.ros.org/rosbuild/ROS/Concepts#Stacks
Main commands in ROS:
rosnode list – lists the nodes
rostopic list – lists the topics
rostopic info <topic name> – shows the publishers and subscribers of the topic
rostopic type <topic name> – shows the message type used by a topic
rosmsg show <message type> – shows the properties of a message type
Use rospublisher to create a publisher that sends ROS string messages to any topic:
<pub node> = rospublisher("/chatter","std_msgs/String","DataFormat","struct")
pause(2) % Wait to ensure the publisher is registered
To publish a message on a topic:
send(<publisher>,<message>)
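A runnable sketch (the topic name and message text are arbitrary examples):
% Create a publisher and send one string message on /chatter
chatterPub = rospublisher("/chatter","std_msgs/String","DataFormat","struct");
pause(2)                        % wait until the publisher is registered
msg = rosmessage(chatterPub);   % empty std_msgs/String message (struct)
msg.Data = 'hello from MATLAB';
send(chatterPub,msg)            % publish on /chatter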
Use rossubscriber to create a subscriber to any topic. rossubscriber detects the message type automatically, so you do not need to specify it.
<sub node> = rossubscriber("/pose","DataFormat","struct")
receive(<sub node>,<timeout in seconds>) – gets data from the subscriber
<received variable>.<message field>.<message field>… – accesses nested message fields
rosShowDetails(<received variable>) – shows all message fields in the received variable
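For example (the /odom topic carries the simulated TurtleBot's odometry; the field path follows nav_msgs/Odometry):
% Receive one odometry message and read the position out of it
odomSub = rossubscriber("/odom","DataFormat","struct");
odomMsg = receive(odomSub,10);      % wait up to 10 seconds for a message
rosShowDetails(odomMsg)             % print every field of the message
pos = odomMsg.Pose.Pose.Position;   % nested field access
fprintf("x = %.2f, y = %.2f\n",pos.X,pos.Y)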
Saving and Loading:
save('posedata.mat','<received variable>')
clear <received variable>
messageData = load('posedata.mat')
messageData.<received variable>
delete('posedata.mat')
Callbacks
Instead of using receive to get data, you can specify a function to be called when a new message is received (a callback). Callbacks are essential if you want to use multiple subscribers.
robotpose = rossubscriber("/pose",@exampleHelperROSPoseCallback,"DataFormat","struct")
One way of sharing data between your main workspace and the callback
function is to use global variables.
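A minimal sketch of such a callback (poseCallback and latestPose are hypothetical names introduced for illustration):
% Callback that stores the newest pose in a global variable
function poseCallback(~,message)
    global latestPose
    latestPose = message.Pose.Pose;   % keep only the pose portion
end
% In the main script:
global latestPose
odomSub = rossubscriber("/odom",@poseCallback,"DataFormat","struct");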
In odometry, the pose orientation is represented as a quaternion (4 parameters), which avoids "gimbal lock", unlike Euler angles (3 parameters).
The TurtleBot uses the /odom topic to publish its current position and orientation (collectively denoted as pose). Since the TurtleBot is not equipped with a GPS system, the pose is relative to the pose the robot had when it was first turned on.
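To get the yaw angle, the quaternion can be converted to Euler angles, e.g. (assuming odomMsg was received as above; quat2eul ships with the Robotics System Toolbox):
% Convert the odometry quaternion to yaw (rotation about z)
q = odomMsg.Pose.Pose.Orientation;
eul = quat2eul([q.W q.X q.Y q.Z]);   % default 'ZYX' order: [yaw pitch roll]
yaw = eul(1);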
rosshutdown – shuts down the ROS global node (and the ROS master, if MATLAB started it)
tf Transformation Tree in ROS
The tf system in ROS keeps track of multiple coordinate frames and
maintains the relationship between them in a tree structure. tf is distributed,
so that all coordinate frame information is available to every node in the ROS
network.
rostf – creates a new transformation tree object
getTransform – retrieves the transformation that describes the relationship between two frames
waitForTransform – waits until the transformation between the two frames is available
transform – transforms message entities into the target coordinate frame
sendTransform – broadcasts a transformation
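A small sketch (the frame names are typical TurtleBot frames; check tftree.AvailableFrames for the ones actually published):
% Look up the robot base pose in the map frame via tf
tftree = rostf;
waitForTransform(tftree,"map","base_link");
tf = getTransform(tftree,"map","base_link");
disp(tf.Transform.Translation)   % x, y, z of base_link in the map frame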
Simple Obstacle Avoidance Algorithm
The lidar scan output is a matrix with 2 columns (the x and y coordinates) and one row per detected point.
This matrix can be plotted as in the image below, where the center is always the robot (wherever it moves) and the positive x-axis points to the left:
Based on the distance readings from the laser scan, we can implement a simple obstacle avoidance algorithm using a simple while loop.
First, we set some parameters that will be used in the processing loop; these values can be modified for different behavior.
spinVelocity = 0.6; % Angular velocity (rad/s)
forwardVelocity = 0.1; % Linear velocity (m/s)
backwardVelocity = -0.02; % Linear velocity (reverse) (m/s)
distanceThreshold = 0.6; % Distance threshold (m) for turning
We compute the distance to the closest obstacle using the Pythagorean theorem.
The loop moves the robot forward and computes the closest obstacle to the robot; when an obstacle comes within the distanceThreshold, the robot turns. The loop stops after 20 seconds of run time.
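The loop assumes a laser subscriber and a velocity publisher already exist; a typical setup (the topic names are the simulated TurtleBot defaults, so treat them as assumptions) is:
laser = rossubscriber("/scan","DataFormat","struct");    % lidar scans
robot = rospublisher("/cmd_vel","DataFormat","struct");  % velocity commands
velmsg = rosmessage(robot);                              % geometry_msgs/Twist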
tic;
while toc < 20
    % Collect information from laser scan
    scan = receive(laser);
    rosPlot(scan);
    data = rosReadCartesian(scan);
    x = data(:,1);
    y = data(:,2);
    % Compute distance of the closest obstacle
    dist = sqrt(x.^2 + y.^2);
    minDist = min(dist);
    % Command robot action
    if minDist < distanceThreshold
        % If close to an obstacle, back up slightly and spin
        velmsg.Angular.Z = spinVelocity;
        velmsg.Linear.X = backwardVelocity;
    else
        % Continue on forward path
        velmsg.Linear.X = forwardVelocity;
        velmsg.Angular.Z = 0;
    end
    send(robot,velmsg);
end
Some observations:
- When the robot rotates and moves forward at the same time, it moves in a circle around a fixed point; the radius of this circle grows with the linear velocity and shrinks with the angular velocity (for constant commands, r = v/ω).
- The Gazebo simulation models momentum: when you command a high speed and then send a stop signal, the robot does not stop right away but slides.
- The algorithm above ignores the direction of the min(dist) point relative to the robot (in front of it, in its path of motion, etc.), so it sometimes senses an object beside it and reacts as if it were about to crash into it; occasionally the reaction even brings the robot closer to that object.
To address this last observation, I modified the algorithm to take the direction of the min(dist) point into account:
spinVelocity = 0.5*pi;      % Angular velocity (rad/s): 90 deg per second
forwardVelocity = 0.5;      % Linear velocity (m/s)
backwardVelocity = -0.3;    % Linear velocity (reverse) (m/s)
distanceThreshold = 0.4;    % Distance threshold (m) for turning
revtime = 0.6;              % Reversing/turning time (s)
tic;
while toc < 200
    % Collect information from laser scan
    scan = receive(laser);
    rosPlot(scan);
    data = rosReadCartesian(scan);
    x = data(:,1);
    y = data(:,2);
    % Compute distance of the closest obstacle
    dist = sqrt(x.^2 + y.^2);
    [minDist,minDistIndx] = min(dist);
    % Command robot action
    if minDist < distanceThreshold && y(minDistIndx) > 0 && y(minDistIndx) < 3*x(minDistIndx)
        % Close to an obstacle on the left or front:
        % back up slightly and spin clockwise
        velmsg.Angular.Z = -spinVelocity;
        velmsg.Linear.X = backwardVelocity;
        fprintf("left: y = %.2f, x = %.2f\n",y(minDistIndx),x(minDistIndx));
        send(robot,velmsg);
        pause(revtime);
    elseif minDist < distanceThreshold && y(minDistIndx) < 0 && y(minDistIndx) > -3*x(minDistIndx)
        % Close to an obstacle on the right:
        % back up slightly and spin counterclockwise
        velmsg.Angular.Z = spinVelocity;
        velmsg.Linear.X = backwardVelocity;
        fprintf("right: y = %.2f, x = %.2f\n",y(minDistIndx),x(minDistIndx));
        send(robot,velmsg);
        pause(revtime);
    else
        % Continue on forward path
        velmsg.Linear.X = forwardVelocity;
        velmsg.Angular.Z = 0;
        send(robot,velmsg);
    end
end
After this modification, the robot got much better at avoiding obstacles, though it still makes small mistakes from time to time.
Localize TurtleBot Using the Monte Carlo Localization Algorithm
Monte Carlo Localization (MCL) is a particle filter-based algorithm for robot localization. The algorithm requires a known map, and the task is to estimate the robot's pose (position and orientation) within the map using the robot's motion and sensor data. The algorithm begins with an initial belief about the probability distribution of the robot's pose, represented by particles sampled from that belief. Whenever the robot's pose changes, the particles propagate according to its motion model. When new sensor readings arrive, each particle evaluates its accuracy by computing the likelihood of observing those readings from its current pose. The algorithm then resamples the particles, biasing them toward the more likely poses. These moving, sensing, and resampling steps are iterated until all particles converge to a single cluster near the robot's true pose, indicating that localization was successful.
Adaptive Monte Carlo Localization (AMCL), a variant of MCL, dynamically adjusts the number of particles based on the Kullback-Leibler (KL) distance, to ensure that the particle distribution closely resembles the true distribution of the robot's state given all previous sensor and motion measurements.
We loaded the binary occupancy grid:
load officemap.mat
show(map)
We then ran a loop that receives a laser scan and an odometry message and creates a lidarScan object to pass to the AMCL object. For sensors that are mounted upside down, the order of the scan angle readings must be reversed using the 'flip' function. Next, the robot's pose [x, y, yaw] is computed from the odometry message, and the estimated pose and covariance are updated using the new odometry and sensor readings. Finally, the robot is driven to the next pose, and its estimated pose, particles, and laser scans are plotted on the map.
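A condensed sketch of that loop (the subscriber names, numUpdates, and the AMCL configuration are assumptions; the structure follows the MathWorks AMCL example):
% AMCL update loop; amcl is a configured monteCarloLocalization object
% and laserSub/odomSub are subscribers as in the ROS section above
for i = 1:numUpdates
    scanMsg = receive(laserSub);
    odomMsg = receive(odomSub);
    scan = rosReadLidarScan(scanMsg);   % lidarScan object for AMCL
    % Upside-down sensor: scan = lidarScan(scan.Ranges,flip(scan.Angles));
    p = odomMsg.Pose.Pose;              % robot pose [x y yaw] from odometry
    q = p.Orientation;
    eul = quat2eul([q.W q.X q.Y q.Z]);
    pose = [p.Position.X, p.Position.Y, eul(1)];
    % Update the estimated pose and covariance with new odometry and scan
    [isUpdated,estPose,estCov] = amcl(pose,scan);
end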
After the first AMCL update, the particles are uniformly distributed inside the free office space:
After 8 updates, the particles start converging to areas with higher likelihood:
After 60 updates, all particles should converge to the correct robot pose and
the laser scans should closely align with the map outlines.
SLAM with MATLAB
We loaded lidar data and used the lidarSLAM class. The SLAM algorithm takes in lidar scans and attaches them to nodes in an underlying pose graph. The algorithm then correlates the scans using scan matching. It also searches for loop closures, where scans overlap previously mapped regions, and optimizes the node poses in the pose graph.
We used addScan to add the scans one by one in a loop to the lidarSLAM object. We set the Loop Closure Threshold to 205 and studied the effect of the Loop Closure Search Radius; a sketch of this procedure follows below.
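A minimal sketch of how we used the class (the maxLidarRange and mapResolution values are assumptions, and scans stands for the loaded cell array of lidar scans):
% Build a map with lidarSLAM; parameter values are illustrative
maxLidarRange = 8;      % meters (assumed)
mapResolution = 20;     % cells per meter (assumed)
slamAlg = lidarSLAM(mapResolution, maxLidarRange);
slamAlg.LoopClosureThreshold = 205;   % value used in this project
slamAlg.LoopClosureSearchRadius = 8;  % the parameter we varied

for i = 1:numel(scans)
    addScan(slamAlg, scans{i});       % scan matching + loop closure search
end

% Extract optimized scans/poses and build the occupancy grid (Figure 3)
[scansSLAM, poses] = scansAndPoses(slamAlg);
map = buildMap(scansSLAM, poses, mapResolution, maxLidarRange);
show(map)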
Figure 2. Studying the effect of the Loop Closure Search Radius.
Here is the occupancy grid map built using lidar SLAM:
Figure 3. Occupancy Grid Map Built Using Lidar SLAM