Module 4 Sec
Swarm intelligence is the idea that a group of simple individuals can work together
to create smart and effective behavior. This means that a group can do more and be
smarter together than each member can on their own. This idea applies to both
animals and robots or computer programs.
In nature, animals need to find food to survive. Foraging theory says that animals try
to get the most energy from food while spending the least time and effort. This helps
them save time for other important activities like finding mates or building shelters.
Different animals have different ways of finding food, but they all aim to be efficient
to survive.
When members of a group share information, they can all benefit and become more
knowledgeable. For example, if two people share unique pieces of information, they
both end up with more knowledge. This sharing of information helps the group work
better together and achieve more.
Biological Inspiration
Swarm intelligence is inspired by how animals behave. For example, ants use
chemical trails to find the shortest path to food. Other animals like termites, bees,
and birds also show self-organized behavior that helps them survive and succeed
without any central leader.
Swarm intelligence includes both living and non-living systems. Examples include the
gravitational pull between planets, chemical reactions, or self-moving particles. These
systems follow simple rules and local interactions, leading to smart and reliable
group behavior without a central control.
Conclusion
Swarm intelligence shows how simple actions and interactions can create complex
and intelligent systems. This concept applies to both nature and technology,
providing useful ideas for science and engineering. The study of swarm intelligence is
ongoing, with researchers working to understand how these systems work and why
they are effective.
BACKGROUND OF SWARM INTELLIGENT SYSTEMS
Swarm intelligent systems have developed over time based on the principle that the combined
actions of many simple individuals can lead to smart and effective outcomes. Their success is
measured by their usefulness in real-world applications. There are many techniques in swarm
intelligence, but only a few are widely recognized and used; we will explore those in detail
later.
Let's understand how swarm intelligence works using a different example: finding a lost
person in a forest. Imagine a group of search and rescue volunteers tasked with locating the
missing individual. Here are three methods they might use:
1. Grid Search
• Process:
• Divide the forest into a grid.
• Assign each volunteer to search one section of the grid.
• Volunteers systematically search their assigned sections, ensuring no area is
left unchecked.
• If a volunteer finds the person, they notify the rest of the group.
• Explanation:
• This method involves breaking the large search area into smaller, manageable
sections. Each volunteer works independently but follows a coordinated plan.
The combined efforts ensure the entire area is covered efficiently, similar to
how swarm intelligence systems use simple rules and collective behavior to
solve problems.
2. Random Walk
• Process:
• Each volunteer walks randomly through the forest, changing direction
frequently.
• Volunteers occasionally communicate their positions to ensure they don't
cover the same area multiple times.
• If a volunteer finds the person, they signal the others.
• Explanation:
• This method leverages randomness to explore different areas of the forest.
Random movements increase the chances of covering diverse regions quickly.
It illustrates how randomness in swarm intelligence (like random search
patterns) can help explore and solve complex problems effectively.
3. Signal-Based Search
• Process:
• Volunteers spread out and regularly signal their findings to one another.
• Based on these signals, the group adjusts where each volunteer searches,
gradually narrowing the search area.
• If a volunteer finds the person, they signal the others.
• Explanation:
• This method relies on communication and coordination among volunteers. By
sharing information and adjusting their actions based on signals, they can
efficiently narrow down the search area. This approach mirrors how swarm
intelligence systems use information sharing and interaction to achieve a
common goal.
Summary
These methods show how a group of volunteers can use different strategies to find a lost
person in a forest. Similarly, swarm intelligence systems use simple rules and interactions to
tackle complex tasks. The collective behavior of many individuals, each following simple
rules and sharing information, can lead to effective and intelligent solutions, much like the
different ways to conduct a search and rescue operation in a forest.
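The random-walk strategy can be sketched as a tiny simulation. Everything concrete below (grid size, target location, number of volunteers) is an illustrative assumption, not part of the text:

```python
import random

def random_walk_search(n_volunteers=5, grid=10, target=(7, 3), max_steps=20000, seed=42):
    """Volunteers take random steps through a gridded 'forest' until one of
    them stumbles onto the target cell, mimicking the random-walk method."""
    rng = random.Random(seed)
    positions = [(rng.randrange(grid), rng.randrange(grid)) for _ in range(n_volunteers)]
    for step in range(1, max_steps + 1):
        moved = []
        for x, y in positions:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])  # change direction frequently
            x, y = (x + dx) % grid, (y + dy) % grid                  # wrap at the forest edge
            if (x, y) == target:
                return step  # the finder signals the others
            moved.append((x, y))
        positions = moved
    return None  # not found within the step budget
```

With several independent walkers, coverage grows quickly even without coordination, which is the point the explanation above makes.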
Key Features of Ant Algorithms
• Multi-Agent Approach: These algorithms use multiple agents, like simulated ants,
that work together to find solutions. Just as real ants work as a team, these artificial
ants collaborate to achieve their goal.
• Combinatorial Optimization: They excel at problems where you need to find the
best combination or arrangement of elements. This means they are great for tasks that
involve organizing or arranging things in an optimal way.
• Inspired by Nature: Real ants find the shortest path to food by following trails of
pheromones (chemical signals). These algorithms mimic that behavior, with artificial
ants leaving and following virtual pheromone trails to find the best paths or solutions.
Applications
• Local Search: Finding the best solution in a specific, localized part of a problem.
• Image Mapping and Compression: Optimizing how images are stored and processed
to save space and improve efficiency.
• Database Search: Enhancing the speed and accuracy of searching through large
databases for specific information.
Ongoing Research
Researchers are continuously exploring these algorithms because of their effectiveness in
finding good solutions to complex problems. They are discovering new ways to use ant
algorithms, leading to many potential applications that could benefit from this approach.
Summary
Ant algorithms are powerful tools inspired by how real ants find food. By working together
and following simple rules, these artificial ants can solve complex problems efficiently. They
are being adapted for use in many fields, proving to be very effective in finding near-optimal
solutions.
1. Pheromone Trails:
• As ants travel between the food and their nest, they leave behind a chemical
trail called pheromone.
• This pheromone trail helps other ants follow the same path.
• When ants reach a fork in the path, they randomly choose which direction to
go.
• Some ants will end up taking a shorter path, while others will take a longer
one.
• Ants on the shorter path return to the nest faster and lay down more
pheromone.
• Over time, the shorter path gathers more pheromone because more ants
travel it more quickly.
2. Path Selection:
• Ants can smell the pheromone and prefer to follow paths with a stronger
pheromone scent.
• This positive feedback means more pheromone attracts more ants, leading
the colony to choose the shortest path.
Key Concepts
• Stigmergy: This is a type of indirect communication where ants influence each other
by changing their environment, like laying down pheromone trails.
• Autocatalysis (Positive Feedback): This is the process where the more a path is
used, the more attractive it becomes due to accumulating pheromone, which
reinforces the behavior.
• Recruitment: Ants are drawn to food sources by following pheromone trails left by
other ants. This process is known as "mass recruitment."
Summary
In simple terms, ants use pheromone trails to communicate and find the shortest
path to food. This collective behavior, driven by positive feedback and indirect
communication, helps them work efficiently as a group.
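The positive-feedback loop described here can be illustrated with a minimal two-path ("double bridge") simulation; the deposit amounts and ant count below are assumptions chosen only to make the effect visible:

```python
import random

def double_bridge(n_ants=500, seed=0):
    """Ants choose between a short and a long path in proportion to
    pheromone. The short path is completed faster, so it accrues
    pheromone at a higher rate (here 5 units vs 1 per trip), and
    positive feedback concentrates the colony on it."""
    rng = random.Random(seed)
    tau_short, tau_long = 1.0, 1.0  # equal initial pheromone
    for _ in range(n_ants):
        p_short = tau_short / (tau_short + tau_long)
        if rng.random() < p_short:
            tau_short += 5.0  # faster round trips -> more deposits per unit time
        else:
            tau_long += 1.0
    return tau_short, tau_long
```

After a few hundred ants, the short path holds far more pheromone, exactly the autocatalytic effect described above.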
1. Cooperating Individuals:
• Artificial ants work together like real ants, sharing information to solve
problems.
• Ants make local decisions to find the shortest path between points,
using nearby information rather than looking far ahead.
• Ants sometimes choose paths randomly but are more likely to choose
paths with stronger pheromone trails.
2. Discrete World:
• Artificial ants move from one specific position to another, unlike real
ants that move continuously.
3. Internal State:
• Each ant has a memory to remember where it has been and what it has
done, helping it make better decisions.
4. Pheromone Deposition:
• Artificial ants deposit pheromone in an amount that reflects the quality
of the solution they found, not merely their presence on a path.
5. Problem-Dependent Timing:
• Depending on the problem, pheromone may be laid while an ant moves or
only after it has built a complete solution.
6. Enhanced Capabilities:
• Artificial ants can be given abilities real ants lack, such as lookahead,
backtracking, or local search, to improve performance.
1. Initialization:
• Pheromone levels on all paths are set to a small initial value, and ants
are placed at their starting positions.
2. Exploration:
• Each ant builds a candidate solution step by step, choosing moves
probabilistically based on pheromone levels and local information.
3. Pheromone Update:
• Pheromone evaporates slightly on all paths, and ants deposit new
pheromone on the paths they used, with better solutions depositing more.
4. Iteration:
• The process repeats, with ants gradually finding and reinforcing better
paths, while poorer paths lose pheromone over time.
5. Convergence:
• Over time, ants focus on the best solution or a set of good solutions,
guided by the pheromone trails.
Applications
• Routing: Finding the best routes in networks (like internet traffic or delivery
routes).
• Scheduling: Optimizing schedules for tasks, jobs, or events.
• Optimization: Solving complex problems in logistics, manufacturing, and
other areas.
Summary
In simple words, an AACS uses the principles of real ant behavior, enhanced with
extra capabilities, to efficiently solve complex problems. It finds and reinforces good
solutions through cooperative exploration and pheromone-based communication.
• Each ant builds a solution one step at a time. For example, if the
problem is finding the shortest route through several cities (like in the
Traveling Salesman Problem), an ant decides the next city to visit step
by step.
1. Evaporation:
• Over time, pheromone on all paths gradually fades, so paths that stop
being used lose their attractiveness.
2. Adding Pheromone:
• After an ant finishes its journey, it leaves pheromone on the paths it
used.
• The better the solution (e.g., the shorter the route), the more
pheromone the ant leaves. This makes good paths more attractive to
other ants.
Imagine you have a maze and you want to find the shortest way through it using
ants:
• Each ant starts at the entrance and moves towards the exit.
• At each junction in the maze, ants decide which way to go based on the
amount of pheromone on each path and the distance (shorter paths
are preferred).
• When ants reach the exit, they leave pheromone on their path.
• Paths that helped ants reach the exit quickly get more pheromone.
• New ants come and follow the paths with more pheromone, but they
also try different paths sometimes to find even better solutions.
• Over time, the path with the most pheromone will usually be the
shortest path through the maze.
• Routing: Finding the best routes for network data or delivery vehicles.
• Scheduling: Creating optimal schedules for tasks or events.
• Optimization: Solving various problems in logistics, manufacturing, and
more.
Summary
In simple terms, an ACS algorithm uses artificial ants to explore different solutions to
a problem. They share information through pheromone trails, reinforcing good
solutions and allowing the system to improve over time. This approach mimics the
way real ants find efficient paths to food, making it a powerful tool for solving
complex problems.
Probabilistic Transition Rule
Probabilistic Transition Rule in Simple Ant Colony Optimization (SACO)
In a Simple Ant Colony Optimization (SACO) algorithm, artificial ants are like little
explorers navigating through a graph (think of it like a map) to find the shortest path from a
starting point to a destination. Here's a breakdown of how it all works in simple terms:
The Setup:
1. Graph Representation:
• Think of the problem as a map: cities are points (nodes), and the roads
between them are connections (edges).
2. Pheromone Trails:
• Picture each road having a special scent trail, kind of like breadcrumbs, called
pheromone.
• Initially, there's just a tiny bit of pheromone on all roads.
The Process:
1. Building Solutions:
• Our artificial ants start their journey at a starting city and want to find their
way to a destination city.
• They travel from city to city, creating a path. The length of their path is simply
how many roads they travel.
2. Decision Making:
• At each city they reach, ants need to decide which neighboring city to move to
next.
• This decision isn't random; it's based on two things:
• The amount of pheromone on the road they're considering. More
pheromone means the road is more attractive.
• Some extra local info, like how far away the neighboring city is.
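A common way to combine these two factors is the weighted product of pheromone and heuristic desirability used throughout the ACO literature; the exponents alpha and beta below are assumed example values, not prescribed by the text:

```python
def transition_probabilities(pheromone, distance, alpha=1.0, beta=2.0):
    """Probability of moving to each neighboring city: pheromone strength
    (tau^alpha) times heuristic desirability ((1/distance)^beta),
    normalized so the probabilities sum to 1."""
    scores = [(tau ** alpha) * (1.0 / d) ** beta
              for tau, d in zip(pheromone, distance)]
    total = sum(scores)
    return [s / total for s in scores]
```

With equal distances, a neighbor with twice the pheromone gets twice the probability of being chosen.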
Summary:
In SACO, ants build paths hop by hop, guided mainly by pheromone levels and simple
local information; the shortest path emerges from their collective choices.
Pheromone updating
in Ant Colony Optimization (ACO) is like leaving breadcrumbs for ants to follow.
Here's a simple breakdown of how it works:
1. Pheromone Deposit:
• As ants travel, they leave pheromone on the roads they use, marking
those roads for other ants.
2. Pheromone Evaporation:
• Over time, pheromone on every road gradually fades, so old or unused
paths lose their attractiveness.
3. Offline Update:
• After ants finish their journey and find their way back to the starting
point, they update the pheromone levels on all the roads they used.
• This update happens after the whole trip is completed.
Summary:
In ACO:
• Pheromone updating ensures that good paths are marked for ants to follow,
while also encouraging exploration.
• It's achieved through depositing pheromone on travelled paths and gradually
letting it fade away through evaporation.
• Different update strategies, like real-time updating, delayed updating, and
global updates, help ants find better paths over time.
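The deposit-and-evaporate cycle can be sketched as follows; the evaporation rate rho and deposit constant Q are assumed values, and edges are keyed by city pairs purely for illustration:

```python
def update_pheromone(tau, tour_edges, tour_length, rho=0.1, Q=1.0):
    """Offline pheromone update: evaporate every trail by a factor
    (1 - rho), then deposit Q / tour_length on each edge of the
    completed tour, so shorter tours deposit more per edge."""
    new_tau = {edge: (1 - rho) * t for edge, t in tau.items()}
    for edge in tour_edges:
        new_tau[edge] = new_tau.get(edge, 0.0) + Q / tour_length
    return new_tau
```

Edges off the tour only evaporate, while edges on the tour are refreshed, which is exactly the balance between fading old trails and marking good paths described above.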
Solution evaluation
in Ant Colony Optimization (ACO) is like giving feedback to ants on the paths they
find. Here's a simple breakdown:
1. How It Works:
• Ants naturally behave to find shorter paths.
• When an ant finds a shorter path, it reaches the destination quicker and
leaves pheromone earlier.
• Other ants follow this pheromone trail, making the shorter path more
attractive.
2. Application:
• Works well in problems like network routing.
• Ants leave pheromone regardless of the path's quality.
• Shorter paths get more reinforcement because of quicker pheromone
deposition.
Benefits:
1. Enhanced Performance:
• Mixing implicit and explicit evaluations can make the algorithm better.
• Pheromone deposition depends on solution quality, combining natural
ant behavior with direct feedback.
2. Efficiency in Network Problems:
• Works great in distributed network problems.
• The decentralized nature of such problems fits well with implicit
evaluation.
• It efficiently reinforces shorter paths without extra computational costs.
Summary:
Ant Colony Optimization (ACO) is like a group of ants searching for the best route to
food, but applied to solving complex problems. Here's a simple breakdown of how it
works:
1. Initialization:
• We set up the problem by defining what we're trying to optimize and how to
measure success.
2. Iterative Process:
• We repeat the same steps over and over until we find a good solution or
reach a stopping point.
3. Ant Movement:
• Each ant moves step by step through the problem space, probabilistically
choosing its next move based on pheromone levels and heuristic
information.
4. Solution Evaluation:
• After an ant completes its journey (constructs a solution), we check how good
that solution is.
• This is done by evaluating an objective function, which tells us how well the
solution meets our goals.
5. Pheromone Update:
• Pheromones are like breadcrumbs left by ants. They mark the paths ants have
taken.
• When ants find good solutions, we increase the pheromone level on the paths
they used.
• More pheromone means other ants are more likely to follow that path in the
future.
6. Stopping Criterion:
• We keep repeating these steps until we decide to stop.
• This could be after a certain number of iterations or when we're happy with
the solution we've found.
Summary:
• ACO mimics how ants search for food to solve complex optimization
problems.
• It balances exploring new paths and exploiting good solutions by updating
pheromone levels.
• Through iterations, it converges towards promising solutions, just like ants
converge towards food sources.
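The numbered steps above can be condensed into a short sketch for the TSP; the parameter values (ant count, alpha, beta, rho) are assumptions, not prescriptions from the text:

```python
import math
import random

def aco_tsp(cities, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0, seed=1):
    """Minimal ACO loop for the TSP: each iteration, every ant builds a
    tour guided by pheromone and distance; trails then evaporate and
    ants deposit pheromone in proportion to tour quality."""
    rng = random.Random(seed)
    n = len(cities)
    dist = [[math.dist(cities[i], cities[j]) for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]                      # initialization
    best_tour, best_len = None, float('inf')
    for _ in range(n_iters):                                 # iterative process
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:                                 # ant movement
                i = tour[-1]
                weights = [(j, (tau[i][j] ** alpha) * (1.0 / dist[i][j]) ** beta)
                           for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in weights), 0.0
                chosen = weights[-1][0]                      # round-off guard
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        chosen = j
                        break
                tour.append(chosen)
                unvisited.remove(chosen)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))                     # solution evaluation
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                                   # pheromone update:
            for j in range(n):
                tau[i][j] *= (1 - rho)                       # evaporation first...
        for tour, length in tours:
            for k in range(n):                               # ...then deposits
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += Q / length
                tau[j][i] += Q / length
    return best_tour, best_len                               # stop: fixed iteration count
```

On a toy instance such as four cities at the corners of a unit square, the loop settles on the perimeter tour.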
Evaluation function
In Ant Colony Optimization (ACO), the evaluation function acts like a judge,
giving scores to potential solutions based on how good they are. Here's a
breakdown:
1. Purpose:
• The evaluation function tells us how well a solution solves our
problem.
• It gives each solution a number that represents its quality or fitness.
2. Representation:
• In ACO, solutions are often represented by binary strings.
• Each binary string corresponds to a real value (like a number on a
graph).
3. Evaluation Process:
• Let's say we have three binary strings representing three potential
solutions.
• The evaluation function calculates the real value for each solution.
• Then, it uses this real value in the objective function to see how well
the solution works.
• The lower the value from the objective function, the better the
solution.
4. Example:
• Suppose we have three solutions represented by binary strings.
• We use the evaluation function to find out how well each solution
performs.
• Let's say the function tells us that solution 2 is the best because it
gives us the lowest value when we plug it into our objective function.
5. Updating Pheromones:
• The fitness values we get from the evaluation function help us decide
how much artificial pheromone to put on each path (solution).
• Paths that represent better solutions get more pheromone, making
them more attractive to future ants.
Summary:
• The evaluation function gives scores to potential solutions.
• These scores help us decide where to put pheromones, guiding the
ants to find better solutions.
• It's like grading papers in school – the better the paper, the higher
the score, and the more likely it is to influence future behavior.
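A minimal sketch of such an evaluation pipeline, reusing the three binary strings from the example; the decoding range [0, 15] and the objective function (x − 6)² are illustrative assumptions, not taken from the text:

```python
def decode(bits, lo=0.0, hi=15.0):
    """Map a binary string to a real value in [lo, hi] (assumed encoding)."""
    return lo + (hi - lo) * int(bits, 2) / (2 ** len(bits) - 1)

def evaluate(bits, objective=lambda x: (x - 6.0) ** 2):
    """Score a candidate: a lower objective value means a better solution."""
    return objective(decode(bits))
```

Under these assumptions '1010', '1110', and '0110' decode to 10, 14, and 6, so '0110' scores best and would receive the most pheromone.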
Pseudo-random Probability Function
Sure! Let's break down the pseudo-random probability function used by ants in the
Ant Colony Optimization (ACO) algorithm in simple terms:
1. Probability Calculation:
• Balancing act: Ants need to balance between exploiting known good paths
and exploring new ones.
• Control parameter: This balance is controlled by a parameter called q0
(for example, q0 = 0.9).
• Exploitation: A random number q between 0 and 1 is drawn; if q is less
than or equal to q0, the ant exploits by choosing the path with the
highest probability.
• Exploration: If q is greater than q0, the ant explores, choosing among
the other paths probabilistically.
3. Pseudo-random Selection:
• With probability q0 the ant deterministically picks the most attractive
path; otherwise it samples among the candidate paths in proportion to
their calculated probabilities.
By following this method, ants can dynamically choose paths in a way that balances
between using known good paths and exploring new ones, which helps in finding
optimal solutions over time.
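The rule can be written directly; the attractiveness scores passed in would normally be the pheromone/heuristic products, and q0 = 0.9 is the example value used above:

```python
import random

def choose_next(options, q0=0.9, rng=random):
    """ACS-style pseudo-random proportional rule. `options` is a list of
    (choice, score) pairs. With probability q0, exploit: pick the
    highest-scoring choice. Otherwise, explore: sample choices in
    proportion to their scores."""
    if rng.random() <= q0:
        return max(options, key=lambda cs: cs[1])[0]
    total = sum(score for _, score in options)
    r, acc = rng.random() * total, 0.0
    for choice, score in options:
        acc += score
        if acc >= r:
            return choice
    return options[-1][0]  # guard against floating-point round-off
```

Setting q0 close to 1 makes the ants greedy; lowering it encourages exploration of less-marked paths.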
Conclusion
• Shorter paths get more pheromone: Ant 1’s shorter path gets more pheromone (15)
than Ant 2’s longer path (9).
• Better solutions are reinforced: This means future ants are more likely to follow the
shorter path because it has more pheromone.
• Guiding future ants: Over time, this helps the colony of ants to converge on the best
paths, optimizing their search for solutions.
In simple terms, ants leave pheromone trails to mark good paths. The better the path (shorter
or with a better fitness value), the more pheromone it gets. This helps guide other ants to find
the best solutions more efficiently.
In ACO, pheromone trails help ants communicate which paths are good. Over time,
these pheromone trails need to be updated to prevent the algorithm from getting
stuck on a suboptimal path. This is done through a process called pheromone
evaporation. Here’s how it works:
Example:
Conclusion:
• Balancing act: Pheromone evaporation ensures that the trails do not become
overly strong, which helps the ants continue to explore new paths and avoid
getting stuck on suboptimal ones.
• Refreshing trails: The addition of new pheromone keeps the information
about good paths current and relevant.
Sure, let's break down the Ant Colony Optimization (ACO) algorithm for solving the
Traveling Salesman Problem (TSP) in simple terms:
1. Initialization Phase:
• Setup:
• Place artificial ants on different cities.
• This setup is like starting a race, with each ant starting from a different
point.
2. Search-and-Update Phase:
• Exploration:
• Ants start traveling from one city to another, trying to visit each city
exactly once and return to the starting city.
• As ants move, they leave a trail of pheromones on the paths they take.
Pheromones are like markers that indicate how good a particular path
is.
• Decision Making:
• Pheromone Update:
• After completing their tours, ants update the pheromone trails based
on the quality of their paths.
• Better (shorter) paths get more pheromones, making them more
attractive for other ants.
• This process is repeated for many iterations, allowing ants to explore
and improve their paths over time.
3. Ant's Movement:
• In the serial (SAS) variant, ants move one at a time in an orderly
fashion, each starting only after the previous ant finishes its tour.
4. Experimental Validation:
Summary:
In simple terms, the ACO algorithm for TSP works by having ants explore different
routes between cities while leaving markers (pheromones) to indicate good paths.
Over time, ants use these markers to find shorter and better routes, improving the
overall solution. The SAS model ensures ants move in an orderly fashion, enhancing
the efficiency and quality of the solutions found.
1. Graph Representation:
• Graph Basics:
• Imagine you have a map with several cities. Each city is a point (or
node) on the map.
• To connect every city to every other city, you draw lines (or arcs)
between them.
• The map with all the cities and lines is called a "complete weighted
graph."
• "Complete" means every city is connected to every other city.
• "Weighted" means each line has a number (weight) representing the
distance between the two cities it connects.
• Components:
2. Objective:
• Goal:
• A salesman needs to start from a home city, visit every other city
exactly once, and return to the starting city.
• The aim is to find the shortest possible route that does this.
• Hamiltonian Circuit:
• This special route is called a "Hamiltonian circuit," which means a loop
that visits each city exactly once and returns to the start.
3. Symmetric TSP:
• The distance between two cities is the same no matter which direction
you travel.
• Example: The distance from City A to City B is the same as from City B
to City A (d_rs = d_sr).
• Common Use:
4. Euclidean TSP:
• Euclidean Space:
• Cities are points on a flat, two-dimensional plane, each with (x, y)
coordinates.
• Distance Calculation:
• To find the distance between two cities on this plane, use the Euclidean
distance formula.
• Formula: d_rs = √((x_r − x_s)² + (y_r − y_s)²)
• This calculates the straight-line distance between two points (cities)
based on their coordinates.
Summary:
• Representation:
• A complete weighted graph: cities are nodes, and edge weights are the
distances between them.
• Objective:
• Find the shortest route that starts from a home city, visits each city
exactly once, and returns home.
• Types:
• Symmetric TSP: The distance between two cities is the same in both
directions.
• Euclidean TSP: Cities are points on a plane, and distances are
calculated using the Euclidean distance formula.
In simple terms, TSP is like planning the shortest route for a delivery driver who must
visit several locations exactly once and then return to the starting point, using the
shortest total distance possible. The problem can be symmetric (same distance both
ways) or asymmetric (different distances each way), and sometimes the distances are
calculated based on straight-line measurements between points on a map.
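The Euclidean distance computation is a one-liner; this sketch just mirrors the formula for cities given as (x, y) pairs:

```python
import math

def euclidean(r, s):
    """Straight-line distance between two cities given as (x, y) points."""
    return math.sqrt((r[0] - s[0]) ** 2 + (r[1] - s[1]) ** 2)
```

Symmetry (d_rs = d_sr) holds automatically for this formula, which is why the Euclidean TSP is a symmetric TSP.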
Sure, let's break down the Ant System model with elitist ants for solving the Traveling
Salesman Problem (TSP) into simple steps:
1. Initialization:
• Starting Point:
• A certain number of ants are placed randomly on different cities.
• Each ant begins its journey from its starting city.
2. Choosing the Next City:
• At each step, an ant probabilistically picks its next city, favoring
cities reached by edges with strong pheromone and short distance.
3. Tabu List:
• Memory:
• Each ant keeps a list of cities it has already visited.
• This prevents the ant from visiting the same city more than once.
4. Tour Construction:
• Each ant keeps choosing cities until it has visited all of them and
returned to its starting city, completing a full tour.
5. Pheromone Update:
• Refreshing Trails:
• After all ants have completed their tours:
• Evaporation: Pheromone levels on all paths are slightly reduced
to prevent buildup.
• Deposition: Ants add pheromone to the paths they traveled,
with more pheromone added to shorter paths.
• This means paths that form shorter tours become more attractive for
future ants.
6. Iteration:
• The whole process repeats for many rounds until a sufficiently short
tour is found.
Summary:
• Simplified Steps:
• Start: Place ants randomly on cities.
• Choose Next City: Based on pheromone strength and distance.
• Memory: Use a list to avoid revisiting cities.
• Complete Tour: Visit all cities.
• Update Pheromones: Reduce old trails, add new pheromones for
shorter paths.
• Repeat: Continue until a good solution is found.
In essence, the ants work together by exploring different routes and sharing
information through pheromone trails, which helps them collectively find the
shortest possible tour for the TSP.
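The elitist update step can be sketched as follows; the evaporation rate rho, deposit constant Q, and elitist weight e are assumed values, and the pheromone matrix is indexed by city number:

```python
def elitist_update(tau, tours, best_tour, best_len, rho=0.5, Q=1.0, e=2.0):
    """Ant System update with elitist ants: evaporate all trails, let every
    ant deposit Q / length on its tour's edges, then add an extra
    e * Q / best_len on the edges of the best tour found so far."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1 - rho)  # evaporation

    def deposit(tour, amount):
        for k in range(len(tour)):
            i, j = tour[k], tour[(k + 1) % len(tour)]
            tau[i][j] += amount
            tau[j][i] += amount  # symmetric TSP: reinforce both directions

    for tour, length in tours:
        deposit(tour, Q / length)
    deposit(best_tour, e * Q / best_len)  # elitist reinforcement
    return tau
```

The extra deposit on the best-so-far tour is what makes these ants "elitist": good edges are rewarded twice, pulling future ants toward them.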
Overview:
The TACO algorithm helps ants find good solutions for problems by deciding between two
choices (0 or 1) for each step in a binary string. This method can be applied to various
problems, including the Traveling Salesman Problem (TSP).
1. Solution Construction:
• Ants make choices at each step of the binary string, deciding whether to place a '0'
or a '1'.
• These decisions are based on pheromone levels, which indicate which choices are
promising based on past experiences.
• Once an ant completes the sequence of decisions for all bits, it has created a
potential solution.
2. Evaluation:
• Checking Quality:
• The completed binary string (solution) is evaluated using a function that calculates
how good this solution is.
• This function gives a numeric score indicating the quality of the solution.
3. TSP Example (a 12-bit string, 4 bits per city for 3 cities):
• Ant Movement:
• An ant starts at the beginning of this 12-bit string and makes decisions (0 or 1) for
each bit.
• By the end of its journey, the ant has created a 12-bit string.
• Grouping Bits:
• Split this string into three sets (each representing a city):
• '1010' for the first city
• '1110' for the second city
• '0110' for the third city
• Converting to Decimal:
• Convert each set from binary to decimal numbers:
• '1010' becomes 10
• '1110' becomes 14
• '0110' becomes 6
• Sorting:
• Sort these decimal numbers:
• [6, 10, 14]
• This sequence represents the order in which the cities are visited along with the
calculated distance of the tour.
Summary:
• Ants Decide on Bits: Ants choose between 0 and 1 for each bit in a binary string based on
pheromones.
• Create Solutions: These choices build a potential solution.
• Evaluate Solutions: Each solution is checked for quality.
• Solve TSP: For TSP, the binary string is split, converted to numbers, sorted, and used to
determine the travel path of the cities.
This method helps ants collaboratively find shorter and more efficient paths by learning from
past decisions and updating pheromone trails accordingly.
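The decoding described in the example can be reproduced exactly; group size 4 matches the 4-bit-per-city encoding used above:

```python
def decode_tour(bits, group_size=4):
    """Split the bit string into fixed-size groups, convert each group
    from binary to decimal, and sort the values to get the visit order."""
    groups = [bits[i:i + group_size] for i in range(0, len(bits), group_size)]
    return sorted(int(g, 2) for g in groups)
```

For the 12-bit string '101011100110' the groups are '1010', '1110', and '0110', giving [6, 10, 14] as in the worked example.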
Let's break down the Serial Ant System (SAS) model with elitist ants for solving the
Traveling Salesman Problem (TSP) in simple terms:
Overview:
The SAS model uses artificial ants to find the shortest possible route that visits all
cities exactly once and returns to the starting point. In this model, ants travel one by
one through a matrix representing the search space and follow specific rules to
construct their tours.
1. Initialization:
• Create a 4x4 matrix (since we have 4 cities). This matrix represents the search
space.
• Starting the Journey:
• Each ant starts its journey from the beginning of the matrix, but only one ant
moves at a time. The next ant starts only after the previous ant finishes its
tour.
2. Ant Movement:
• Choosing Cities:
• As an ant moves through the matrix, it selects one city per column. It uses a
list (tabu list) to keep track of the cities it has visited, ensuring it doesn't visit
the same city twice.
• Calculating Cost:
• The total distance an ant travels is calculated using the Euclidean distance
(straight-line distance) between cities.
• For example, an ant's tour might be [(2 → 1) → (1 → 3) → (3 → 4) → (4 → 2)],
indicating the sequence of visited cities.
3. Pheromone Management:
• Starting Pheromone Levels:
• Every edge begins with a small initial amount of pheromone.
• Reinforcement:
• When an ant uses an edge in its tour, the pheromone on that edge is
reinforced.
• Elitist Reinforcement:
• If an ant uses edges from the best tour found so far, those edges get extra
pheromone reinforcement, guiding future ants towards potentially better
solutions.
Summary:
• Ants move through a matrix representing the cities, choosing one city per column.
• They use pheromone levels and distances to make their choices, ensuring they visit
each city exactly once.
• Pheromone levels are updated based on the quality of the tours, with the best tours
getting extra reinforcement.
• This process helps guide future ants towards finding the shortest possible route,
ultimately aiming to solve the TSP efficiently.
This method leverages both exploration and exploitation, allowing ants to find
optimal or near-optimal solutions over multiple iterations.
1. Binary Path Length:
• Issue: As the number of cities increases, the binary path (the sequence of bits
representing city visits) gets longer.
• Impact: This makes it harder for ants to find the best solution quickly and
accurately.
2. Sub-path Selection:
• Issue: Ants can only choose between two sub-paths ('1' or '0').
• Impact: This limits the ants' ability to explore different routes, potentially
missing better solutions.
3. Pheromone-based Decisions:
• Issue: Ants decide based only on pheromone information from the final
evaluation of the binary string, not on the costs of traveling between cities.
• Impact: This approach can overlook better solutions that consider the actual
travel costs between cities.
Model Performances:
• AS Model with Elitist Ants:
• Performance: It performs well but needs more computing power and time,
especially as the number of cities grows.
• Parameters: Parameters are chosen based on previous studies and trial runs
for the best results.
• SAS Model:
• Performance: This model is simple and fast. It competes well with the AS
model but needs more testing and fine-tuning.
• Validation: Needs further validation and performance checks to ensure the
parameters are optimal.
General Observations:
• Iteration Limits:
• The solutions are obtained after a fixed number of iterations, which might not
always find the best solution, especially for larger problems.
• Computing Setup:
• The experiments were run on a computer with a Pentium IV processor, 3.5
GHz speed, 1GB RAM, and MATLAB Version 7.4 software.
Summary:
• Fine-tuning parameters is key to optimizing models for the TSP.
• Longer binary paths and limited sub-path choices can reduce solution quality.
• The AS model with elitist ants is effective but resource-intensive.
• The SAS model is fast and simple but requires further validation.
• Fixed iteration limits may not always yield the best solution for larger TSP problems.
• The models were tested on a standard computer setup using MATLAB software.
This simplified explanation highlights the importance of parameter settings and the
performance of different models in solving the TSP.