Artificial Intelligence


Graphplan:

GRAPHPLAN is an algorithm for automated planning in artificial
intelligence, developed by Avrim Blum and Merrick Furst in the 1990s.
The algorithm constructs a planning graph, which is a data structure that
represents the actions and propositions (facts) relevant to a planning problem
over time.
Key Steps of GRAPHPLAN:
1. Graph Construction: The algorithm builds a planning graph in levels. The
first level contains the initial state (propositions), and subsequent levels
alternate between actions and propositions. Actions are added to a level if
their preconditions are satisfied by the propositions from the previous level.

2. Mutex Relationships: The algorithm identifies mutex (mutually exclusive)
relationships between actions and propositions. Two actions are mutex if they
cannot occur simultaneously, and two propositions are mutex if they cannot
both be true at the same time.

3. Goal Checking: After constructing the planning graph, GRAPHPLAN checks
whether the goals can be achieved in the last level of the graph. It verifies that
all goal propositions are present and pairwise non-mutex.

4. Plan Extraction: If the goals are reachable, the algorithm extracts a valid
plan by working backwards from the goals to find the actions that achieve
them, ensuring that the actions are not mutex.

Advantages:
- Efficiency: GRAPHPLAN can efficiently find a solution in many cases by
leveraging the structure of the planning graph.
- Parallelism: It allows for concurrent actions in the plan, which can lead to
more efficient execution.
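The graph-construction step above can be sketched in a few lines of Python. This is a minimal illustration, not Blum and Furst's implementation: it assumes STRIPS-style actions encoded as named tuples of precondition/add/delete sets, and it ignores mutexes, so it only answers "at which level do the goal propositions first all appear?":

```python
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    precond: frozenset  # propositions that must hold to apply the action
    add: frozenset      # propositions the action makes true
    delete: frozenset   # propositions the action makes false

def expand_graph(initial, actions, goals, max_levels=10):
    """Grow proposition levels until all goals appear, or the graph levels off."""
    level = set(initial)
    for depth in range(max_levels):
        if goals <= level:
            return depth            # goals first all appear at this level
        applicable = [a for a in actions if a.precond <= level]
        new_level = level | {p for a in applicable for p in a.add}
        if new_level == level:      # leveled off: goals are unreachable
            return None
        level = new_level
    return None

stack_ab = Action("Stack(A,B)",
                  frozenset({"On(A,Table)", "On(B,Table)"}),
                  frozenset({"On(A,B)"}),
                  frozenset({"On(A,Table)"}))
print(expand_graph({"On(A,Table)", "On(B,Table)"}, [stack_ab], {"On(A,B)"}))  # → 1
```

The `level == new_level` check is the termination test discussed next: once the graph levels off without the goals appearing, no solution exists.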

Termination of GRAPHPLAN
The termination of the GRAPHPLAN algorithm is crucial for its correctness and
efficiency. One of the key features of GRAPHPLAN is that it operates on a
planning graph, which is incrementally expanded at each iteration until it either
finds a solution or determines that no solution exists. The monotonicity
properties of the planning graph are central to the termination of the
algorithm, as they ensure that the planning graph cannot grow indefinitely
without reaching a conclusion.

1. Literals Increase Monotonically

- Once a literal (i.e., proposition) appears at a given level in the planning graph,
it will persist in all subsequent levels.
- A literal can be either a fact (e.g., "A is on B") or a negated fact (e.g., "A is not
on B").
- Once a literal becomes true in the planning graph, it remains true in all future
levels. This is because of the persistence actions that propagate literals across
levels.
- For example, if a literal is true at a certain level, then in the subsequent level,
there will be an action that preserves this literal as true.

2. Actions Increase Monotonically

- Once an action appears at a given level in the planning graph, it also appears
at all subsequent levels.
- An action becomes available at a level when its preconditions are satisfied by
the propositions of that level. Because literals persist (property 1), those
preconditions remain satisfied as the graph grows, so the action stays available
at every later level.
- Together with property 1, this keeps the set of applicable actions stable as
the graph expands, which helps rule out infinite expansion of the graph.

3. Mutexes Decrease Monotonically

- If two actions (or two literals) are mutex at a given level, they are also mutex
at all earlier levels at which both appear. Equivalently, as the graph grows,
pairs that were mutex at one level may become non-mutex at a later level, but
a non-mutex pair never becomes mutex again.
- For instance, if one action deletes a fact that another action requires, the two
actions are mutex at the level where that conflict arises; at later levels,
alternative ways of achieving the fact may remove the conflict.
- Because the sets of literals and actions can only grow, and the set of mutex
pairs can only shrink, the planning graph must eventually "level off." This
guarantees that GRAPHPLAN terminates.
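The monotonic growth of literals, and the leveling-off it forces, can be observed directly by recording the proposition set at each level: over a finite universe of propositions the sets can only grow, so the graph must stop changing after finitely many expansions. A small sketch (the STRIPS-style Action tuple is an assumed encoding, not part of GRAPHPLAN's definition):

```python
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    precond: frozenset
    add: frozenset
    delete: frozenset

def level_sequence(initial, actions, max_levels=20):
    """Return the proposition set at each level; persistence (no-op)
    actions mean the sets never shrink."""
    levels = [frozenset(initial)]
    for _ in range(max_levels):
        cur = levels[-1]
        applicable = [a for a in actions if a.precond <= cur]
        nxt = cur | {p for a in applicable for p in a.add}  # no-ops carry cur forward
        levels.append(nxt)
        if nxt == cur:          # leveled off: no new literal can ever appear
            break
    return levels

stack_ab = Action("Stack(A,B)",
                  frozenset({"On(A,Table)", "On(B,Table)"}),
                  frozenset({"On(A,B)"}), frozenset({"On(A,Table)"}))
levels = level_sequence({"On(A,Table)", "On(B,Table)"}, [stack_ab])
print(all(a <= b for a, b in zip(levels, levels[1:])))  # literals only grow → True
```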

Example Problem: Block Stacking

Imagine a simple problem where you need to stack two blocks, A and B, on top
of each other.
Step-by-Step Walkthrough of GRAPHPLAN
1. Initial Planning Graph
The first thing GRAPHPLAN does is create the initial planning graph based on
the initial state and the goal.

- Initial State:
Block A is on the table.
Block B is on the table.
- Goal:
Block A on Block B.

- Initial Planning Graph:


The planning graph starts with level 0 representing the initial state. Here’s
what it looks like:

- Level 0 (Propositions):
On(A, Table)
On(B, Table)

- Level 1 (Actions):
Stack(A, B) (This action would make Block A be stacked on Block B).
Stack(B, A) (This action would make Block B be stacked on Block A, which we
don’t want, because it's not part of the goal).

- Level 2 (Propositions):
On(A, B) (the goal state, added by Stack(A, B)).
On(B, A) (added by Stack(B, A)).
On(A, Table) and On(B, Table) (carried forward from Level 0 by persistence
actions).

2. First Iteration: Graph Expansion

The algorithm then expands the planning graph step-by-step. For each new
level, it adds more propositions and actions. The goal is to see if we can
achieve the goal state.

- Level 1 (Actions) already has two actions: Stack(A, B) and Stack(B, A).

- Level 2 (Propositions) is generated based on those actions:


If Stack(A, B) happens, it will make On(A, B) true.
If Stack(B, A) happens, it would make On(B, A) true, which doesn't satisfy the
goal (On(A, B)), but it’s still a possible action in the graph.

Now let’s apply the monotonicity properties:

3. Monotonicity Properties in Action

1. Literals Increase Monotonically:


- Once a literal (a fact like On(A, Table)) appears in the graph at a certain level,
it will stay in all future levels.
- For example, in Level 0, On(A, Table) and On(B, Table) are true, and they will
stay true in Level 1 and Level 2 because they are part of the initial state and will
not change unless modified by actions.

2. Actions Increase Monotonically:


- Once an action appears in a level (for example, Stack(A, B)), it will continue
to appear in all future levels as long as its preconditions are satisfied.
- In Level 1, we have actions like Stack(A, B) and Stack(B, A). Once Stack(A, B)
appears in Level 1, it will stay in Level 2 because its preconditions (such as
On(A, Table) and On(B, Table)) are satisfied. The action will persist in
subsequent levels.

3. Mutexes Decrease Monotonically:


- Mutexes (mutual exclusions) are pairs of actions (or literals) that cannot hold
at the same level. Stack(A, B) and Stack(B, A) are mutex at Level 1 because
their effects conflict: each would place a block where the other's effects forbid
it.
- The monotonicity property says the mutex relation can only relax as the
graph grows: if two actions are mutex at some level, they were also mutex at
every earlier level where both appeared, and pairs that are mutex early may
become non-mutex later as more ways of achieving their preconditions appear.
(In this small example, Stack(A, B) and Stack(B, A) happen to stay mutex at
every level, because their effects always conflict.)

4. Termination Check

At this point, GRAPHPLAN checks if the goals are achievable at the current level
of the graph. The goals are On(A, B), so we check if this literal appears in Level
2 (Propositions).

- If we look at Level 2, we see that On(A, B) is true, which is the goal!


- The goals are non-mutex in this level because there’s no conflict. Therefore,
the algorithm can now extract the solution.

5. Solution Extraction

Now, the algorithm uses the planning graph to extract the sequence of actions
that will achieve the goal. In this case, the solution is simple:

- The action Stack(A, B) will achieve the goal On(A, B).

So, the plan is:

1. Stack(A, B)
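The extraction step can be sketched as a simple backward chain: each outstanding goal is replaced by the preconditions of an action that achieves it, until only initial-state facts remain. This is a deliberately simplified version (same hypothetical STRIPS-style Action tuples as above) that ignores the mutex checks and level bookkeeping real GRAPHPLAN extraction must perform:

```python
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    precond: frozenset
    add: frozenset
    delete: frozenset

def extract_plan(goals, actions, initial):
    """Backward-chain from the goals to the initial state, collecting
    achieving actions front-to-back (no mutex checks)."""
    plan, needed = [], set(goals)
    while not needed <= set(initial):
        goal = next(g for g in needed if g not in initial)
        achiever = next(a for a in actions if goal in a.add)  # raises if none exists
        plan.insert(0, achiever.name)
        needed = (needed - achiever.add) | achiever.precond
    return plan

stack_ab = Action("Stack(A,B)",
                  frozenset({"On(A,Table)", "On(B,Table)"}),
                  frozenset({"On(A,B)"}), frozenset({"On(A,Table)"}))
print(extract_plan({"On(A,B)"}, [stack_ab], {"On(A,Table)", "On(B,Table)"}))
# → ['Stack(A,B)']
```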

Mutex Relation Conditions

There are three primary conditions under which actions are considered mutex
in planning: inconsistent effects, interference, and competing needs. Let's
explore each of these conditions in detail:

1. Inconsistent Effects
Inconsistent effects refer to the situation where two actions, when performed
together, produce contradictory outcomes on the same fact or condition.

- Definition: Two actions have inconsistent effects if the effects of one action
directly undo the effects of the other action.

Blocks Example:
- Action 1: "Pick up block A" (adds the fact Holding(A)).
- Action 2: "Put down block A" (deletes the fact Holding(A)).

These actions are mutex because the effects are inconsistent: if you pick up a
block, you can't simultaneously put it down, as one action creates the fact
Holding(A) while the other deletes it. They produce conflicting effects on the
same proposition (the Holding(A) fact).

2. Interference
Interference occurs when the preconditions or effects of one action are in
conflict with the preconditions or effects of another action. This usually
happens when one action makes a condition that is needed by another action
false or prevents it from being satisfied.

- Definition: Two actions interfere if the effect of one action makes it impossible
for the other action to achieve its precondition (or the precondition of the first
action makes the second action impossible).

Example:
- Action 1: "Pick up block A" (requires the precondition On(A, Table); adds
Holding(A) and deletes On(A, Table)).
- Action 2: "Slide block A along the table" (requires On(A, Table) as a
precondition).

These actions interfere because Pick up block A deletes On(A, Table), which is
exactly the precondition that Slide block A needs, so the two actions cannot be
executed together.

3. Competing Needs
Competing needs occur when two actions require access to the same resource
at the same time, which makes it impossible for both actions to be executed
simultaneously. This is a classic case of resource contention.

- Definition: Two actions have competing needs if they require the same
resource (whether it's a physical resource, like a tool or location, or a shared
condition) simultaneously, which prevents them from being executed together.

Example:
- Action 1: "Robot 1 picks up a hammer."
- Action 2: "Robot 2 picks up the hammer."

These actions are mutex because both robots need access to the same
resource (the hammer). Since only one robot can hold the hammer at a time,
they cannot perform these actions simultaneously.

Key Points
- Inconsistent Effects: Two actions are mutex if their effects on the world
contradict each other. One action's outcome undoes the outcome of the other.
- Interference: Two actions are mutex if the preconditions or effects of one
action make it impossible for the other action to execute or achieve its goal.
- Competing Needs: Two actions are mutex if they require the same resource at
the same time, and that resource can only be used by one action.
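The three conditions translate almost directly into set operations on STRIPS-style actions. The checks below are a sketch under that assumed encoding (the Action tuple and example actions are illustrative); `competing_needs` additionally takes pairs of propositions already known to be mutex, since that condition is defined in terms of conflicting preconditions:

```python
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    precond: frozenset
    add: frozenset
    delete: frozenset

def inconsistent_effects(a, b):
    """One action's add list clashes with the other's delete list."""
    return bool(a.add & b.delete or b.add & a.delete)

def interference(a, b):
    """One action's effect deletes a precondition of the other."""
    return bool(a.delete & b.precond or b.delete & a.precond)

def competing_needs(a, b, mutex_props):
    """The two actions have preconditions that are themselves mutex."""
    return any((p in a.precond and q in b.precond) or
               (q in a.precond and p in b.precond)
               for p, q in mutex_props)

def actions_mutex(a, b, mutex_props=()):
    return (inconsistent_effects(a, b) or interference(a, b)
            or competing_needs(a, b, mutex_props))

pickup = Action("PickUp(A)", frozenset({"On(A,Table)"}),
                frozenset({"Holding(A)"}), frozenset({"On(A,Table)"}))
putdown = Action("PutDown(A)", frozenset({"Holding(A)"}),
                 frozenset({"On(A,Table)"}), frozenset({"Holding(A)"}))
print(actions_mutex(pickup, putdown))  # inconsistent effects → True
```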

Graphical Representation in Planning

In the context of planning algorithms like GRAPHPLAN, these mutex relations


are represented in the planning graph, which contains layers of propositions
(facts) and actions. As the planning graph is expanded, the system evaluates
whether actions are mutex based on the conditions mentioned above. If two
actions are mutex, they cannot be selected to be executed in the same level of
the plan.

- Mutex Propagation: Once mutex relations between actions or propositions

are identified, this information propagates through the graph, helping the
planner eliminate incompatible options and narrowing down the search for
valid plans.

Have Cake and Eat Cake Example:


The Have Cake and Eat Cake problem is a classic example in planning, where
the goal is to achieve two seemingly contradictory objectives: having a cake
and eating the cake.

To understand the mutex (mutual exclusion) relationships between actions in


this scenario, we need to break down the actions involved and see how these
conditions—inconsistent effects, interference, and competing needs—apply to
the situation.

The Problem:
- Objective: The goal is to have a cake and eat it too. However, "having the
cake" means that the cake is in its intact, uneaten state, and "eating the cake"
means that the cake is consumed or no longer available in its original form.

Actions Involved:
1. Action 1: Have Cake — The cake is in an intact form (available, not eaten).
- Precondition: Cake exists (in an uneaten state).
- Effect: The cake is still intact, and you still have the cake.

2. Action 2: Eat Cake — The cake is eaten.


- Precondition: The cake is available (uneaten).
- Effect: The cake is now eaten and is no longer available in its original state.

Mutex Conditions in the "Have Cake and Eat Cake" Problem

1. Inconsistent Effects
- Definition: Two actions have inconsistent effects if they produce directly
contradictory outcomes on the same fact or condition.

In this case:
- Action 1 ("Have Cake") ensures that the cake is intact (i.e., it's available and
not eaten).

- Action 2 ("Eat Cake") ensures that the cake is no longer intact (i.e., it is eaten).

These two actions are mutually exclusive because the effects of one contradict
the effects of the other:
- If you have the cake (Action 1), you cannot simultaneously eat the cake
(Action 2), because once the cake is eaten, you no longer "have" the intact
cake.
- The effects of "Have Cake" (cake is intact) are inconsistent with the effects of
"Eat Cake" (cake is eaten).

Thus, the actions are mutex due to inconsistent effects: eating the cake directly
contradicts the state of having it intact.

2. Interference
- Definition: Two actions interfere with each other if the preconditions or
effects of one action make it impossible for the other action to execute or
achieve its goal.

In this case:
- The precondition for eating the cake is that the cake must be available
(uneaten).
- If you have the cake (Action 1), the cake is intact and available, so you are able
to eat it.
- However, once you eat the cake (Action 2), its effect deletes the very
condition (cake available) that "Have Cake" relies on. Because the effect of one
action destroys a precondition of the other, the two actions interfere and are
therefore mutex.

3. Competing Needs
- Definition: Two actions have competing needs if they require access to the
same resource at the same time, and that resource cannot be used
simultaneously by both actions.

In this scenario:
- The resource in question is the cake.
- Action 1 ("Have Cake") requires that the cake be intact and available.
- Action 2 ("Eat Cake") requires that the cake be available (uneaten) to eat it.

The competing need here arises from the fact that both actions need the cake,

but for different purposes:
- One action requires the cake to remain intact and not eaten (Action 1).
- The other action requires the cake to be eaten (Action 2), thus making it
unavailable in its original form.

These two actions compete for the same resource (the cake) because both
actions depend on the cake being in a certain state, but in incompatible ways:
- Action 1 needs the cake to be in its "intact" state.
- Action 2 needs the cake to be in its "eaten" state.

Thus, the competing needs arise from the fact that the same cake cannot
simultaneously be eaten and intact. Only one of these actions can occur at a
time because the resource (the cake) is being contested.
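Encoding the two cake actions in the same hypothetical STRIPS style makes both mutex conditions checkable mechanically: "Have Cake" is modeled here as a persistence action that keeps Have(Cake) true, while "Eat Cake" deletes it (the proposition names are an assumed encoding):

```python
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    precond: frozenset
    add: frozenset
    delete: frozenset

def inconsistent_effects(a, b):
    """One action's add list clashes with the other's delete list."""
    return bool(a.add & b.delete or b.add & a.delete)

def interference(a, b):
    """One action's effect deletes a precondition of the other."""
    return bool(a.delete & b.precond or b.delete & a.precond)

have = Action("HaveCake", frozenset({"Have(Cake)"}),
              frozenset({"Have(Cake)"}), frozenset())
eat = Action("EatCake", frozenset({"Have(Cake)"}),
             frozenset({"Eaten(Cake)"}), frozenset({"Have(Cake)"}))

print(inconsistent_effects(have, eat))  # HaveCake adds what EatCake deletes → True
print(interference(have, eat))          # EatCake deletes HaveCake's precondition → True
```

Both conditions fire on the same pair of actions, which matches the discussion above: the two actions are mutex for more than one reason.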

Hierarchical Planning:
Hierarchical planning is a structured approach in artificial intelligence (AI) and
decision-making where complex tasks are decomposed into simpler, more
manageable subtasks. It operates at multiple levels of abstraction, enabling
efficient problem-solving and planning. Here is an overview of the key
components:

1. High-Level Actions
High-level actions represent abstract operations or goals in the planning
hierarchy.
These actions define "what" needs to be achieved without specifying "how" it
should be done.
Example: In robotics, a high-level action might be "clean the room," without
detailing the specific steps like moving to different parts of the room or picking
up objects.

2. Searching for Primitive Solutions


Primitive actions are the basic, low-level operations that directly interact with
the environment.
In this step, the planner breaks down high-level actions into sequences of
primitive actions that can achieve the desired outcomes.
Example: For the "clean the room" action, the planner might define primitive
actions like "move to the table," "pick up trash," and "dispose of trash."

3. Searching for Abstract Solutions


Before delving into the details, the planner first identifies abstract solutions
that provide a general outline for solving the problem.
These solutions serve as a roadmap, guiding the refinement process into
concrete, executable plans.
Example: The abstract solution for "clean the room" might involve dividing the
room into zones and planning actions for each zone without specifying every
movement.

Benefits of Hierarchical Planning:
Efficiency: Reduces the complexity of planning by focusing on one level of
abstraction at a time.

Scalability: Enables handling of large, complex problems by breaking them


into smaller parts.

Flexibility: Abstract solutions can be reused and adapted to different


scenarios.

Applications:
Hierarchical planning is widely used in robotics, game AI, logistics, and more,
where both strategic and tactical decisions are essential.

Example:
Let’s take an example of robotic vacuum cleaner planning a room cleaning
task to demonstrate hierarchical planning.

Scenario
The robot needs to clean a room, and the hierarchical planning breaks the
problem into multiple levels of abstraction.

1. High-Level Actions (Abstract Plan)


At this level, the task is divided into broad goals or abstract actions:
 High-level action 1: Divide the room into zones.
 High-level action 2: Clean each zone sequentially.
 High-level action 3: Return to the charging station.

2. Searching for Abstract Solutions


For each high-level action, the robot creates a rough plan without delving into
fine-grained details.

 Divide the room into zones: Identify four zones in the room:
o Zone 1: Near the door.
o Zone 2: Near the table.
o Zone 3: Under the furniture.
o Zone 4: Open central area.
 Clean each zone sequentially:
o Decide a sequence: clean Zone 1 → Zone 2 → Zone 3 → Zone 4.
o For now, avoid specifying exactly how each zone will be cleaned.
 Return to the charging station: Determine the shortest path to the
charger without detailed movement planning.

3. Searching for Primitive Solutions


Now the robot refines the abstract plan into specific, low-level actions.
Example for Zone 1: Near the Door
To clean Zone 1:
1. Primitive Action 1: Move to the door area using sensors to navigate.
2. Primitive Action 2: Identify dirt patches using cameras/sensors.
3. Primitive Action 3: Use the vacuum motor to clean the patches.
4. Primitive Action 4: Confirm Zone 1 is clean by rescanning.
Example for Returning to the Charger
1. Primitive Action 1: Determine the robot's current location.
2. Primitive Action 2: Navigate through obstacles back to the charger.
3. Primitive Action 3: Dock with the charging station.

Final Plan (Hierarchical)


High-level Plan:
1. Divide the room into zones.

2. Clean each zone sequentially.
3. Return to the charging station.
Detailed Plan for Execution:
1. Go to Zone 1 → Detect dirt → Clean Zone 1.
2. Repeat for Zone 2, Zone 3, Zone 4.
3. Navigate back to the charger.

Key Points
 High-level actions provide the "what" (e.g., clean zones).
 Abstract solutions provide a strategy (e.g., sequence and broad steps for
zones).
 Primitive solutions provide the exact "how" (e.g., specific movements
and actions to achieve cleaning).
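The decomposition above can be sketched as a tiny hierarchical (HTN-style) planner: a method table maps each abstract task to its ordered subtasks, and any task without a method is treated as primitive. The task names are illustrative, not from any specific robot API:

```python
# Method table: abstract task → ordered subtasks. Tasks with no entry are primitive.
methods = {
    "clean-room": ["divide-zones", "clean-zones", "return-to-charger"],
    "clean-zones": ["clean-zone-1", "clean-zone-2", "clean-zone-3", "clean-zone-4"],
    "clean-zone-1": ["move-to-door", "detect-dirt", "vacuum", "rescan"],
}

def decompose(task):
    """Recursively refine abstract tasks into a flat sequence of primitives."""
    if task not in methods:
        return [task]            # primitive: executable as-is
    plan = []
    for sub in methods[task]:
        plan.extend(decompose(sub))
    return plan

print(decompose("clean-room"))
```

Each level of the method table corresponds to one level of abstraction in the walkthrough: the planner only commits to fine-grained actions when it expands the matching abstract task.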

Multi-Agent Planning:


Multi-agent planning involves creating coordinated plans for multiple agents
(e.g., robots, software agents, or humans) that operate in the same
environment, ensuring they achieve their objectives without conflicts and often
collaboratively. It is a vital area of research and application in fields like
robotics, artificial intelligence, and distributed systems.
Key Components of Multi-Agent Planning
1. Agents: Independent entities with their goals, states, and actions.
2. Environment: The shared space where agents operate, which could be
physical or virtual.
3. Goals: Objectives that agents aim to achieve. Goals can be individual or
collective.
4. Planning: Developing sequences of actions for each agent to achieve
their goals.
5. Coordination: Ensuring agents work together without conflicts (e.g.,
avoiding collisions or redundant actions).

Types of Multi-Agent Planning
1. Centralized Planning:
o A central planner generates plans for all agents.
o Pros: Easier to optimize globally.
o Cons: Computationally expensive and less robust to single-point
failures.
2. Decentralized Planning:
o Each agent plans independently but considers the actions of other
agents.
o Pros: Scalable and resilient.
o Cons: Harder to guarantee optimality and avoid conflicts.
3. Distributed Planning:
o Agents collaboratively plan by sharing information.
o Pros: Combines scalability with improved coordination.
o Cons: Requires efficient communication mechanisms.

Challenges in Multi-Agent Planning


1. Coordination: Preventing conflicts and ensuring agents work toward a
common goal.
2. Scalability: Dealing with a large number of agents.
3. Uncertainty: Handling incomplete or noisy information about the
environment or other agents.
4. Optimization: Balancing efficiency, fairness, and other metrics.
5. Communication: Managing how and when agents exchange information.

Applications of Multi-Agent Planning


1. Robotics:

o Multi-robot systems for warehouse automation (e.g., Amazon
Robotics).
o Drones for delivery or surveillance.
2. Traffic Management:
o Autonomous vehicles coordinating in intersections.
o Optimizing flow in air traffic control.
3. Game AI:
o Planning strategies for teams in real-time strategy games.
4. Resource Management:
o Distributed sensor networks.
o Energy optimization in smart grids.
5. Disaster Response:
o Coordinating rescue teams and drones in emergency scenarios.

Algorithms and Techniques


1. Search-Based Methods:
o A*: Adapts well to grid-based multi-agent pathfinding.
o Conflict-Based Search (CBS): Resolves conflicts between agents
iteratively.
2. Optimization Techniques:
o Mixed-Integer Linear Programming (MILP): For solving multi-agent
resource allocation problems.
o Reinforcement Learning: Training agents to learn cooperative
policies.
3. Game-Theoretic Approaches:
o Nash equilibrium and cooperative games for strategic decision-
making.
4. Behaviour Trees and Finite State Machines:

o Often used in game AI and robotics.
5. Decentralized Partially Observable Markov Decision Processes (Dec-
POMDPs):
o For decision-making under uncertainty with limited
communication.

Planning with multiple simultaneous actions:


Planning with multiple simultaneous actions involves coordinating agents to
perform concurrent activities, ensuring these actions are compatible and
contribute to achieving individual or collective goals. This approach is especially
relevant in multi-agent systems where parallelism can significantly enhance
efficiency and performance.
Key Considerations for Simultaneous Actions
1. Action Compatibility:
o Ensure that simultaneous actions do not conflict (e.g., two robots
avoiding collisions while moving).
o Consider resource constraints (e.g., shared tools or limited
bandwidth).
2. Temporal Constraints:
o Actions may need to start or finish at specific times relative to
other actions (e.g., task dependencies or synchronization).
3. Dependencies and Preconditions:
o Some actions might require the completion of other actions
before execution.
4. Communication and Coordination:
o Agents might need to exchange information to synchronize actions
and update plans dynamically.

Types of Simultaneous Action Planning


1. Parallel Planning:

o Design plans where multiple agents execute independent actions
simultaneously without interfering.
2. Collaborative Action Planning:
o Plan for agents to perform complementary actions that achieve a
shared goal (e.g., carrying an object together).
3. Hierarchical Planning:
o Use high-level planners to assign tasks to agents, while low-level
planners determine how to execute tasks simultaneously.

Techniques and Models


1. Multi-Agent Pathfinding (MAPF):
o Ensures agents navigate simultaneously without collisions.
o Algorithms:
 Conflict-Based Search (CBS): Resolves conflicts between
agents’ paths.
 Prioritized Planning: Assigns priorities to agents and plans
sequentially.
2. Temporal Planning:
o Models time explicitly using frameworks like Temporal PDDL
(Planning Domain Definition Language).
o Supports simultaneous actions with constraints like "starts-after,"
"overlaps-with," etc.
3. Resource-Constrained Planning:
o Ensures actions sharing resources (e.g., tools, energy) are
executed without exceeding capacity.
4. Coordination Graphs:
o Represents dependencies between agents’ actions and uses
optimization techniques to find feasible simultaneous actions.

5. Decentralized Partially Observable Markov Decision Processes (Dec-
POMDPs):
o Extends POMDPs to handle multiple agents acting simultaneously
under uncertainty.
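Prioritized planning, the simplest of the MAPF approaches listed above, can be sketched on a small grid: agents plan one at a time with breadth-first search, and each later agent treats the time-stamped cells of earlier paths as obstacles. This toy version checks vertex conflicts only (edge "swap" conflicts and post-goal occupancy are ignored), and all names and coordinates are illustrative:

```python
from collections import deque

def bfs_path(start, goal, reserved, size=4):
    """Shortest path on a size×size grid, waiting allowed, avoiding
    (time, cell) reservations made by higher-priority agents."""
    frontier = deque([(start, [start])])
    seen = {(start, 0)}
    while frontier:
        pos, path = frontier.popleft()
        if pos == goal:
            return path
        t = len(path)                  # time step of the next move
        if t > size * size:            # crude horizon to guarantee termination
            continue
        x, y = pos
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1), (x, y)]:
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and (t, nxt) not in reserved and (nxt, t) not in seen):
                seen.add((nxt, t))
                frontier.append((nxt, path + [nxt]))
    return None

def prioritized_plan(agents):
    """Plan agents in priority order; later agents avoid earlier paths."""
    reserved, paths = set(), {}
    for name, (start, goal) in agents.items():
        path = bfs_path(start, goal, reserved)
        paths[name] = path
        for t, cell in enumerate(path):
            reserved.add((t, cell))
    return paths

plans = prioritized_plan({"r1": ((0, 0), (2, 0)), "r2": ((1, 1), (1, 0))})
print(plans["r2"])  # r2 waits one step while r1 passes through (1, 0)
```

Conflict-Based Search differs from this sketch in that it does not fix a priority order: it plans all agents optimally, detects conflicts, and branches on constraints instead.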

Example Applications
1. Robotics:
o Multi-robot systems performing assembly tasks concurrently.
o Coordinated drone swarms for mapping or surveillance.
2. Warehouse Automation:
o Robots simultaneously picking, packing, and delivering items.
3. Autonomous Vehicles:
o Coordinated intersection crossing or platooning on highways.
4. Game AI:
o Real-time strategy games where units perform simultaneous
actions for defence or attack.

Planning with multiple agents: cooperation and coordination


Planning with multiple agents, emphasizing cooperation and coordination, is
about enabling agents to work together effectively to achieve shared or
complementary goals. It involves designing strategies and mechanisms to
ensure their actions align and contribute to overall success without conflicts.

Key Concepts
1. Cooperation:
o Agents actively collaborate to achieve shared goals.
o Often involves task allocation, shared resources, and synchronized
actions.
2. Coordination:

o Ensures agents’ actions are compatible and avoid conflicts.
o Focused on scheduling, synchronization, and conflict resolution.

Steps in Cooperative and Coordinated Multi-Agent Planning


1. Goal Identification:
o Determine individual and collective goals.
o Align individual objectives to benefit the group.
2. Task Decomposition:
o Break down complex tasks into smaller sub-tasks.
o Assign sub-tasks to agents based on capabilities, availability, or
location.
3. Resource Allocation:
o Distribute shared resources to prevent contention.
o Examples: time slots, tools, or energy.
4. Conflict Detection and Resolution:
o Identify potential conflicts (e.g., overlapping paths or resource
contention).
o Use algorithms to resolve conflicts (e.g., re-planning or
prioritization).
5. Execution Monitoring:
o Continuously monitor execution to adapt plans in response to
dynamic changes or failures.

Techniques for Cooperation and Coordination


1. Task Allocation:
o Use algorithms like:
 Auction-based systems: Agents bid for tasks based on cost
or capability.

 Contract Net Protocol (CNP): A negotiation framework for
assigning tasks.
2. Communication Protocols:
o Agents share information about goals, plans, and states.
o Techniques include:
 Blackboard systems: A shared space for agents to post and
read data.
 Direct messaging: Peer-to-peer communication.
3. Planning Algorithms:
o Centralized Planning:
 A single planner generates a global plan for all agents.
 Examples: Multi-agent A*, Joint Intention Theory.
o Decentralized Planning:
 Agents plan independently but consider others’ actions.
 Examples: Distributed Constraint Optimization Problems
(DCOPs).
o Hierarchical Planning:
 High-level tasks are decomposed into low-level plans by
individual agents.
4. Coordination Strategies:
o Coordination Graphs: Represent dependencies between agents’
actions.
o Synchronization Points: Specific times or conditions where agents
must align actions.
o Priority-based Scheduling: Assign execution priorities to avoid
conflicts.
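As a concrete illustration of auction-based task allocation, here is a minimal greedy single-item auction. It is a sketch, not the full Contract Net Protocol: agents "bid" their precomputed costs, each task goes to the cheapest still-unassigned agent, and all names and cost figures are made up:

```python
def auction_allocate(tasks, costs):
    """Assign each task to the cheapest available agent (one task per agent)."""
    assignment, busy = {}, set()
    for task in tasks:
        bids = {agent: table[task]
                for agent, table in costs.items() if agent not in busy}
        winner = min(bids, key=bids.get)   # lowest bid wins the task
        assignment[task] = winner
        busy.add(winner)
    return assignment

costs = {"robot1": {"sweep": 3, "mop": 5},
         "robot2": {"sweep": 4, "mop": 2}}
print(auction_allocate(["sweep", "mop"], costs))
# → {'sweep': 'robot1', 'mop': 'robot2'}
```

Real auction mechanisms iterate (agents re-bid as prices change) and handle more tasks than agents; the greedy pass above only shows the core idea of allocation by bidding.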

Applications
1. Robotics:
o Multi-robot systems assembling a product collaboratively.
o Drones coordinating for aerial mapping or delivery.
2. Traffic and Logistics:
o Autonomous vehicles coordinating at intersections.
o Scheduling deliveries in warehouse systems.
3. Disaster Response:
o Search-and-rescue teams with humans and robots working
together.
4. Game AI:
o Coordinating team strategies in real-time games.
