AIML Problem Solving Agents Module 2
– Perception: The agent takes in information from the environment using sensors. This
information is used to determine the current state of the world.
– Action: Based on the perceived information, the agent selects actions to execute. These
actions are performed in the environment to bring about changes.
– Goal Formulation: The agent must have a clear understanding of its objectives or
goals. The process of determining what needs to be achieved is called goal
formulation.
– Search: To find a sequence of actions that lead from the current state to the goal state,
the agent conducts a search through the space of possible actions and states. Various
search algorithms can be employed for this purpose.
– Planning: Planning involves selecting a sequence of actions that will transform the
current state into the goal state. This is often done through the use of heuristics or
search algorithms.
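The perceive, goal-formulate, search, and act cycle described above can be sketched as a single agent step. The 1-D corridor world and `line_planner` below are illustrative assumptions, not part of the module:

```python
def agent_step(percept, goal, plan_fn):
    """One cycle of a problem-solving agent:
    perceive -> formulate goal -> search/plan -> act."""
    state = percept                    # Perception: current state from sensors
    plan = plan_fn(state, goal)        # Search + Planning: find an action sequence
    return plan[0] if plan else None   # Action: execute the first step of the plan

# Hypothetical planner for a 1-D corridor: step left/right toward the goal cell.
def line_planner(state, goal):
    step = "right" if goal > state else "left"
    return [step] * abs(goal - state)

first_action = agent_step(2, 5, line_planner)  # agent at cell 2, goal at cell 5
```

In a full agent, `plan_fn` would be one of the search algorithms discussed later in this module (BFS, DFS, A*).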
1
6/5/2024
■ A tile adjacent to the blank space can slide into the space.
■ The object is to reach the configuration shown on the right of the figure.
■ One important trick is to notice that, rather than using operators such as "move the 3 tile
into the blank space," it is more sensible to have operators such as "the blank space
changes places with the tile to its left." This is because there are far fewer operators of
the latter kind (at most four per state). This leads us to the following formulation:
– Goal test: state matches the goal configuration shown in Figure below.
– Path cost: each step costs 1, so the path cost is just the length of the path.
■ The 8-puzzle belongs to the family of sliding-block puzzles.
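This formulation can be sketched directly in code. States are 9-tuples read row by row, with 0 standing for the blank, and the operators move the blank as suggested above; the goal configuration is an assumed example, since the figure is not reproduced here:

```python
# 8-puzzle formulation sketch: states are 9-tuples read row by row,
# with 0 standing for the blank. The goal configuration is an assumed example.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def successors(state):
    """Yield (action, next_state); the blank swaps with an adjacent tile.
    Each step costs 1, so path cost is just the number of steps taken."""
    i = state.index(0)                       # position of the blank
    row, col = divmod(i, 3)
    for action, delta, legal in (
        ("up",    -3, row > 0),
        ("down",   3, row < 2),
        ("left",  -1, col > 0),
        ("right",  1, col < 2),
    ):
        if legal:
            j = i + delta
            nxt = list(state)
            nxt[i], nxt[j] = nxt[j], nxt[i]  # blank changes places with the tile
            yield action, tuple(nxt)

def goal_test(state):
    return state == GOAL
```

Note that every state has at most four successors, which is exactly the economy the blank-centred operators buy us.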
■ In the 8-queens problem, the figure below shows an attempted solution that fails: the
queen in the rightmost column is attacked by the queen at the top left.
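The kind of conflict described here is easy to test in code. Below is a small sketch assuming the common one-queen-per-column representation, where `state[c]` is the row of the queen in column `c` (so column conflicts are impossible by construction):

```python
def attacks(state, c1, c2):
    """True if the queens in columns c1 and c2 attack each other."""
    r1, r2 = state[c1], state[c2]
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)   # same row or same diagonal

def is_solution(state):
    """Goal test: no pair of queens attacks each other."""
    n = len(state)
    return all(not attacks(state, i, j)
               for i in range(n) for j in range(i + 1, n))
```

For example, placing every queen on the main diagonal fails `is_solution`, whereas the placement (0, 4, 7, 5, 2, 6, 1, 3) passes.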
– State Representation: The state is represented by the current location of the agent and
the remaining part of the route to the destination.
– Initial State: The initial state is the starting location of the agent.
– Goal State: The goal state is the destination or a specific location the agent needs to
reach.
– Actions: Legal actions involve moving from one location to another along a valid
path.
■ Consider a GPS navigation system guiding a driver from their current location to a
specific destination. The AI agent needs to determine the best sequence of roads or
paths to reach the goal efficiently, considering factors like traffic, road conditions, and
travel time.
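Because roads have different travel times, plain breadth-first search is not enough here; a cost-aware method such as uniform-cost search (Dijkstra's algorithm) fits this formulation. The road network and travel times below are made-up values for illustration:

```python
import heapq

# Hypothetical road network: junctions with travel times in minutes.
roads = {
    "Home":     [("Main St", 5), ("Ring Rd", 12)],
    "Main St":  [("Downtown", 10)],
    "Ring Rd":  [("Downtown", 4)],
    "Downtown": [],
}

def best_route(graph, start, goal):
    """Uniform-cost search (Dijkstra): returns (total_time, path) or None."""
    frontier = [(0, start, [start])]       # priority queue ordered by cost so far
    done = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, minutes in graph.get(node, []):
            if nxt not in done:
                heapq.heappush(frontier, (cost + minutes, nxt, path + [nxt]))
    return None

total, route = best_route(roads, "Home", "Downtown")   # 15 minutes via Main St
```

Here the cheaper route goes through Main St (5 + 10 = 15 minutes) even though Ring Rd to Downtown is the shorter final leg.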
– State Representation: The state is represented by the cities the salesman needs to visit
and the order in which they are visited.
– Goal State: The goal is to find a tour that visits each city exactly once and returns to
the starting city with the minimum total travel distance.
– Actions: Legal actions involve choosing the next city to visit in the tour.
■ Imagine a salesperson who needs to visit multiple cities to make sales. The AI agent
must determine the most efficient route that minimizes the total travel distance,
ensuring that each city is visited exactly once and returning to the starting point.
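For a handful of cities, this formulation can be solved by exhaustive search: fix the starting city and try every ordering of the remaining ones. The distance table below is a made-up example:

```python
from itertools import permutations

# Toy symmetric distance table between four cities (made-up values).
dist = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 3,
}

def d(a, b):
    """Distance between two cities, in either order."""
    return dist.get((a, b)) or dist.get((b, a))

def tour_length(tour):
    """Total distance of a tour that returns to its starting city."""
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def best_tour(cities):
    """Exhaustive search: fix the first city, try every ordering of the rest."""
    first, rest = cities[0], cities[1:]
    return min(((first,) + p for p in permutations(rest)), key=tour_length)

shortest = best_tour(("A", "B", "C", "D"))   # total length 18
```

Exhaustive search is only feasible for small instances, since the number of tours grows factorially with the number of cities; this is precisely why heuristics matter for TSP.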
– Search Algorithms: Both route finding and the traveling salesman problem can be
solved using search algorithms like A* search, depth-first search, or breadth-first
search. These algorithms explore the space of possible routes to find the optimal or
near-optimal solution.
■ These algorithms systematically explore the search space without using any domain-
specific knowledge or heuristics. Here are some commonly used uninformed search
techniques:
■ BFS starts at the root node and explores all nodes at the current depth level before moving on
to the next depth level.
■ BFS is used to search a tree or graph data structure for a node that meets a given set of
criteria. It begins at the root node of the tree or graph and investigates all the nodes at the
current depth level before moving on to the nodes at the next depth level.
■ It uses a first-in, first-out (FIFO) queue, which makes it well suited to level-by-level
traversal of structures such as decision trees.
– Step 1: Initialize an empty Queue.
– Step 2: Select a starting node (visiting a node) and insert it into the Queue.
– Step 3: While the Queue is not empty, extract a node from the Queue and insert its
unvisited child nodes (exploring a node) into the Queue.
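The steps above translate directly into code. The sketch below also keeps parent links so the path to the goal can be reconstructed; the example graph is an assumption for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """BFS per the steps above: insert the starting node into a Queue, then
    repeatedly extract a node and insert its unvisited children (FIFO)."""
    queue = deque([start])          # Step 2: insert the starting node
    visited = {start}
    parent = {start: None}          # parent links, used to rebuild the path
    while queue:                    # Step 3: repeat while the Queue is not empty
        node = queue.popleft()
        if node == goal:
            path = []               # walk parent links back to the start
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                parent[child] = node
                queue.append(child)
    return None                     # goal not reachable

# Example graph (assumed): S has children A and C, A has B, B has D and E, C has G.
graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"]}
route = bfs(graph, "S", "G")       # shortest path in edges: S -> C -> G
```

Because the queue is FIFO, BFS always finds the goal at the shallowest possible depth, which is what makes it complete and (with unit step costs) optimal.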
– Space Complexity: the space complexity of BFS is determined by the memory needed for the
frontier, which is O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
– Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then
BFS will find a solution.
– Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node
(for example, when all step costs are equal).
■ It is called the depth-first search because it starts from the root node and follows each path to
its greatest depth node before moving to the next path.
■ DFS uses a stack data structure for its implementation (Last In, First Out).
■ Advantages:
■ DFS requires less memory than BFS, as it only needs to store the stack of nodes on the path from the root node to the current node.
■ It can reach the goal node faster than BFS (if it happens to explore the right path first).
■ Disadvantage:
■ There is a possibility that many states keep recurring, and there is no guarantee of finding a solution.
■ DFS searches deep into the tree, and on infinite or cyclic state spaces it may descend forever (an infinite loop).
■ It will start searching from root node S and traverse A, then B, then D and E. After traversing E it
will backtrack, as E has no other successor and the goal node has not yet been found. After
backtracking it will traverse node C and then G, where it terminates because it has found the goal node.
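The tree implied by this trace (reconstructed here as an assumption: S has children A and C, A has child B, B has children D and E, C has child G) lets us reproduce the expansion order with an iterative, stack-based DFS:

```python
def dfs(graph, start, goal):
    """Iterative DFS with an explicit stack (LIFO); returns the order in
    which nodes are expanded, stopping once the goal is reached."""
    stack = [start]
    expanded = []
    while stack:
        node = stack.pop()
        if node in expanded:
            continue
        expanded.append(node)
        if node == goal:
            return expanded
        # Push children in reverse so the leftmost child is expanded first.
        for child in reversed(graph.get(node, [])):
            stack.append(child)
    return expanded

# Tree reconstructed from the trace above (an assumption).
tree = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"], "C": ["G"]}
order = dfs(tree, "S", "G")   # expansion order: S, A, B, D, E, C, G
```

Swapping the FIFO queue of BFS for a LIFO stack is the only structural change, yet it produces the deep-first, backtracking behaviour described in the trace.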
■ Where:
– m = the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).
– Space Complexity: DFS needs to store only a single path from the root node, so its space
complexity equals the size of the fringe, which is O(bm), where b is the branching factor.
– Optimality: DFS is non-optimal, as it may take a large number of steps or incur a high path cost
before reaching the goal node.