AIML PROBLEM SOLVING AGENTS - MODULE 2

PROBLEM SOLVING AGENT


■ A problem-solving agent in artificial intelligence is based on a framework that includes key
components and stages. Here is a breakdown of those components:

– Perception: The agent takes in information from the environment using sensors. This
information is used to determine the current state of the world.

– Action: Based on the perceived information, the agent selects actions to execute. These
actions are performed in the environment to bring about changes.

– Goal Formulation: The agent must have a clear understanding of its objectives or
goals. The process of determining what needs to be achieved is called goal
formulation.

PROBLEM SOLVING AGENT


– Problem Formulation: The agent needs to convert the goal into a specific problem
that it can solve. This involves defining the current state, the possible actions, and the
goal state.

– Search: To find a sequence of actions that lead from the current state to the goal state,
the agent conducts a search through the space of possible actions and states. Various
search algorithms can be employed for this purpose.

– Planning: Planning involves selecting a sequence of actions that will transform the
current state into the goal state. This is often done through the use of heuristics or
search algorithms.


PROBLEM SOLVING AGENT


– Learning: The agent may improve its performance over time by learning from
experience. This could involve adapting its behavior based on feedback from the
environment.
– Execution: The agent executes the chosen actions in the real world or simulated
environment.
– Monitoring: The agent continually monitors the environment to ensure that its actions
are leading toward the goal. If there are deviations or changes in the environment, the
agent may need to adapt its plan.
– Knowledge Representation: The agent maintains internal representations of the world,
its current state, and the knowledge needed for decision-making. This representation is
crucial for reasoning and problem-solving.
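
■ As an illustration of how these stages fit together, the following Python sketch shows a minimal
perceive-formulate-search-execute loop. It is a sketch only: the helper functions formulate_problem
and search are hypothetical placeholders standing in for whatever problem formulation and search
algorithm the agent actually uses.

# Minimal problem-solving agent loop (illustrative sketch; formulate_problem
# and search are hypothetical placeholders, not a standard library API).
def simple_problem_solving_agent(percept, goal, formulate_problem, search):
    """Return the next action to execute, planning a full action sequence first."""
    state = percept                            # Perception: observe the current state
    problem = formulate_problem(state, goal)   # Goal and problem formulation
    plan = search(problem)                     # Search/planning: a sequence of actions
    if not plan:
        return None                            # No solution found
    return plan.pop(0)                         # Execution: the first action of the plan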

EXAMPLES OF PROBLEM SOLVING: 8-PUZZLE TOY GAME


■ The 8-puzzle, an instance of which is shown in the figure below, consists of a 3x3 board
with eight numbered tiles and a blank space.

■ A tile adjacent to the blank space can slide into the space.

■ The object is to reach the configuration shown on the right of the figure.

■ One important trick is to notice that rather than use operators such as "move the 3 tile
into the blank space," it is more sensible to have operators such as "the blank space
changes places with the tile to its left." This is because there are fewer of the latter
kind of operator. This leads us to the following formulation:


EXAMPLES OF PROBLEM SOLVING: 8-PUZZLE TOY GAME


– States: a state description specifies the location of each of the eight tiles in one of the
nine squares. For efficiency, it is useful to include the location of the blank.

– Operators: the blank moves left, right, up, or down.

– Goal test: state matches the goal configuration shown in Figure below.

– Path cost: each step costs 1, so the path cost is just the length of the path.
■ The 8-puzzle belongs to the family of sliding-block puzzles.
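
■ A minimal Python sketch of this formulation is shown below. It assumes the state is stored as a
tuple of nine entries, with 0 standing for the blank; the goal configuration shown here is an
assumption for illustration only.

# Illustrative 8-puzzle formulation: the state is a tuple of 9 values, 0 = blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)                    # assumed goal configuration
MOVES = {'left': -1, 'right': 1, 'up': -3, 'down': 3}

def successors(state):
    """Yield (operator, new_state) pairs by sliding the blank, as described above."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Skip operators that would move the blank off the board.
        if (action == 'left' and col == 0) or (action == 'right' and col == 2):
            continue
        if (action == 'up' and row == 0) or (action == 'down' and row == 2):
            continue
        s = list(state)
        s[blank], s[blank + delta] = s[blank + delta], s[blank]
        yield action, tuple(s)                        # each step has path cost 1

def goal_test(state):
    return state == GOAL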

EXAMPLES OF PROBLEM SOLVING: 8-QUEENS GAME


■ The goal of the 8-queens problem is to place eight queens on a chessboard such that no
queen attacks any other. (A queen attacks any piece in the same row, column or
diagonal.)

■ Figure below shows an attempted solution that fails: the queen in the rightmost column
is attacked by the queen at top left.
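
■ The attack condition (same row, column, or diagonal) is easy to check directly. A minimal sketch
follows, assuming a candidate placement is a list cols where cols[r] is the column of the queen in
row r (so no two queens share a row by construction):

# Illustrative 8-queens check: cols[r] is the column of the queen placed in row r.
def conflicts(cols):
    """Count pairs of queens that attack each other (same column or diagonal)."""
    count = 0
    for r1 in range(len(cols)):
        for r2 in range(r1 + 1, len(cols)):
            same_col = cols[r1] == cols[r2]
            same_diag = abs(cols[r1] - cols[r2]) == abs(r1 - r2)
            if same_col or same_diag:
                count += 1
    return count

def is_solution(cols):
    """A goal state places all eight queens with no attacking pair."""
    return len(cols) == 8 and conflicts(cols) == 0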


REAL WORLD PROBLEMS: ROUTE FINDING


■ Problem Definition: Route Finding

– State Representation: The state is represented by the current location of the agent and
the remaining part of the route to the destination.

– Initial State: The initial state is the starting location of the agent.

– Goal State: The goal state is the destination or a specific location the agent needs to
reach.

– Actions: Legal actions involve moving from one location to another along a valid
path.
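
■ A minimal sketch of this formulation as a search problem follows; the road map, locations, and
costs are hypothetical values used only for illustration.

# Illustrative route-finding formulation: the road map is an adjacency dictionary
# mapping a location to its neighbours and the cost (e.g. distance) to each.
road_map = {
    'A': {'B': 5, 'C': 2},
    'B': {'A': 5, 'D': 4},
    'C': {'A': 2, 'D': 7},
    'D': {'B': 4, 'C': 7},
}
initial_state = 'A'    # starting location of the agent
goal_state = 'D'       # destination

def actions(state):
    """Legal actions: move along any road leaving the current location."""
    return list(road_map[state])

def result(state, action):
    """Moving to a neighbouring location makes it the new current location."""
    return action

def step_cost(state, action):
    return road_map[state][action]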

REAL WORLD PROBLEMS: ROUTE FINDING


■ Example:

■ Consider a GPS navigation system guiding a driver from their current location to a
specific destination. The AI agent needs to determine the best sequence of roads or
paths to reach the goal efficiently, considering factors like traffic, road conditions, and
travel time.


TOURING AND TRAVELLING SALESPERSON PROBLEMS


■ Traveling Salesman Problem (TSP): Problem Definition

– State Representation: The state is represented by the cities the salesman needs to visit
and the order in which they are visited.

– Initial State: The initial state is any starting city.

– Goal State: The goal is to find a tour that visits each city exactly once and returns to
the starting city with the minimum total travel distance.

– Actions: Legal actions involve choosing the next city to visit in the tour.
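
■ A minimal sketch of this formulation follows, using a small hypothetical distance matrix and a
brute-force search over permutations (feasible only for a handful of cities; realistic instances
need the techniques discussed later).

from itertools import permutations

# Illustrative TSP: dist[i][j] is the travel distance between city i and city j.
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(tour):
    """Total distance of a tour that returns to its starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(n):
    """Try every ordering of the remaining cities, starting from city 0."""
    best = min(permutations(range(1, n)), key=lambda rest: tour_length((0,) + rest))
    return (0,) + best, tour_length((0,) + best)

# Example: brute_force_tsp(4) returns ((0, 1, 3, 2), 80) for the matrix above.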

TOURING AND TRAVELLING SALESPERSON PROBLEMS


■ Example:

■ Imagine a salesperson who needs to visit multiple cities to make sales. The AI agent
must determine the most efficient route that minimizes the total travel distance,
ensuring that each city is visited exactly once and returning to the starting point.


STRATEGIES FOR REAL WORLD PROBLEMS


■ Solution Strategies:

– Search Algorithms: Both route finding and the traveling salesman problem can be
solved using search algorithms like A* search, depth-first search, or breadth-first
search. These algorithms explore the space of possible routes to find the optimal or
near-optimal solution.

– Heuristics: In real-world scenarios, heuristics may be employed to guide the search
efficiently. For example, in route finding, a heuristic might consider traffic conditions
or road speed limits to estimate travel time.
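
■ A minimal sketch of such a heuristic for route finding follows. It assumes straight-line
coordinates are available for each location, plus an assumed average road speed; the resulting
travel-time estimate could serve as the h(n) value guiding an A* search.

import math

# Illustrative route-finding heuristic: estimated travel time to the goal,
# based on straight-line distance and an assumed average road speed.
coords = {'A': (0.0, 0.0), 'B': (3.0, 4.0), 'D': (6.0, 8.0)}  # hypothetical coordinates (km)
AVERAGE_SPEED = 50.0                                          # assumed km/h

def heuristic(location, goal):
    """Travel-time estimate: straight-line distance divided by the average speed."""
    (x1, y1), (x2, y2) = coords[location], coords[goal]
    straight_line_km = math.hypot(x2 - x1, y2 - y1)
    # Optimistic (admissible) as long as no road is faster than AVERAGE_SPEED.
    return straight_line_km / AVERAGE_SPEED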

STRATEGIES FOR REAL WORLD PROBLEMS


– Optimization Techniques: The traveling salesman problem often involves optimization
techniques such as dynamic programming or genetic algorithms to find the most
efficient tour.

– Constraint Satisfaction: In some cases, constraints like avoiding toll roads or
considering specific preferences may be part of the problem. Constraint satisfaction
techniques can be applied to address these constraints.
■ Both of these problems are representative of real-world challenges that involve
decision-making, optimization, and path finding. The application of AI techniques to
solve these problems contributes to the development of intelligent systems that can
navigate and plan routes in complex environments.


UNINFORMED SEARCH ALGORITHMS


■ Uninformed search techniques are also known as blind search or brute-force techniques;
unlike heuristic (informed) techniques, they use no problem-specific guidance.

■ These algorithms systematically explore the search space without using any domain-
specific knowledge or heuristics. Here are some commonly used uninformed search
techniques:

1. Breadth-First Search Algorithm (BFS)

2. Depth-First Search Algorithm (DFS)

CHARACTERISTICS OF UNINFORMED SEARCH


■ Completeness:
– Breadth-First Search and Uniform-Cost Search are complete algorithms, meaning they
will find a solution if one exists. Depth-First Search and Depth-Limited Search are not
complete in all cases.
■ Optimality:
– Breadth-First Search (when all step costs are equal) and Uniform-Cost Search are
optimal, as they guarantee finding the solution with the minimum path cost. DFS and
Depth-Limited Search may not be optimal.
■ Memory Usage:
– BFS tends to use more memory, as it stores all nodes at each level. DFS uses less
memory, but it may run into problems with infinite paths. Depth-Limited Search and
Iterative Deepening DFS (IDDFS) address some of these memory concerns.


BREADTH FIRST SEARCH ALGORITHM (BFS)


■ Breadth-first search (BFS) is a graph traversal algorithm that searches a tree or graph data
structure for a node that meets a given property.

■ BFS starts at the root node and explores all nodes at the current depth level before moving on
to the next depth level.

■ BFS uses a first-in, first-out (FIFO) queue for its frontier, which makes it well suited to
level-by-level exploration of structures such as decision trees.

BREADTH FIRST SEARCH ALGORITHM (BFS)


– Now let’s take a look at the steps involved in traversing a graph by using Breadth-First
Search:

– Step 1: Take an Empty Queue.

– Step 2: Select a starting node (visiting a node) and insert it into the Queue.

– Step 3: Provided that the Queue is not empty, extract a node from the Queue and insert its
unvisited child nodes (exploring the node) into the Queue.

– Step 4: Print the extracted node. Repeat Steps 3 and 4 until the Queue is empty or the goal
node is found.
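
■ A minimal Python sketch of these steps follows, assuming the graph is given as an adjacency
dictionary; the example graph in the comment is hypothetical.

from collections import deque

# Illustrative BFS traversal following the steps above.
def bfs(graph, start):
    queue = deque([start])            # Steps 1-2: take an empty queue, insert the start node
    visited = {start}
    order = []
    while queue:                      # Step 3: while the queue is not empty
        node = queue.popleft()        # extract a node (first in, first out)
        order.append(node)            # Step 4: "print" / record the extracted node
        for child in graph.get(node, []):
            if child not in visited:  # insert unvisited child nodes into the queue
                visited.add(child)
                queue.append(child)
    return order

# Example: bfs({'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': []}, 'S')
# returns ['S', 'A', 'B', 'C', 'D'] -- level by level.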


BREADTH FIRST SEARCH ALGORITHM (BFS)


■ Time Complexity: the time complexity of BFS is given by the number of nodes generated until
the shallowest goal node is reached, where d is the depth of the shallowest solution and b is the
branching factor (the maximum number of successors of any node).

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

– Space Complexity: the space complexity of BFS is given by the memory size of the frontier, which is
O(b^d).

– Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then
BFS will find a solution.

– Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
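
■ As a rough worked example of this growth, with an assumed branching factor b = 10 and a
shallowest solution at depth d = 5:

# Worked example of O(b^d) growth (assumed values b = 10, d = 5).
b, d = 10, 5
nodes = sum(b**i for i in range(d + 1))   # 1 + b + b^2 + ... + b^d
print(nodes)                              # 111111 nodes in the worst case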


APPLICATIONS OF BREADTH FIRST SEARCH ALGORITHM (BFS)

UNINFORMED SEARCH: DEPTH FIRST SEARCH TECHNIQUE


■ Depth-first search is a recursive algorithm for traversing a tree or graph data structure.

■ It is called the depth-first search because it starts from the root node and follows each path to
its greatest depth node before moving to the next path.

■ DFS uses a stack data structure for its implementation (Last In, First Out).
■ Advantages:

■ DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.

■ It can reach the goal node in less time than the BFS algorithm (if it happens to explore the right path first).

■ Disadvantages:

■ There is the possibility that many states keep recurring, and there is no guarantee of finding a solution.

■ The DFS algorithm searches deep down a path and may descend into an infinite loop if the depth is not limited.
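
■ A minimal Python sketch of depth-first search with an explicit stack (Last In, First Out) follows,
under the same adjacency-dictionary assumption as the BFS sketch above; the example graph in the
comment is a hypothetical tree consistent with the traversal described on the next slide.

# Illustrative DFS traversal using an explicit stack (LIFO).
def dfs(graph, start, goal=None):
    stack = [start]
    visited = set()
    order = []
    while stack:
        node = stack.pop()                # take the most recently added node (LIFO)
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:                  # stop early if the goal node is reached
            break
        # Push children in reverse so the leftmost child is explored first.
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(child)
    return order

# Example: dfs({'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G'],
#               'D': [], 'E': [], 'G': []}, 'S', goal='G')
# visits S, A, B, D, E, backtracks, then C and terminates at goal G.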


UNINFORMED SEARCH: DEPTH FIRST SEARCH TECHNIQUE


■ In the search tree below, we show the flow of depth-first search; it follows the order:

■ Root node ---> left node ---> right node.

■ It starts searching from root node S and traverses A, then B, then D and E. After traversing E, it
backtracks, because E has no other successor and the goal node has not yet been found. After backtracking,
it traverses node C and then G, where it terminates because it has found the goal node.

UNINFORMED SEARCH: DEPTH FIRST SEARCH TECHNIQUE


■ The time complexity of the depth-first search algorithm is represented as

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

■ Where,

– b is the branching factor and m is the maximum depth of any node, which can be much larger than d (the shallowest solution depth)

– Space Complexity: the DFS algorithm needs to store only a single path from the root node (plus the unexpanded
sibling nodes along it), so the space complexity of DFS is the size of the fringe, which is O(bm).

– Optimality: the DFS algorithm is non-optimal, as it may find a solution that takes a large number of steps or
has a high path cost to reach the goal node.
