Problem 1: Depth-First Search (DFS) and Breadth-First Search (BFS)
Aim
To implement Depth-First Search (DFS) and Breadth-First Search (BFS) algorithms
dynamically to explore a graph, simulating an AI agent exploring a state space.
Description
In Artificial Intelligence, uninformed search strategies such as DFS and BFS are widely
used for exploring graphs or state spaces.
- Breadth-First Search (BFS): Explores nodes level by level, visiting all neighbors of a
node before moving deeper. BFS guarantees the shortest path in an unweighted graph (a
path-reconstruction sketch follows this description).
- Depth-First Search (DFS): Explores as far as possible along one branch before
backtracking. DFS is useful in puzzles, topological ordering, and backtracking problems.
Both algorithms form the foundation of pathfinding and AI search methods in problem-
solving.
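Because BFS expands nodes in increasing order of distance from the start, the actual shortest
path can be recovered by remembering each node's parent. A minimal sketch of this idea (the
function name and parent-pointer representation are illustrative, not part of the program below):

from collections import deque

def bfs_shortest_path(graph, start, goal):
    # parent[v] records the node from which v was first reached
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # walk back through the parent pointers to rebuild the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    return None  # goal not reachable from start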
Program
from collections import deque, defaultdict
def create_graph():
    # adjacency list built from user-entered undirected edges
    graph = defaultdict(list)
    edges = int(input("Enter number of edges: "))
    print("Enter edges (e.g. A B for edge between A and B):")
    for _ in range(edges):
        u, v = input().split()
        graph[u].append(v)
        graph[v].append(u)
    return graph
def bfs(graph, start):
    # level-by-level traversal using a FIFO queue
    visited = set()
    queue = deque([start])
    result = []
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.add(node)
            result.append(node)
            queue.extend(graph[node])  # already-visited neighbors are skipped on dequeue
    return result
def dfs(graph, start, visited=None):
    # recursive depth-first traversal
    if visited is None:
        visited = set()
    visited.add(start)
    result = [start]
    for neighbor in graph[start]:
        if neighbor not in visited:
            result += dfs(graph, neighbor, visited)
    return result
graph = create_graph()
start_node = input("Enter start node: ")
print("BFS Traversal:", bfs(graph, start_node))
print("DFS Traversal:", dfs(graph, start_node))
Sample Output 1
Enter number of edges: 5
Enter edges:
A B
A C
B D
C E
D F
Enter start node: A
BFS Traversal: ['A', 'B', 'C', 'D', 'E', 'F']
DFS Traversal: ['A', 'B', 'D', 'F', 'C', 'E']
Sample Output 2
Enter number of edges: 4
Enter edges:
1 2
1 3
2 4
3 5
Enter start node: 1
BFS Traversal: ['1', '2', '3', '4', '5']
DFS Traversal: ['1', '2', '4', '3', '5']
Result
Both BFS and DFS were successfully implemented and executed with user-defined graph
input. They systematically explore the graph, simulating an AI agent’s movement
through a state space.
- Completeness: BFS is complete; DFS is not complete on infinite state spaces.
- Optimality: BFS is optimal for unweighted graphs; DFS is not.
- Time Complexity: O(V + E) for both.
- Space Complexity: O(V) for both.
Problem 2: Travelling Salesman Problem (TSP)
Aim
To solve the Travelling Salesman Problem using brute-force search to find the shortest
possible tour that visits all cities and returns to the start.
Description
The Travelling Salesman Problem (TSP) is a classic NP-hard combinatorial optimization
problem. The goal is to find the minimum-cost route that visits each city exactly once and
returns to the origin.
Applications include AI planning, robotics path planning, and route optimization
systems. Since brute-force generates all permutations, it is only feasible for small
datasets.
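For example, with n = 4 cities and the starting city fixed, only (4 - 1)! = 6 candidate tours
have to be evaluated, but at n = 12 this already grows to 11! = 39,916,800 tours, which is why
brute force is limited to small instances.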
Program
import itertools
def input_graph():
    # read an n x n cost matrix from the user
    n = int(input("Enter number of cities: "))
    graph = []
    print("Enter cost matrix row by row (use space between values):")
    for _ in range(n):
        row = list(map(int, input().split()))
        graph.append(row)
    return graph, n
def tsp(graph, n):
    cities = range(n)
    min_cost = float('inf')
    best_path = []
    # fix city 0 as the start and try every ordering of the remaining cities
    for perm in itertools.permutations(cities[1:]):
        current_path = [0] + list(perm) + [0]
        cost = sum(graph[current_path[i]][current_path[i + 1]]
                   for i in range(len(current_path) - 1))
        if cost < min_cost:
            min_cost = cost
            best_path = current_path
    return best_path, min_cost
graph, n = input_graph()
path, cost = tsp(graph, n)
print("Best Path (0-indexed):", path)
print("Minimum Cost:", cost)
Sample Output 1
Enter number of cities: 4
Enter cost matrix:
0 10 15 20
10 0 35 25
15 35 0 30
20 25 30 0
Best Path: [0, 1, 3, 2, 0]
Minimum Cost: 80
Sample Output 2
Enter number of cities: 3
Enter cost matrix:
0 5 9
5 0 6
9 6 0
Best Path: [0, 1, 2, 0]
Minimum Cost: 20
Result
The TSP algorithm correctly computes the shortest tour among cities using brute-force.
- Completeness: Complete (checks all possible paths).
- Optimality: Always finds the optimal solution.
- Time Complexity: O(n!).
- Space Complexity: O(n).
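For somewhat larger instances, a dynamic programming formulation (Held-Karp) reduces the O(n!)
enumeration to O(n² · 2ⁿ) time at the cost of O(n · 2ⁿ) memory. A minimal sketch, assuming the
same 0-indexed cost matrix as the program above (this is an alternative technique, not part of
the brute-force program):

def tsp_held_karp(graph, n):
    # assumes n >= 2
    # dp[(mask, j)] = cheapest cost of a path that starts at city 0,
    # visits exactly the cities in bitmask 'mask', and ends at city j
    dp = {(1, 0): 0}
    for mask in range(1, 1 << n):
        for j in range(n):
            if (mask, j) not in dp:
                continue
            for k in range(n):
                if (mask >> k) & 1:
                    continue  # city k already visited in this mask
                new_mask = mask | (1 << k)
                cost = dp[(mask, j)] + graph[j][k]
                if cost < dp.get((new_mask, k), float('inf')):
                    dp[(new_mask, k)] = cost
    full = (1 << n) - 1
    # close the tour by returning from the last city to city 0
    return min(dp[(full, j)] + graph[j][0] for j in range(1, n))

On the 4-city cost matrix of Sample Output 1 this sketch returns the same minimum cost of 80.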
Problem 3: Simulated Annealing
Aim
To implement Simulated Annealing, a metaheuristic algorithm used to find approximate
solutions for optimization problems.
Description
Simulated Annealing (SA) is inspired by annealing in metallurgy, where slow cooling lets atoms
settle into low-energy states. The algorithm occasionally accepts worse solutions with a
temperature-dependent probability, helping it escape local optima and converge toward the
global minimum.
It is widely applied in AI optimization, scheduling, and machine learning hyperparameter
tuning.
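The acceptance rule is the key mechanism: a move that lowers the objective is always accepted,
while a move that raises it by ΔE > 0 is accepted with probability exp(-ΔE / T). Early in the
run, when T is large, almost any move is accepted; as T falls, the search behaves more and more
like greedy descent. For example, a move with ΔE = 2 is accepted with probability
exp(-2/100) ≈ 0.98 at T = 100 but only exp(-2/1) ≈ 0.14 at T = 1.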
Program
import math
import random
def objective_function(x):
    # multimodal test function with several local minima
    return x**2 + 4 * math.sin(5 * x) + 2 * math.sin(10 * x)
def simulated_annealing():
    start = float(input("Enter start of search range: "))
    end = float(input("Enter end of search range: "))
    T = float(input("Enter initial temperature (e.g. 1000): "))
    cooling = float(input("Enter cooling rate (e.g. 0.95): "))
    current_x = random.uniform(start, end)
    current_energy = objective_function(current_x)
    min_T = 0.1
    while T > min_T:
        # propose a random neighbour; out-of-range proposals are rejected
        new_x = current_x + random.uniform(-1, 1)
        if new_x < start or new_x > end:
            continue
        new_energy = objective_function(new_x)
        delta = new_energy - current_energy
        # always accept improvements; accept worse moves with probability exp(-delta / T)
        if delta < 0 or math.exp(-delta / T) > random.random():
            current_x = new_x
            current_energy = new_energy
        T *= cooling
    return current_x, current_energy
sol, energy = simulated_annealing()
print("Best Solution:", sol)
print("Objective Function Value:", energy)
Sample Output 1
Enter start of search range: -10
Enter end of search range: 10
Enter initial temperature: 1000
Enter cooling rate: 0.95
Best Solution: -0.4
Objective Function Value: -1.8
Sample Output 2
Enter start of search range: -5
Enter end of search range: 5
Enter initial temperature: 500
Enter cooling rate: 0.9
Best Solution: 1.2
Objective Function Value: -0.75
Result
Simulated Annealing successfully approximated the global optimum.
- Completeness: Not complete (stochastic).
- Optimality: Not guaranteed, but often finds near-optimal solutions.
- Time Complexity: O(k), where k = number of iterations.
- Space Complexity: O(1).
Problem 4: Wumpus World Problem
Aim
To simulate an intelligent agent that explores the Wumpus World to find gold while
avoiding hazards like pits and the Wumpus.
Description
Wumpus World is a partially observable environment where the agent perceives limited
sensory data and must infer safe cells to explore. It is useful for studying rule-based and
knowledge-based AI agents.
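In the classic formulation the agent never sees the hazards directly; it only perceives a
Breeze in cells adjacent to a Pit, a Stench in cells adjacent to the Wumpus, and a Glitter on
the gold cell. A minimal sketch of how such percepts could be derived for the grid
representation used in the program below (the function is illustrative; the program itself
lets the agent test cell contents directly):

def percepts(world, x, y):
    # percepts the agent would receive while standing in cell (x, y)
    n = len(world)
    sensed = []
    for dx, dy in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
        nx, ny = x + dx, y + dy
        if 0 <= nx < n and 0 <= ny < n:
            if world[nx][ny] == 'Pit':
                sensed.append('Breeze')
            elif world[nx][ny] == 'Wumpus':
                sensed.append('Stench')
    if world[x][y] == 'Gold':
        sensed.append('Glitter')
    return sensed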
Program
def create_world():
    n = int(input("Enter grid size (e.g. 4 for 4x4): "))
    world = [['' for _ in range(n)] for _ in range(n)]
    print("Enter positions (0-indexed) as: type x y")
    print("Types: G=Gold, P=Pit, W=Wumpus")
    num = int(input("Enter number of elements to add: "))
    for _ in range(num):
        t, x, y = input().split()
        x, y = int(x), int(y)
        if t == 'G':
            world[x][y] = 'Gold'
        elif t == 'P':
            world[x][y] = 'Pit'
        elif t == 'W':
            world[x][y] = 'Wumpus'
    return world, n
def is_safe(world, x, y):
    # a cell is safe if it lies inside the grid and holds no Pit or Wumpus
    n = len(world)
    if 0 <= x < n and 0 <= y < n and world[x][y] not in ['Pit', 'Wumpus']:
        return True
    return False
def find_gold(world):
    # depth-first exploration from (0, 0), expanding only safe cells
    n = len(world)
    stack = [(0, 0)]
    visited = set()
    while stack:
        x, y = stack.pop()
        if (x, y) in visited:
            continue
        visited.add((x, y))
        if world[x][y] == 'Gold':
            print(f"Agent found the gold at ({x}, {y})!")
            return True
        for dx, dy in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
            nx, ny = x + dx, y + dy
            if is_safe(world, nx, ny) and (nx, ny) not in visited:
                stack.append((nx, ny))
    print("Agent failed to find the gold.")
    return False
world, size = create_world()
find_gold(world)
Sample Output 1
Enter grid size: 4
Enter number of elements to add: 3
G 0 3
P 1 1
W 2 0
Agent found the gold at (0, 3)!
Sample Output 2
Enter grid size: 3
Enter number of elements to add: 2
G 2 2
W 1 0
Agent found the gold at (2, 2)!
Result
The agent safely explored the grid and found the gold by expanding only cells known to be free
of a Pit or the Wumpus.
- Completeness: Complete if search space is finite.
- Optimality: Not guaranteed (greedy exploration).
- Time Complexity: O(n²).
- Space Complexity: O(n²).