LAB REPORT
SUBMITTED BY:
ARUN BHATNAGAR
102169006
3-EEC-3
SUBMITTED TO:
DR. VENKATA KARTEEK YANUMULA
EXPERIMENT 1
AIM- To implement Breadth First Search (BFS)
THEORY:
Breadth-first search is one of the simplest algorithms for searching a graph and the
archetype for many important graph algorithms. Prim's minimum-spanning-tree algorithm
and Dijkstra's single-source shortest-paths algorithm use ideas similar to those in breadth-
first search. Given a graph G and a distinguished source vertex s, breadth-first search
systematically explores the edges of G to "discover" every vertex that is reachable from
s. It computes the distance from s to each reachable vertex, where the distance to a
vertex v equals the smallest number of edges needed to go from s to v. Breadth-first
search also produces a "breadth-first tree" with root s that contains all reachable
vertices. For any vertex v reachable from s, the simple path in the breadth-first tree from
s to v corresponds to a shortest path from s to v in G, that is, a path containing the
smallest number of edges. The algorithm works on both directed and undirected graphs.
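The distance property described above can be illustrated with a short Python sketch (the adjacency list below is a made-up example graph, not the one used in the experiment):

```python
from collections import deque

def bfs_distances(adj, s):
    """Return the edge-count distance from s to every reachable vertex."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:          # first discovery = shortest path found
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Hypothetical undirected graph: edges 0-1, 0-2, 1-3
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs_distances(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```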
PROCEDURE:
Search and traversal imply visiting all the nodes of a graph. BFS is an algorithm for
searching all the vertices or nodes of a graph or a tree. Unlike a tree, a graph can have
cycles, which may result in visiting the same vertex again. To avoid processing a node
more than once, the vertices are divided into two categories: 1. Visited 2. Not visited.
The algorithm marks each vertex as visited while avoiding cycles. It works as follows:
1. Start by putting the root vertex of the graph at the back of a queue.
2. Take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the back of the queue.
4. Keep repeating steps 2 and 3 until the queue is empty.
CODE:
#include <iostream>
#include <vector>
#include <queue>
using namespace std;

class Graph {
    int V;
    vector<vector<int>> adj;
public:
    Graph(int V) : V(V), adj(V) {}

    void addEdge(int u, int v) {
        adj[u].push_back(v);
        adj[v].push_back(u); // undirected graph
    }

    void BFS(int startVertex) {
        vector<bool> visited(V, false);
        queue<int> q;
        visited[startVertex] = true;
        q.push(startVertex);
        while (!q.empty()) {
            int current = q.front();
            cout << current << " ";
            q.pop();
            for (int neighbor : adj[current]) {
                if (!visited[neighbor]) {
                    visited[neighbor] = true;
                    q.push(neighbor);
                }
            }
        }
    }
};

int main() {
    Graph g(7);
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 3);
    g.addEdge(1, 4);
    g.addEdge(2, 5);
    g.addEdge(2, 6);
    cout << "Breadth-First Traversal starting from vertex 0: ";
    g.BFS(0);
    cout << endl;
    return 0;
}
OUTPUT:
Breadth-First Traversal starting from vertex 0: 0 1 2 3 4 5 6
REPRESENTATION-
EXP-2
Aim-
To implement Depth First Search (DFS)
Theory-
There are two primary graph traversal algorithms: breadth-first search (BFS) and
depth-first search (DFS). For certain problems, it makes absolutely no difference
which you use, but in others the distinction is crucial. The difference between BFS
and DFS lies in the order in which they explore vertices. Depth-first search has a neat
recursive implementation, which eliminates the need to explicitly use a stack.
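The recursive formulation mentioned above can be sketched in a few lines of Python, with the call stack taking the place of an explicit stack (the small example graph is hypothetical):

```python
def dfs(adj, u, visited=None):
    """Recursive DFS: the call stack replaces an explicit stack."""
    if visited is None:
        visited = []
    visited.append(u)
    for v in adj[u]:
        if v not in visited:
            dfs(adj, v, visited)
    return visited

# Hypothetical undirected graph: edges 0-1, 0-2, 1-3
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(dfs(adj, 0))  # [0, 1, 3, 2]
```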
Procedure-
Search and traversal imply visiting all the nodes of a graph. DFS is a recursive algorithm for
searching all the vertices or nodes of a graph or a tree. Unlike a tree, a graph can have
cycles, which may result in visiting the same vertex again. To avoid processing a node
more than once, the vertices are divided into two categories: 1. Visited 2. Not visited.
The algorithm marks each vertex as visited while avoiding cycles. It works as follows:
1. Start by calling the DFS function on the root vertex of the graph.
2. Add the vertex to the visited list.
3. Call DFS on all the neighbours of the vertex that are not in the visited list.
4. Step 3 repeats itself until all the vertices are visited.
Consider the graph in Fig. 1, assuming that node-2 is the root node. Design your own graph and
write a code for implementing DFS using the pseudo code given below. Check the
order of processing the nodes in comparison with the BFS algorithm.
CODE-
#include <iostream>
#include <vector>
#include <stack>
using namespace std;

class Graph {
    int V;
    vector<vector<int>> adj;
public:
    Graph(int V) : V(V), adj(V) {}

    void addEdge(int u, int v) {
        adj[u].push_back(v);
        adj[v].push_back(u); // undirected graph
    }

    void DFS(int startVertex) {
        vector<bool> visited(V, false);
        stack<int> s;
        visited[startVertex] = true;
        s.push(startVertex);
        while (!s.empty()) {
            int current = s.top();
            cout << current << " ";
            s.pop();
            for (int neighbor : adj[current]) {
                if (!visited[neighbor]) {
                    visited[neighbor] = true;
                    s.push(neighbor);
                }
            }
        }
    }
};

int main() {
    Graph g(7);
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 3);
    g.addEdge(1, 4);
    g.addEdge(2, 5);
    g.addEdge(2, 6);
    cout << "Depth-First Traversal starting from vertex 0: ";
    g.DFS(0);
    cout << endl;
    return 0;
}
OUTPUT-
Depth-First Traversal starting from vertex 0: 0 2 6 5 1 4 3
REPRESENTATION
EXP-3
Aim –
To implement Depth First Iterative Deepening Search (DFIDS) using
C/CPP/Java/Python/Matlab code. To compare the performance with BFS and
DFS algorithms in terms of number of steps needed to reach goal state.
Theory-
DFIDS is performed as a form of repetitive DFS moving to a successively deeper depth with
each iteration, or a repetitive Depth-Limited First Search (DLFS) with incremental depths. It
begins by performing a DFS to a depth of one. If no goal has been found, it then discards all
nodes generated and starts over, doing a search to a depth of two. If no goal has been found,
the above step is repeated with a depth of three, and so on, until the goal state is found or some
maximum depth is reached. Since DFIDS expands all nodes at a given depth before expanding nodes at
a greater depth, it is guaranteed to find a shortest-path solution. It has been shown to be
asymptotically optimal over DFS and BFS in terms of time and space complexity.
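The repeated depth-limited searches described above can be sketched in Python (the tree below is a made-up example, not the graph of Fig. 1):

```python
def dls(adj, node, goal, limit, depth=0):
    """Depth-limited DFS: return the depth at which goal is found, else None."""
    if node == goal:
        return depth
    if depth == limit:
        return None
    for child in adj.get(node, []):
        found = dls(adj, child, goal, limit, depth + 1)
        if found is not None:
            return found
    return None

def dfids(adj, start, goal, max_depth=20):
    """Restart DLS with limits 0, 1, 2, ... discarding earlier work each time."""
    for limit in range(max_depth + 1):
        found = dls(adj, start, goal, limit)
        if found is not None:
            return found
    return None

# Hypothetical tree rooted at 0
tree = {0: [1, 2], 1: [3, 4], 2: [5]}
print(dfids(tree, 0, 5))  # 2
```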
Procedure –
Search and traversal imply visiting all the nodes of a graph, or continuing until a goal node
is found. DFIDS is also a recursive algorithm for searching all the vertices or
nodes of a graph or a tree. Consider the graph given in Fig. 1, assuming that
node-2 is the root node; find the number of steps needed to reach node 7 using the
DFIDS algorithm. Compare the results with the BFS and DFS algorithms.
CODE-
#include <iostream>
#include <vector>
#include <stack>
#include <queue>
using namespace std;

const int MAX_DEPTH = 10; // maximum depth explored by DFIDS

class Node {
public:
    int state;   // Current state
    int parent;  // Parent state
    int depth;   // Depth of the node in the tree
    Node(int state, int parent, int depth)
        : state(state), parent(parent), depth(depth) {}
};

class Graph {
public:
    int V;                   // Number of vertices
    vector<vector<int>> adj; // Adjacency list representation
    Graph(int V) {
        this->V = V;
        adj.resize(V);
    }
    void addEdge(int u, int v) {
        adj[u].push_back(v);
        adj[v].push_back(u); // Undirected graph
    }
    // DFS implementation
    void DFS(int start, int goal, int& steps) {
        vector<bool> visited(V, false);
        stack<Node> s;
        s.push(Node(start, -1, 0));
        visited[start] = true;
        while (!s.empty()) {
            Node curr = s.top();
            s.pop();
            if (curr.state == goal) {
                steps = curr.depth;
                return;
            }
            for (int next : adj[curr.state]) {
                if (!visited[next]) {
                    visited[next] = true;
                    s.push(Node(next, curr.state, curr.depth + 1));
                }
            }
        }
        steps = -1; // goal not reachable
    }
    // BFS implementation
    void BFS(int start, int goal, int& steps) {
        vector<bool> visited(V, false);
        queue<Node> q;
        q.push(Node(start, -1, 0));
        visited[start] = true;
        while (!q.empty()) {
            Node curr = q.front();
            q.pop();
            if (curr.state == goal) {
                steps = curr.depth;
                return;
            }
            for (int next : adj[curr.state]) {
                if (!visited[next]) {
                    visited[next] = true;
                    q.push(Node(next, curr.state, curr.depth + 1));
                }
            }
        }
        steps = -1; // goal not reachable
    }
    // Depth-limited DFS used by DFIDS
    void DLS(int start, int goal, int& steps, int limit, int depth) {
        if (start == goal) {
            steps = depth;
            return;
        }
        if (limit == 0) {
            return;
        }
        for (int next : adj[start]) {
            DLS(next, goal, steps, limit - 1, depth + 1);
            if (steps != -1) {
                return;
            }
        }
    }
    // DFIDS implementation
    void DFIDS(int start, int goal, int& steps) {
        for (int depth = 0; depth <= MAX_DEPTH; depth++) {
            steps = -1;
            DLS(start, goal, steps, depth, 0);
            if (steps != -1) {
                return;
            }
        }
    }
};

int main() {
    // Create a graph
    Graph g(5);
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 3);
    g.addEdge(2, 4);
    int start = 0;
    int goal = 4;
    int stepsBFS = -1, stepsDFIDS = -1;
    // Perform BFS
    g.BFS(start, goal, stepsBFS);
    cout << "BFS steps: " << stepsBFS << endl;
    // Perform DFIDS
    g.DFIDS(start, goal, stepsDFIDS);
    cout << "DFIDS steps: " << stepsDFIDS << endl;
    return 0;
}
OUTPUT
BFS steps:3
DFIDS steps:2
EXP-4
Aim-
To implement hill climbing search using C/CPP/Java/Python/Matlab code. To
compare the performance with BFS, DFS, DFIDS algorithms in terms of
number of steps needed to reach goal state.
Theory-
When more information than the initial state, the operators, and the goal test is available, the
search space can usually be constrained, resulting in informed searches, such as Hill Climbing,
Best First, Branch and Bound, A*, etc. They often depend on the use of heuristic information
expressed in the form of a heuristic evaluation function f(n, g), a function of the nodes n and/or
the goals g. At each point in the search path, a successor node that appears to lead most
quickly to the top of the hill (the goal) is selected for exploration.
Procedure –
It is like DFS in that the most promising child is selected for expansion. This process continues
from node to node, with previously expanded nodes being discarded. The hill climbing algorithm
moves to the best neighbouring state at each step to reach the goal state. It is very efficient
compared with blind searches when an informative, reliable function is available to guide the
search to a global goal.
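The neighbour-selection loop described above can be sketched in Python (the state space and the heuristic h below are made-up for illustration):

```python
def hill_climb(neighbors, h, start):
    """Greedy ascent: repeatedly move to the best neighbour while h improves."""
    current = start
    path = [current]
    while True:
        best = max(neighbors.get(current, []), key=h, default=None)
        if best is None or h(best) <= h(current):
            return path  # local optimum (or goal) reached
        current = best
        path.append(current)

# Hypothetical state space; the heuristic h scores closeness to goal state 4
neighbors = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
h = lambda s: s  # toy heuristic: a larger state number means closer to the goal
print(hill_climb(neighbors, h, 0))  # [0, 2, 3, 4]
```

Note that the algorithm keeps no frontier: once a move is made, the siblings of the chosen node are discarded, which is why hill climbing can get stuck at a local optimum.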
Code-
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cmath>
using namespace std;

// Define the objective function (you can replace this with your own function)
double objectiveFunction(double x) {
    return -x * x; // Maximized at x = 0 (i.e., minimizes x*x)
}

// Hill climbing: accept a random neighbour whenever it improves the objective
double hillClimbing(double start, double stepSize, int maxIterations) {
    double current = start;
    for (int i = 0; i < maxIterations; i++) {
        // Random neighbour in [current - stepSize, current + stepSize]
        double neighbor = current + stepSize * (2.0 * rand() / RAND_MAX - 1.0);
        if (objectiveFunction(neighbor) > objectiveFunction(current))
            current = neighbor;
    }
    return current;
}

int main() {
    srand(time(0)); // Initialize random seed
    double result = hillClimbing(5.0, 0.5, 100000);
    cout << "Hill Climbing Result: " << result << " Objective: " <<
        objectiveFunction(result) << endl;
    return 0;
}
Output-
Hill Climbing Result: -4.42761e-17 Objective: -1.96037e-33
EXP 5
AIM: Implementing Best First Search
THEORY:
Best-first search is a type of informed search algorithm that uses a heuristic
function to guide its search. The heuristic function is an estimate of the distance from a
node to the goal node. Best-first search expands the node with the lowest heuristic value first.
This means that it explores the paths that appear closest to the goal first. Best-first search
is an improvement over uninformed search algorithms, such as breadth-first search, because it does
not explore all possible paths before finding a solution. Instead, it focuses on the most
promising paths first. This can make it much more efficient than breadth-first search, especially
for problems with large state spaces. However, best-first search is not always complete or
optimal. This means that it may not find a solution, or it may not find the best solution,
because the heuristic function is only an estimate, and it may not be accurate for all paths.
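The lowest-heuristic-first expansion can be sketched in Python with a priority queue (the graph and the heuristic values below are hypothetical):

```python
import heapq

def best_first(adj, h, start, goal):
    """Always expand the frontier node with the lowest heuristic value."""
    frontier = [(h[start], start)]
    explored = set()
    order = []                       # order in which nodes are expanded
    while frontier:
        _, node = heapq.heappop(frontier)
        if node in explored:
            continue
        explored.add(node)
        order.append(node)
        if node == goal:
            return order
        for nbr in adj.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (h[nbr], nbr))
    return order

# Hypothetical graph; h holds made-up estimates of distance to the goal 'G'
adj = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G']}
h = {'S': 3, 'A': 1, 'B': 2, 'G': 0}
print(best_first(adj, h, 'S', 'G'))  # ['S', 'A', 'G']
```

Here 'A' (h = 1) is expanded before 'B' (h = 2); with a misleading heuristic the same loop can be led down a poor path, which is why best-first search is not guaranteed optimal.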
PROCEDURE:
Best-First Search resembles Depth-First Search (DFS) by prioritizing the most
promising child for expansion, reminiscent of the steepest ascent in hill climbing. It efficiently
navigates node to node, guided by a heuristic function. Similar to hill climbing, best-first search
excels in search efficiency, particularly when aided by a reliable heuristic. However, like hill
climbing, it doesn't guarantee global optimality and relies on the quality of the heuristic for
success.
CODE:
#include <iostream>
#include <queue>
#include <vector>
#include <set>
using namespace std;

struct Node {
    int id;
    int heuristic;
    vector<Node*> neighbors;
    Node* parent = nullptr;
};

// Order the frontier so the node with the lowest heuristic is expanded first
struct CompareHeuristic {
    bool operator()(Node* a, Node* b) const {
        return a->heuristic > b->heuristic;
    }
};

void reconstructPath(Node* goal) {
    if (goal->parent) reconstructPath(goal->parent);
    cout << goal->id << " ";
}

Node* bestFirstSearch(Node* start, int goalId) {
    priority_queue<Node*, vector<Node*>, CompareHeuristic> frontier;
    set<Node*> explored;
    frontier.push(start);
    while (!frontier.empty()) {
        // Get the node with the lowest heuristic value
        Node* currentNode = frontier.top();
        frontier.pop();
        if (currentNode->id == goalId) return currentNode;
        explored.insert(currentNode);
        for (Node* neighbor : currentNode->neighbors) {
            if (!explored.count(neighbor)) {
                neighbor->heuristic = neighbor->id; // node id as a stand-in estimate
                neighbor->parent = currentNode;
                frontier.push(neighbor);
            }
        }
    }
    return nullptr;
}

int main() {
    // Create the graph
    vector<Node*> nodes(6);
    for (int i = 0; i < 6; i++) {
        nodes[i] = new Node();
        nodes[i]->id = i;
        nodes[i]->heuristic = i;
    }
    nodes[0]->neighbors.push_back(nodes[1]);
    nodes[0]->neighbors.push_back(nodes[2]);
    nodes[1]->neighbors.push_back(nodes[3]);
    nodes[1]->neighbors.push_back(nodes[4]);
    nodes[2]->neighbors.push_back(nodes[5]);
    nodes[3]->neighbors.push_back(nodes[5]);
    Node* goalNode = bestFirstSearch(nodes[0], 5);
    if (goalNode) {
        cout << "Path to the goal: " << endl;
        reconstructPath(goalNode);
        cout << endl;
    } else {
        cout << "No path to the goal found" << endl;
    }
    return 0;
}
RESULT:
EXP-6
CODE-
% Create a FIS object
fis = newfis('example');
% Add an input and an output variable, each with membership functions
% (membership functions added so plotmf has something to display)
fis = addvar(fis,'input','x',[0 10]);
fis = addmf(fis,'input',1,'low','trimf',[0 0 5]);
fis = addmf(fis,'input',1,'high','trimf',[5 10 10]);
fis = addvar(fis,'output','y',[0 1]);
fis = addmf(fis,'output',1,'small','trimf',[0 0 0.5]);
fis = addmf(fis,'output',1,'large','trimf',[0.5 1 1]);
% Plot the membership functions
subplot(2,1,1)
plotmf(fis,'input',1)
xlabel('x')
ylabel('Membership degree')
subplot(2,1,2)
plotmf(fis,'output',1)
xlabel('y')
ylabel('Membership degree')
OUTPUT:
EXP-7
Aim
Implement temperature control system (three levels) using a fuzzy inference system
Theory
Fuzzy logic is a mathematical paradigm that addresses uncertainty and imprecision by allowing
variables to take on values in a graded manner between 0 and 1. The Fuzzy Logic Toolbox in
MATLAB provides a powerful set of tools for implementing fuzzy logic systems, enabling the
modeling of complex systems in the presence of vague or incomplete information. In the context
of a temperature control system, the toolbox allows the definition of fuzzy sets for input variables,
such as current temperature and rate of temperature change, each characterized by membership
functions that assign degrees of belonging. The design process involves creating linguistic rules
that capture the relationship between these inputs and the desired output, such as heater power.
The toolbox facilitates the creation of a Fuzzy Inference System (FIS), consisting of a fuzzification
interface, rule base, inference engine, and defuzzification interface. The resulting FIS can then be
simulated to obtain crisp output values based on fuzzy input information. This capability is
particularly valuable in applications like temperature control systems, where uncertainties
and variations are inherent, making traditional control approaches less effective.
The Fuzzy Logic Toolbox's user-friendly interface and extensive functionality
empower engineers and researchers to design adaptive and robust control systems
that can accommodate the inherent complexities of real-world scenarios.
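The graded-membership idea can also be sketched outside MATLAB; the following Python sketch uses made-up triangular sets and a simple weighted-average (Sugeno-style) defuzzification, not the toolbox's Mamdani pipeline:

```python
def trimf(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Made-up fuzzy sets for temperature (degrees C)
def mu_cold(t): return trimf(t, -10, 0, 20)
def mu_warm(t): return trimf(t, 10, 25, 40)
def mu_hot(t):  return trimf(t, 30, 50, 70)

t = 15.0
print(mu_cold(t), mu_warm(t), mu_hot(t))  # graded memberships, not 0/1

# Singleton heater-power outputs (%) weighted by the rule strengths
powers = {'high': 90, 'medium': 50, 'low': 10}
w = {'high': mu_cold(t), 'medium': mu_warm(t), 'low': mu_hot(t)}
crisp = sum(w[k] * powers[k] for k in w) / sum(w.values())
print(round(crisp, 1))  # 67.1
```

At 15 °C the input is partly "cold" (0.25) and partly "warm" (0.33), and both rules contribute to the crisp output in proportion to their strengths.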
Code:
% Fuzzy Inference System for Temperature Control with User Input
temperature = 0:1:100;  % temperature universe of discourse
fan_speed = 0:1:100;    % fan speed universe of discourse
cold = gaussmf(temperature, [15 0]);   % temperature membership functions
warm = gaussmf(temperature, [15 50]);
hot = gaussmf(temperature, [15 100]);
medium = gaussmf(fan_speed, [20 50]);  % Membership function for medium fan speed
high = gaussmf(fan_speed, [20 100]);   % Membership function for high fan speed
% Evaluate membership degrees for the given temperature input
input_temp = input('Enter the current temperature: ');
degree_cold = interp1(temperature, cold, input_temp);
degree_warm = interp1(temperature, warm, input_temp);
% Apply the rule IF temperature is warm THEN fan speed is high
rule2 = min(degree_warm, high);
% Combine rules and plot the temperature membership functions
subplot(2, 1, 1);
plot(temperature, cold, 'b', temperature, warm, 'g', temperature, hot, 'r');
RESULT:
EXP-8
1. Fuzzy Logic Controller (FLC): Fuzzy logic handles uncertainties in the control system by using linguistic
variables like "low," "medium," and "high" speed. Membership functions define the degree of membership of input
and output variables in these linguistic terms. Fuzzy rules, often derived from expert knowledge, govern how input
variables relate to the output variable, allowing for adaptive and flexible control.
2. PID Controller: The PID controller has three components: Proportional (P), Integral (I), and Derivative (D). The P
term is proportional to the speed error, the I term considers the accumulation of past errors, and the D term anticipates future
behaviour based on the rate of change of the error. PID control provides a balance between stability and responsiveness,
reducing steady-state error and improving the system's response to disturbances.
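The three PID terms map directly onto one update equation; a minimal discrete-time sketch in Python (the gains kp, ki, kd and the error samples are arbitrary illustrative values):

```python
def pid_step(error, state, kp, ki, kd):
    """One discrete PID update; state carries (error_sum, previous_error)."""
    error_sum, previous_error = state
    error_sum += error                       # I term accumulates past errors
    output = kp * error + ki * error_sum + kd * (error - previous_error)
    return output, (error_sum, error)

# Two consecutive error samples through the controller
state = (0.0, 0.0)
out1, state = pid_step(2.0, state, kp=1.0, ki=0.1, kd=0.05)
out2, state = pid_step(1.0, state, kp=1.0, ki=0.1, kd=0.05)
print(out1, out2)  # out1 ≈ 2.3, out2 ≈ 1.25
```

The second output is smaller both because the error shrank (P) and because the error is falling (negative D term), while the integral term keeps growing.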
% Define membership functions for output variable
fis = addMF(fis, 'Change in Output', 'trimf', [-10 -5 0]);
fis = addMF(fis, 'Change in Output', 'trimf', [-5 0 5]);
fis = addMF(fis, 'Change in Output', 'trimf', [0 5 10]);
% Define rules
ruleList = [
    1 1 1 1 1;
    2 2 1 1 1;
    3 3 1 1 1;
];
fis = addRule(fis, ruleList);
writeFIS(fis, 'fuzzy_motor_control.fis');
disp('Fuzzy Logic Controller file (fuzzy_motor_control.fis) created successfully.');
% DC Motor Transfer Function (example)
numerator = [1];
time = 0:0.1:10; % Simulation time
% PID Controller state
error_sum = 0;
previous_error = 0;
for i = 2:length(time)
    % Calculate error
    speed_error = reference_speed - fuzzy_speed(i-1);
    % Clip input values to the expected range
    speed_error = max(min(speed_error, 10), -10);
    delta_error = speed_error - fuzzy_speed(i-1);
    fuzzy_speed(i) = fuzzy_speed_vector(end); % latest fuzzy inference result
    % PID Controller
    pid_output(i) = kp * speed_error + ki * error_sum + kd * (speed_error - previous_error);
    % Update error sum and previous error for PID controller
    error_sum = error_sum + speed_error;
    previous_error = speed_error;
end
xlabel('Time');
ylabel('Output');
OUTPUT:
EXP-9
THEORY:
The AND gate takes two binary inputs (0 or 1) and produces one binary output. The output is 1 only
if both inputs are 1, and 0 otherwise.
Architecture:
A simple neural network for the AND gate can consist of two input neurons, a single hidden neuron, and an output
neuron. The input neurons receive the binary inputs, and their outputs are weighted and summed in the hidden
neuron. The activation function of the hidden neuron is usually sigmoid, which squashes the input to a value between
0 and 1. The output neuron receives the output of the hidden neuron, and its output is multiplied by the weight of the
connection between the hidden neuron and the output neuron. The activation function of the output neuron is usually
sigmoid or a step function, which determines the output as 0 or 1 based on the weighted sum.
Training:
The neural network is trained using a dataset of input-output pairs. For the AND gate, the dataset
would consist of four pairs of inputs and outputs: (0, 0), (0, 1), (1, 0), and (1, 1). The goal of training
is to adjust the weights in the network so that the output for each input pair is as close as possible
to the expected output. This can be done using an algorithm like gradient descent, which iteratively
adjusts the weights based on the error between the network's output and the expected output.
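The gradient-descent training described above can be sketched for a single sigmoid neuron in Python (the learning rate and epoch count are arbitrary choices):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# AND-gate truth table
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0
lr = 0.5                                   # arbitrary learning rate
for _ in range(5000):                      # arbitrary epoch count
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (y - target) * y * (1 - y)  # squared-error gradient w.r.t. z
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

outputs = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(outputs)  # [0, 0, 0, 1]
```

AND is linearly separable, so a single neuron suffices; the loop drives the bias strongly negative and both weights positive, so only the (1, 1) input crosses the 0.5 threshold.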
Interpretation:
Once the network is trained, it can be used to evaluate new input pairs. If the network's output for a new input pair is
close to 1, then the network is confident that the correct output for that input pair is 1. Conversely, if the network's output
for a new input pair is close to 0, then the network is confident that the correct output for that input pair is 0.
OR Gate Implementation
The OR gate takes two binary inputs and produces one binary output. The output is 1 if either or both inputs are 1, and
0 otherwise.
Architecture:
The neural network for the OR gate can be similar to the AND gate architecture. The main difference is in the
activation function of the hidden neuron. Instead of using sigmoid, a linear activation function can be used. This allows the
hidden neuron to output any value between 0 and 1, which makes it more suitable for representing the OR function.
Training:
The neural network is trained using the same dataset as the AND gate, but the goal of training is slightly different. For the
OR gate, the goal is to adjust the weights in the network so that the output is greater than or equal to 0.5 for any input pair
with at least one input equal to 1. This is because the OR function produces an output of 1 whenever either or both inputs are 1.
Interpretation:
Once the network is trained, it can be used to evaluate new input pairs. If the network's output for a
new input pair is greater than or equal to 0.5, then the network is confident that the correct output
for that input pair is 1. Conversely, if the network's output for a new input pair is less than 0.5, then
the network is confident that the correct output for that input pair is 0.
CODE:
AND GATE:
% Input data
X = [0 0; 0 1; 1 0; 1 1];
% Output data
Y = [0; 0; 0; 1];
% Step 2: Create a neural network
net = feedforwardnet(5); % Create a feedforward neural network with 5 hidden neurons
net = train(net, X', Y'); % Train the network using the dataset
testX = [0 0; 0 1; 1 0; 1 1];
output = net(testX');
disp([testX output']);
% Check if the neural network has learned the AND gate correctly
if isequal(round(output'), Y)
    disp('The network has learned the AND gate correctly.');
end
OR GATE:
% Define the input data (X) and the corresponding target output (T)
X = [0 0; 0 1; 1 0; 1 1];
T = [0; 1; 1; 1];
% Create and train the network, then evaluate it on the inputs
net = feedforwardnet(5);
net = train(net, X', T');
outputs = net(X');
for i = 1:length(outputs)
    fprintf('Input: [%d, %d] -> Output: %f\n', X(i, 1), X(i, 2), outputs(i));
end
RESULTS:
AND GATE:
OR GATE:
EXP-10
Theory:
Implementing the XOR gate using neural networks involves creating a network architecture, training
the network with appropriate data, and interpreting the network's output.
Network Architecture:
A simple three-layer neural network can be used to implement the XOR gate. The network consists of two
input neurons, two hidden neurons, and one output neuron. The input neurons receive the binary inputs (0 or
1), and their outputs are weighted and summed in the hidden neurons. The activation function of the hidden
neurons is usually sigmoid, which squashes the input to a value between 0 and 1. The outputs of the hidden
neurons are weighted and summed in the output neuron, and its activation function is also typically sigmoid.
Training Data
This training data is used to adjust the weights of the connections between the neurons in the
network. The goal of training is to minimize the error between the network's output and the
expected output for each input case. This can be done using an algorithm like gradient descent,
which iteratively adjusts the weights in the direction of the steepest descent of the error function.
Training Process:
The training process involves repeatedly feeding the network with the four input cases and adjus ng the
weights based on the error between the network's output and the expected output. This process is con nued
un l the error is sufficiently low, indica ng that the network has learned to correctly classify the XOR gate.
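To see why two hidden neurons suffice, the following Python sketch evaluates a 2-2-1 network with hand-picked (not trained) weights that realize XOR; backpropagation would arrive at weights playing the same roles:

```python
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def xor_net(x1, x2):
    # hand-picked weights: h1 acts as OR, h2 as AND, output as "OR and not AND"
    h1 = sig(10 * x1 + 10 * x2 - 5)    # close to 1 when x1 OR x2
    h2 = sig(10 * x1 + 10 * x2 - 15)   # close to 1 when x1 AND x2
    return round(sig(10 * h1 - 20 * h2 - 5))

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The hidden layer is essential: no single-neuron (linear) decision boundary can separate the XOR outputs, which is exactly what makes this gate the standard test case for multilayer networks.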
Interpreting Output:
Once trained, the neural network can be used to evaluate new input pairs. If the network's output is closer to 1 than
to 0, then the network is confident that the correct output for that input pair is 1. Conversely, if the network's output
is closer to 0 than to 1, then the network is confident that the correct output for that input pair is 0.
Utilizing neural networks to implement logic gates like XOR demonstrates their ability to learn complex
relationships between inputs and outputs. This capability makes neural networks valuable tools for
various applications, including pattern recognition, machine learning, and artificial intelligence.
CODE:
input = [0 0; 0 1; 1 0; 1 1]';
output = [0 1 1 0];
% Create and train a network with two hidden neurons
net = feedforwardnet(2);
net = train(net, input, output);
testInput = [0 0; 0 1; 1 0; 1 1]';
predictedOutput = net(testInput);
disp('Test Input:');
disp(testInput);
disp('Predicted Output:');
disp(predictedOutput');
RESULT:
Exp-6
AIM-Fuzzy logic system for controlling the power of a heater based
on the ambient temperature
PROCEDURE-
1. Define Membership Functions:
Create triangular membership functions for both temperature and heater power, each divided into three levels: Low, Medium, and High.
2. Visualize Membership Functions:
Plot the membership functions for temperature and heater power using subplots.
CODE-
temperature = 0:0.1:100;
heater_power = 0:5:100;
% Triangular membership functions: Low, Medium, High
temp_low = trimf(temperature, [0 0 50]);
temp_medium = trimf(temperature, [25 50 75]);
temp_high = trimf(temperature, [50 100 100]);
power_low = trimf(heater_power, [0 0 50]);
power_medium = trimf(heater_power, [25 50 75]);
power_high = trimf(heater_power, [50 100 100]);
figure;
subplot(2, 1, 1);
plot(temperature, [temp_low; temp_medium; temp_high]);
title('Temperature Membership Functions');
legend('Low', 'Medium', 'High');
subplot(2, 1, 2);
plot(heater_power, [power_low; power_medium; power_high]);
title('Heater Power Membership Functions');
legend('Low', 'Medium', 'High');
% Build the FIS with the same membership functions
fis = newfis('heater');
fis = addvar(fis, 'input', 'temperature', [0 100]);
fis = addmf(fis, 'input', 1, 'Low', 'trimf', [0 0 50]);
fis = addmf(fis, 'input', 1, 'Medium', 'trimf', [25 50 75]);
fis = addmf(fis, 'input', 1, 'High', 'trimf', [50 100 100]);
fis = addvar(fis, 'output', 'power', [0 100]);
fis = addmf(fis, 'output', 1, 'Low', 'trimf', [0 0 50]);
fis = addmf(fis, 'output', 1, 'Medium', 'trimf', [25 50 75]);
fis = addmf(fis, 'output', 1, 'High', 'trimf', [50 100 100]);
% Rules: low temperature -> high power, medium -> medium, high -> low
fis = addrule(fis, [1 3 1 1; 2 2 1 1; 3 1 1 1]);
% Define the input temperature and perform fuzzy inference
input_temp = 75;
output_power = evalfis(input_temp, fis);
subplot(2, 1, 1);
plotmf(fis, 'input', 1);
title('Input Temperature');
subplot(2, 1, 2);
plotmf(fis, 'output', 1);
title('Heater Power');