
FOUNDATION OF ARTIFICIAL INTELLIGENCE-UCS541-2324ODDSEM

LAB REPORT
SUBMITTED BY:
ARUN BHATNAGAR
102169006
3-EEC-3

SUBMITTED TO:
DR. VENKATA KARTEEK YANUMULA

EXPERIMENT 1
AIM- To implement Breadth First Search (BFS)

THEORY:
Breadth-first search is one of the simplest algorithms for searching a graph and the
archetype for many important graph algorithms. Prim's minimum-spanning-tree algorithm
and Dijkstra's single-source shortest-paths algorithm use ideas similar to those in breadth-
first search. Given a graph G and a distinguished source vertex s, breadth-first search
systematically explores the edges of G to "discover" every vertex that is reachable from
s. It computes the distance from s to each reachable vertex, where the distance to a
vertex v equals the smallest number of edges needed to go from s to v. Breadth-first
search also produces a "breadth-first tree" with root s that contains all reachable
vertices. For any vertex v reachable from s, the simple path in the breadth-first tree from
s to v corresponds to a shortest path from s to v in G, that is, a path containing the
smallest number of edges. The algorithm works on both directed and undirected graphs.
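
As an illustration of the distance computation described above, the following sketch records the edge-count distance from the source to every vertex while performing the same queue-based traversal used in the experiment below. It is not part of the original lab code; the function name bfsDistances and the use of -1 for unreachable vertices are illustrative choices.

#include <iostream>
#include <vector>
#include <queue>

using namespace std;

// Returns dist[v] = smallest number of edges from 'start' to v, or -1 if v is unreachable.
vector<int> bfsDistances(const vector<vector<int>>& adj, int start) {
    vector<int> dist(adj.size(), -1);
    queue<int> q;
    dist[start] = 0;
    q.push(start);
    while (!q.empty()) {
        int u = q.front();
        q.pop();
        for (int v : adj[u]) {
            if (dist[v] == -1) {       // first time v is discovered
                dist[v] = dist[u] + 1; // one edge farther than the vertex that discovered it
                q.push(v);
            }
        }
    }
    return dist;
}

int main() {
    // Same example graph as in the code below: edges 0-1, 0-2, 1-3, 1-4, 2-5, 2-6
    vector<vector<int>> adj = {{1, 2}, {3, 4}, {5, 6}, {}, {}, {}, {}};
    vector<int> dist = bfsDistances(adj, 0);
    for (int v = 0; v < (int)dist.size(); ++v)
        cout << "distance(0, " << v << ") = " << dist[v] << endl;
    return 0;
}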

PROCEDURE:
Search and traversal imply visiting all the nodes of a graph. BFS is an iterative, queue-based
algorithm for visiting all the vertices or nodes of a graph or a tree. A graph, unlike a tree, can
have cycles, which may result in visiting the same vertex again. To avoid processing a node
more than once, the vertices are divided into two categories: 1. Visited 2. Not visited.
The algorithm marks each vertex as visited while avoiding cycles. It works as follows:
1. Start by putting the root (source) vertex at the back of a queue.
2. Take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the back of the queue.
4. Keep repeating steps 2 and 3 until the queue is empty.

CODE:
#include <iostream>
#include <vector>
#include <queue>

using namespace std;

class Graph {
int V;
vector<vector<int>> adj;

public:
Graph(int V) : V(V), adj(V) {}

void addEdge(int u, int v) {


adj[u].push_back(v);
}

void BFS(int startVertex) {


vector<bool> visited(V, false);
queue<int> q;

visited[startVertex] = true;
q.push(startVertex);

while (!q.empty()) {
int current = q.front();
cout << current << " ";
q.pop();

for (int neighbor : adj[current]) {

if (!visited[neighbor]) {
visited[neighbor] = true;
q.push(neighbor);
}
}
}
}
};

int main() {
Graph g(7);

g.addEdge(0, 1);
g.addEdge(0, 2);
g.addEdge(1, 3);
g.addEdge(1, 4);
g.addEdge(2, 5);
g.addEdge(2, 6);

cout << "Breadth-First Traversal starting from vertex 0: ";
g.BFS(0);

return 0;
}
OUTPUT:
Breadth-First Traversal starting from vertex 0: 0 1 2 3 4 5 6

REPRESENTATION-
EXP-2
Aim-
To implement Depth First Search (DFS)
Theory-
There are two primary graph traversal algorithms: breadth-first search (BFS) and
depth-first search (DFS). For certain problems, it makes absolutely no difference
which you use, but in others the distinction is crucial. The difference between BFS
and DFS lies in the order in which they explore vertices. Depth-first search has a neat
recursive implementation, which eliminates the need to explicitly use a stack.
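
As a companion to the stack-based implementation given below, the following minimal sketch shows the recursive formulation mentioned above. It assumes the same adjacency-list representation; the function name dfsRecursive is an illustrative choice and not part of the lab code. Its visiting order can differ from the explicit-stack version, since the recursion explores neighbours in the order they are stored.

#include <iostream>
#include <vector>

using namespace std;

// Visit 'u', then recursively visit each unvisited neighbour.
// The call stack plays the role of the explicit stack used in the iterative version below.
void dfsRecursive(const vector<vector<int>>& adj, int u, vector<bool>& visited) {
    visited[u] = true;
    cout << u << " ";
    for (int v : adj[u]) {
        if (!visited[v]) {
            dfsRecursive(adj, v, visited);
        }
    }
}

int main() {
    // Same example graph as in the code below: edges 0-1, 0-2, 1-3, 1-4, 2-5, 2-6
    vector<vector<int>> adj = {{1, 2}, {3, 4}, {5, 6}, {}, {}, {}, {}};
    vector<bool> visited(adj.size(), false);
    dfsRecursive(adj, 0, visited);
    cout << endl;
    return 0;
}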
Procedure-
Search and traversal imply visiting all the nodes of a graph. DFS is a recursive algorithm for
searching all the vertices or nodes of a graph or a tree. A graph, unlike a tree, can have cycles,
which may result in visiting the same vertex again. To avoid processing a node more than once,
the vertices are divided into two categories: 1. Visited 2. Not visited.
The algorithm marks each vertex as visited while avoiding cycles. It works as follows:
1. Start by calling the DFS function on the root of the graph's vertices.
2. Add the vertex to the visited list.
3. Call DFS on all the neighbours of the vertex that are not in the visited list.
4. Step 3 repeats itself until all the vertices are visited.
Consider the graph in Fig. 1, assuming that node-2 is the root node. Design your own graph and
write a code for implementing DFS using the pseudo code given below. Check the order of
processing the nodes in comparison with the BFS algorithm.
CODE-
#include <iostream>
#include <vector>
#include <stack>

using namespace std;

class Graph {
int V;
vector<vector<int>> adj;

public:
Graph(int V) : V(V), adj(V) {}

void addEdge(int u, int v) {


adj[u].push_back(v);
}

void DFS(int startVertex) {


vector<bool> visited(V, false);
stack<int> s;

visited[startVertex] = true;
s.push(startVertex);
while (!s.empty()) {
int current = s.top();
cout << current << " ";
s.pop();

for (int neighbor : adj[current]) {


if (!visited[neighbor]) {
visited[neighbor] = true;
s.push(neighbor);
}
}
}
}
};
int main() {
Graph g(7);

g.addEdge(0, 1);
g.addEdge(0, 2);
g.addEdge(1, 3);
g.addEdge(1, 4);
g.addEdge(2, 5);
g.addEdge(2, 6);

cout << "Depth-First Traversal starting from vertex 0: ";
g.DFS(0);

return 0;
}
OUTPUT-
Depth-First Traversal starting from vertex 0: 0 2 6 5 1 4 3

REPRESENTATION
EXP-3
Aim –
To implement Depth First Iterative Deepening Search (DFIDS) using
C/CPP/Java/Python/Matlab code. To compare the performance with BFS and
DFS algorithms in terms of number of steps needed to reach goal state.
Theory-
DFIDS is performed as a form of repetitive DFS moving to a successively deeper depth with
each iteration, i.e. a repetitive Depth Limited First Search (DLFS) with incremental depths. It
begins by performing a DFS to a depth of one. If no goal has been found, it then discards all
nodes generated and starts over, doing a search to a depth of two. If no goal has been found, the
above step is repeated with a depth of three, and so on, until the goal state is found or some
maximum depth is reached. Since DFIDS expands all nodes at a given depth before expanding
nodes at a greater depth, it is guaranteed to find a shortest-path solution (in terms of number of
edges). It has been shown to be asymptotically optimal over DFS and BFS in terms of time and
space complexity.
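
The idea can be summarized as a depth-limited DFS wrapped in a loop that raises the depth cutoff until the goal is found. The sketch below is illustrative only; the names depthLimitedDFS and iterativeDeepening are not from the lab code. It assumes an adjacency-list graph and returns the first depth limit at which the goal is reached, which equals the length of a shortest path.

#include <iostream>
#include <vector>

using namespace std;

// Depth-limited DFS: returns true if 'goal' can be reached from 'u' using at most 'limit' edges.
bool depthLimitedDFS(const vector<vector<int>>& adj, int u, int goal, int limit) {
    if (u == goal) return true;
    if (limit == 0) return false;
    for (int v : adj[u]) {
        if (depthLimitedDFS(adj, v, goal, limit - 1)) return true;
    }
    return false;
}

// Iterative deepening: repeat the depth-limited search with limits 0, 1, 2, ...
// The first limit at which the goal is found equals the length of a shortest path.
int iterativeDeepening(const vector<vector<int>>& adj, int start, int goal, int maxDepth) {
    for (int limit = 0; limit <= maxDepth; ++limit) {
        if (depthLimitedDFS(adj, start, goal, limit)) return limit;
    }
    return -1; // not found within maxDepth
}

int main() {
    // Same undirected example graph as in the code below: edges 0-1, 0-2, 1-3, 2-4
    vector<vector<int>> adj = {{1, 2}, {0, 3}, {0, 4}, {1}, {2}};
    cout << "DFIDS steps from 0 to 4: " << iterativeDeepening(adj, 0, 4, 10) << endl;
    return 0;
}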

Procedure –
Search and traversal imply visiting all the nodes of a graph, or continuing until a goal node
is found. DFIDS is also a recursive algorithm for searching all the vertices or
nodes of a graph or a tree. Consider the graph given in Fig. 1, assuming that
node-2 is the root node, and find the number of steps needed to reach node 7 using the
DFIDS algorithm. Compare the results with the BFS and DFS algorithms.

CODE-
#include <iostream>
#include <vector>
#include <stack>
#include <queue>

using namespace std;

const int MAX_DEPTH = 10; // Maximum depth for DFIDS

class Node {
public:
int state; // Current state
int parent; // Parent state
int depth; // Depth of the node in the tree

Node(int state, int parent, int depth) {


this->state = state;
this->parent = parent;
this->depth = depth;
}
};

class Graph {
public:
int V; // Number of vertices
vector<vector<int>> adj; // Adjacency list representation

Graph(int V) {
this->V = V;
adj.resize(V);
}
void addEdge(int u, int v) {
adj[u].push_back(v);
adj[v].push_back(u); // Undirected graph
}

// DFS implementation
void DFS(int start, int goal, int& steps) {
stack<Node> s;
s.push(Node(start, -1, 0));

while (!s.empty()) {
Node curr = s.top();
s.pop();

if (curr.state == goal) {
steps = curr.depth;
return;
}

for (int next : adj[curr.state]) {


if (next != curr.parent) {
s.push(Node(next, curr.state, curr.depth + 1));

}
}
}

steps = -1; // Goal not reachable


}
// BFS implementation
void BFS(int start, int goal, int& steps) {
queue<Node> q;
q.push(Node(start, -1, 0));

while (!q.empty()) {
Node curr = q.front();
q.pop();

if (curr.state == goal) {
steps = curr.depth;
return;
}

for (int next : adj[curr.state]) {


if (next != curr.parent) {
q.push(Node(next, curr.state, curr.depth + 1));

}
}
}

steps = -1; // Goal not reachable


}

// DFIDS implementation
void DFIDS(int start, int goal, int& steps) {
for (int depth = 0; depth <= MAX_DEPTH; depth++) {
steps = -1;
DFS(start, goal, steps, depth);

if (steps != -1) {
return;
}
}

steps = -1; // Goal not reachable


}

// DFS with limited depth (depth-limited search used by DFIDS)
void DFS(int start, int goal, int& steps, int limit, int depth = 0) {
if (steps != -1) {
return;
}

if (start == goal) {
steps = depth;
return;
}

if (limit == 0) {
return;
}

for (int next : adj[start]) {
DFS(next, goal, steps, limit - 1, depth + 1);
}
}
};

int main() {
// Create a graph
Graph g(5);
g.addEdge(0, 1);
g.addEdge(0, 2);
g.addEdge(1, 3);
g.addEdge(2, 4);

int start = 0;
int goal = 4;

int stepsBFS, stepsDFS, stepsDFIDS;

// Perform BFS
g.BFS(start, goal, stepsBFS);
cout << "BFS steps: " << stepsBFS << endl;

// Perform DFS
g.DFS(start, goal, stepsDFS);
cout << "DFS steps: " << stepsDFS << endl;

// Perform DFIDS
g.DFIDS(start, goal, stepsDFIDS);
cout << "DFIDS steps: " << stepsDFIDS << endl;

return 0;
}
OUTPUT
BFS steps:3
DFIDS steps:2

EXP-4
Aim-
To implement hill climbing search using C/CPP/Java/Python/Matlab code. To
compare the performance with BFS, DFS, DFIDS algorithms in terms of
number of steps needed to reach goal state.
Theory-
When more information than the initial state, the operators, and the goal test is available, the
search space can usually be constrained, resulting in informed searches such as Hill Climbing,
Best First, Branch and Bound, A*, etc. They often depend on the use of heuristic information
expressed in the form of a heuristic evaluation function f(n, g), a function of the nodes n and/or
the goals g. At each point in the search path, the successor node that appears to lead most
quickly to the top of the hill (the goal) is selected for exploration.

Procedure –
It is like DFS in that the most promising child is selected for expansion. This process continues
from node to node, with previously expanded nodes being discarded. The hill climbing algorithm
always moves to the neighbour that appears closest to the goal state, as sketched in the example
below. It is much more efficient than blind searches when an informative, reliable function is
available to guide the search to a global goal.
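
A minimal sketch of this node-to-node greedy step is given below. It assumes a graph stored as an adjacency list together with a heuristic value h for every node; the function name hillClimb and the small example graph and heuristic values in main are illustrative choices, not part of the lab code (the lab code that follows instead climbs a continuous one-dimensional objective).

#include <iostream>
#include <vector>

using namespace std;

// Greedy hill climbing over a graph: from 'start', repeatedly move to the neighbour with the
// best (lowest) heuristic value h, stopping at the goal or when no neighbour improves on the
// current node. Returns the number of steps taken, or -1 if stuck in a local optimum/plateau.
int hillClimb(const vector<vector<int>>& adj, const vector<double>& h, int start, int goal) {
    int current = start;
    int steps = 0;
    while (current != goal) {
        int best = -1;
        for (int v : adj[current]) {
            if (best == -1 || h[v] < h[best]) best = v;
        }
        if (best == -1 || h[best] >= h[current]) return -1; // no improving neighbour
        current = best;
        ++steps;
    }
    return steps;
}

int main() {
    // Illustrative graph with edges 0-1, 0-2, 1-3, 2-3 and goal node 3; h is an invented
    // estimate of distance-to-goal, chosen only to demonstrate the greedy moves.
    vector<vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2}};
    vector<double> h = {2.0, 1.0, 1.5, 0.0};
    cout << "Hill climbing steps from 0 to 3: " << hillClimb(adj, h, 0, 3) << endl;
    return 0;
}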

Code-
#include <iostream>
#include <random>
#include <cmath>
#include <cstdlib>
#include <ctime>

using namespace std;

// Define the objective function (you can replace this with your own function)
double objectiveFunction(double x) {
return -x * x; // Maximize -x^2, which peaks at x = 0
}

double hillClimbing(double start, double stepSize, int maxIterations) {


double current = start;

for (int i = 0; i < maxIterations; i++) {


double neighbor = current + (stepSize * ((rand() % 2000 - 1000) / 1000.0)); // Randomly select a neighbor
double currentObjective = objectiveFunction(current);

double neighborObjective = objectiveFunction(neighbor);


if (neighborObjective > currentObjective) {
current = neighbor; // Move to the neighbor with a better objective
}
}

return current;
}

int main() {
srand(time(0)); // Initialize random seed

double initialGuess = (rand() % 2000 - 1000) / 1000.0; // Random initial guess between -1 and 1
double stepSize = 0.1; // Step size for neighbor selection
int maxIterations = 1000;

double result = hillClimbing(initialGuess, stepSize, maxIterations);

cout << "Hill Climbing Result: " << result << " Objective: " <<
objectiveFunction(result) << endl;

return 0;
}
Output-
Hill Climbing Result: -4.42761e-17 Objective: -1.96037e-33
EXP 5
AIM: Implementing Best First Search

THEORY:
Best-first search is a type of informed search algorithm that uses a heuristic
function to guide its search. The heuristic function is an estimate of the distance from a
node to the goal node. Best-first search expands the node with the lowest heuristic value first,
which means it explores the paths that appear closest to the goal first. It is an improvement
over uninformed search algorithms, such as breadth-first search, because it does
not explore all possible paths before finding a solution; instead, it focuses on the most
promising paths first. This can make best-first search much more efficient than breadth-first
search, especially for problems with large state spaces. However, best-first search is not always
complete or optimal: it may fail to find a solution, or it may not find the best solution, because
the heuristic function is only an estimate and may not be accurate for all paths.

PROCEDURE:
Best-first search resembles depth-first search by prioritizing the most promising child for
expansion, reminiscent of the steepest ascent in hill climbing. It efficiently navigates
node-to-node, guided by a heuristic function. Similar to hill climbing, best-first search excels in
search efficiency, particularly when aided by a reliable heuristic. However, like hill climbing, it
doesn't guarantee global optimality and relies on the quality of the heuristic for success.

CODE:
#include <iostream>
#include <queue>
#include <vector>
#include <set>

using namespace std;

struct Node {
int id;
int heuristic;
Node* parent = nullptr;
vector<Node*> neighbors;
};

// Order the priority queue by heuristic value, lowest first
struct CompareHeuristic {
bool operator()(Node* a, Node* b) const {
return a->heuristic > b->heuristic;
}
};

bool isGoal(Node* node) {
// Check if the node is the goal node
return node->id == 5;
}

Node* reconstructPath(Node* goalNode) {
// Walk back along the parent pointers and print the path from goal to start
Node* node = goalNode;
while (node) {
cout << node->id << " ";
node = node->parent;
}
cout << endl;
return goalNode;
}

Node* bestFirstSearch(Node* startNode) {
// Initialize priority queue (frontier) and explored set
priority_queue<Node*, vector<Node*>, CompareHeuristic> frontier;
set<Node*> explored;

// Add the start node to the frontier
frontier.push(startNode);

while (!frontier.empty()) {
// Get the node with the lowest heuristic value
Node* currentNode = frontier.top();
frontier.pop();

// Check if the node is already explored
if (explored.count(currentNode)) {
continue;
}

// Check if the node is the goal node
if (isGoal(currentNode)) {
return currentNode;
}

// Mark the node as explored
explored.insert(currentNode);

// Add all unexplored neighbors to the frontier
for (Node* neighbor : currentNode->neighbors) {
if (!explored.count(neighbor)) {
neighbor->heuristic = neighbor->id; // simple heuristic: the node id itself
neighbor->parent = currentNode;
frontier.push(neighbor);
}
}
}

// No solution found
return nullptr;
}

int main() {
// Create the graph
vector<Node*> nodes(6);
for (int i = 0; i < 6; i++) {
nodes[i] = new Node;
nodes[i]->id = i;
nodes[i]->heuristic = i;
}

nodes[0]->neighbors.push_back(nodes[1]);
nodes[0]->neighbors.push_back(nodes[2]);

nodes[1]->neighbors.push_back(nodes[3]);
nodes[1]->neighbors.push_back(nodes[4]);

nodes[2]->neighbors.push_back(nodes[5]);

nodes[3]->neighbors.push_back(nodes[5]);

// Start node is node 0
Node* startNode = nodes[0];

// Perform best-first search
Node* goalNode = bestFirstSearch(startNode);

if (goalNode) {
cout << "Path to the goal: " << endl;
reconstructPath(goalNode);
} else {
cout << "No path to the goal found" << endl;
}

return 0;
}
RESULT:
EXP-6

AIM- INTRODUCTION TO FUZZY


THEORY-
Fuzzy Inference Systems (FIS) are a type of artificial intelligence system that uses fuzzy logic
to represent and reason about uncertainty. Fuzzy logic is a generalization of classical logic that
allows for degrees of truth rather than just true and false. This makes fuzzy logic well-suited for
modeling complex systems that are not easily described with traditional logic. FISs are
typically composed of three main components:
1. Fuzzy membership functions: These are functions that map input values to membership degrees, which represent the degree to which an input value belongs to a fuzzy set.
2. Fuzzy rules: These are rules that describe the relationship between the input and output variables of the FIS. Rules are typically expressed in the form of "IF premise THEN conclusion" statements.
3. Fuzzy inference engine: This is the component that evaluates the fuzzy rules and produces the output of the FIS.

CODE-
% Create a FIS object
fis = newfis('example');

% Add an input variable with three triangular membership functions
fis = addvar(fis,'input','x',[0 10]);

fis = addmf(fis,'input',1,'low','trimf',[0 0 5]);
fis = addmf(fis,'input',1,'medium','trimf',[0 5 10]);
fis = addmf(fis,'input',1,'high','trimf',[5 10 10]);

% Add an output variable with two Gaussian membership functions
fis = addvar(fis,'output','y',[0 1]);

fis = addmf(fis,'output',1,'small','gaussmf',[0.1 0.25]);
fis = addmf(fis,'output',1,'large','gaussmf',[0.1 0.75]);

% Plot the membership functions of the input and output variables

subplot(2,1,1)

plotmf(fis,'input',1)

title('Input variable x')

xlabel('x')

ylabel('Membership degree')
subplot(2,1,2)

plotmf(fis,'output',1)

title('Output variable y')

xlabel('y')

ylabel('Membership degree')

OUTPUT:
EXP-7

Aim
Implement temperature control system (three levels using fuzzy interference system)

Theory
Fuzzy logic is a mathematical paradigm that addresses uncertainty and imprecision by allowing
variables to take on values in a graded manner between 0 and 1. The Fuzzy Logic Toolbox in
MATLAB provides a powerful set of tools for implementing fuzzy logic systems, enabling the
modeling of complex systems in the presence of vague or incomplete information. In the context
of a temperature control system, the toolbox allows the definition of fuzzy sets for input variables,
such as current temperature and rate of temperature change, each characterized by membership
functions that assign degrees of belonging. The design process involves creating linguistic rules
that capture the relationship between these inputs and the desired output, such as heater power.
The toolbox facilitates the creation of a Fuzzy Inference System (FIS), consisting of a fuzzification
interface, rule base, inference engine, and defuzzification interface. The resulting FIS can then be
simulated to obtain crisp output values based on fuzzy input information. This capability is
particularly valuable in applications like temperature control systems, where uncertainties and
variations are inherent, making traditional control approaches less effective.
The Fuzzy Logic Toolbox's user-friendly interface and extensive functionality
empower engineers and researchers to design adaptive and robust control systems
that can accommodate the inherent complexities of real-world scenarios.

Code:
% Fuzzy Inference System for Temperature Control with User Input

% Take user input for temperature

temperature_input = input('Enter the current temperature: ');

% Define fuzzy input variables

temperature = linspace(0, 100, 100); % Input variable: Temperature

cold = gaussmf(temperature, [15 0]); % Membership function for cold temperature
warm = gaussmf(temperature, [15 50]); % Membership function for warm temperature
hot = gaussmf(temperature, [15 100]); % Membership function for hot temperature

% Evaluate membership degrees for the given temperature input
degree_cold = interp1(temperature, cold, temperature_input);
degree_warm = interp1(temperature, warm, temperature_input);
degree_hot = interp1(temperature, hot, temperature_input);

% Display membership degrees

disp(['Membership degree for cold temperature: ' num2str(degree_cold)]);
disp(['Membership degree for warm temperature: ' num2str(degree_warm)]);
disp(['Membership degree for hot temperature: ' num2str(degree_hot)]);

% Define fuzzy output variable

fan_speed = linspace(0, 100, 100); % Output variable: Fan speed


low = gaussmf(fan_speed, [20 0]); % Membership function for low fan speed
medium = gaussmf(fan_speed, [20 50]); % Membership function for medium fan speed
high = gaussmf(fan_speed, [20 100]); % Membership function for high fan speed

% Define fuzzy rules
rule1 = min(degree_cold, low);
rule2 = min(degree_warm, medium);
rule3 = min(degree_hot, high);

% Combine rules

aggregated = max(rule1, max(rule2, rule3));

% Defuzzify to get the final output

output = defuzz(fan_speed, aggregated, 'centroid');


% Plot input and output membership functions
figure;

subplot(2, 1, 1);
plot(temperature, cold, 'b', temperature, warm, 'g', temperature, hot, 'r');

RESULT:
EXP-8

AIM- Implement speed control of DC motor using fuzzy, PID controller


THEORY- The speed control of a DC motor using a combination of fuzzy logic and a PID (Proportional-Integral-
Derivative) controller is a hybrid control approach that aims to benefit from the advantages of both methods.

1. Fuzzy Logic Controller (FLC): Fuzzy logic handles uncertainties in the control system by using linguistic
variables like "low," "medium," and "high" speed. Membership functions define the degree of membership of input
and output variables in these linguistic terms. Fuzzy rules, often derived from expert knowledge, govern how input
variables relate to the output variable, allowing for adaptive and flexible control.
2. PID Controller: The PID controller has three components: Proportional (P), Integral (I), and Derivative (D). The P
term is proportional to the speed error, the I term considers the accumulation of past errors, and the D term anticipates future
behaviour based on the rate of change of the error; the resulting control law is u(t) = Kp*e(t) + Ki*∫e(t)dt + Kd*de(t)/dt.
PID control provides a balance between stability and responsiveness, reducing steady-state error and improving the
system's response to disturbances.

% Define Fuzzy Logic Controller
fis = mamfis;

% Define input variables

fis = addInput(fis, [-10 10], 'Name', 'Error');

fis = addInput(fis, [-5 5], 'Name', 'Change in Error');

% Define output variable

fis = addOutput(fis, [-10 10], 'Name', 'Change in Output');


% Define membership functions for input variables
fis = addMF(fis, 'Error', 'trimf', [-10 -5 0]);

fis = addMF(fis, 'Error', 'trimf', [-5 0 5]);


fis = addMF(fis, 'Error', 'trimf', [0 5 10]);

fis = addMF(fis, 'Change in Error', 'trimf', [-5 -2 0]);

fis = addMF(fis, 'Change in Error', 'trimf', [-2 0 2]);

fis = addMF(fis, 'Change in Error', 'trimf', [0 2 5]);

% Define membership functions for output variable
fis = addMF(fis, 'Change in Output', 'trimf', [-10 -5 0]);
fis = addMF(fis, 'Change in Output', 'trimf', [-5 0 5]);
fis = addMF(fis, 'Change in Output', 'trimf', [0 5 10]);

% Define rules

ruleList = [
1 1 1 1 1;
2 2 1 1 1;
3 3 1 1 1;
];

fis = addRule(fis, ruleList);

% Save FIS file

writeFIS(fis, 'fuzzy_motor_control.fis');
disp('Fuzzy Logic Controller file (fuzzy_motor_control.fis) created successfully.');
% DC Motor Transfer Function (example)
numerator = [1];
denominator = [1, 5, 6]; % Example transfer function: 1/(s^2 + 5s + 6)

sys = tf(numerator, denominator);

% Parameters for simulation
time = 0:0.1:10; % Simulation time
reference_speed = 100; % Reference speed (desired speed)

% Fuzzy Logic Controller
fuzzyController = readfis('fuzzy_motor_control.fis'); % Load the fuzzy inference system
fuzzy_output = zeros(size(time));
fuzzy_speed = zeros(size(time));

% PID Controller
kp = 1; % Proportional gain
ki = 0.1; % Integral gain
kd = 0.01; % Derivative gain

pid_output = zeros(size(time));
error_sum = 0;
previous_error = 0;

% Simulation loop
for i = 2:length(time)

% Calculate error
speed_error = reference_speed - fuzzy_speed(i-1);

% Clip input values to the expected range
speed_error = max(min(speed_error, 10), -10);
delta_error = speed_error - fuzzy_speed(i-1);
delta_error = max(min(delta_error, 5), -5);

% Fuzzy Logic Controller

input_values = [speed_error, delta_error];

fuzzy_output(i) = evalfis(fuzzyController, input_values);

% Update motor speed using fuzzy output
fuzzy_speed_vector = lsim(sys, [fuzzy_output(i-1); fuzzy_output(i)], [time(i-1), time(i)]);

% Extract the last value of the simulated vector
fuzzy_speed(i) = fuzzy_speed_vector(end);

% PID Controller
pid_output(i) = kp * speed_error + ki * error_sum + kd * (speed_error - previous_error);

% Update motor speed using PID output
pid_speed_vector = lsim(sys, [pid_output(i-1); pid_output(i)], [time(i-1), time(i)]);

% Extract the last value of the simulated vector
pid_speed = pid_speed_vector(end);

% Update error sum and previous error for PID controller
error_sum = error_sum + speed_error;
previous_error = speed_error;
end

% Plot results
figure;

subplot(3,1,1);
plot(time, fuzzy_output, 'r', 'LineWidth', 2);
title('Fuzzy Logic Controller Output');
xlabel('Time');
ylabel('Output');

OUTPUT:
EXP-9

AIM- Implementing OR and AND gate using neural networks

THEORY:

AND Gate Implementation

The AND gate takes two binary inputs (0 or 1) and produces one binary output. The output is 1 only
if both inputs are 1, and 0 otherwise.

Architecture:

A simple neural network for the AND gate can consist of two input neurons, a single hidden neuron, and an output
neuron. The input neurons receive the binary inputs, and their outputs are weighted and summed in the hidden
neuron. The activation function of the hidden neuron is usually sigmoid, which squashes the input to a value between
0 and 1. The output neuron receives the output of the hidden neuron, and its output is multiplied by the weight of the
connection between the hidden neuron and the output neuron. The activation function of the output neuron is usually
sigmoid or a step function, which determines the output as 0 or 1 based on the weighted sum. (In fact, because AND is
linearly separable, a single neuron with weights w1 = w2 = 1 and a threshold of 1.5 already realizes it: the weighted sum
exceeds the threshold only when both inputs are 1.)

Training:

The neural network is trained using a dataset of input-output pairs. For the AND gate, the dataset
would consist of four pairs of inputs and outputs: (0, 0), (0, 1), (1, 0), and (1, 1). The goal of training
is to adjust the weights in the network so that the output for each input pair is as close as possible
to the expected output. This can be done using an algorithm like gradient descent, which iteratively
adjusts the weights based on the error between the network's output and the expected output.

Interpretation:

Once the network is trained, it can be used to evaluate new input pairs. If the network's output for a new input pair is
close to 1, then the network is confident that the correct output for that input pair is 1. Conversely, if the network's output
for a new input pair is close to 0, then the network is confident that the correct output for that input pair is 0.
OR Gate Implementation

The OR gate takes two binary inputs and produces one binary output. The output is 1 if either or both inputs are 1, and
0 otherwise.

Architecture:

The neural network for the OR gate can be similar to the AND gate architecture. The main difference is in the
activation function of the hidden neuron. Instead of using sigmoid, a linear activation function can be used. This allows
the hidden neuron to output any value between 0 and 1, which makes it more suitable for representing the OR function.
(Like AND, OR is linearly separable: a single neuron with weights w1 = w2 = 1 and a threshold of 0.5 realizes it.)

Training:

The neural network is trained using the same dataset as the AND gate, but the goal of training is slightly different. For the
OR gate, the goal is to adjust the weights in the network so that the output is at least 0.5 whenever either or both inputs
are 1, and below 0.5 for the (0, 0) input. This is because the OR function produces an output of 1 whenever either or both
inputs are 1.

Interpretation:

Once the network is trained, it can be used to evaluate new input pairs. If the network's output for a
new input pair is greater than or equal to 0.5, then the network is confident that the correct output
for that input pair is 1. Conversely, if the network's output for a new input pair is less than 0.5, then
the network is confident that the correct output for that input pair is 0.

CODE:

AND GATE;

%Step 1: Define the dataset

% Input data (AND gate truth table)
X = [0 0; 0 1; 1 0; 1 1];

%Output data

Y = [0; 0; 0; 1];
% Step 2: Create a neural network
net = feedforwardnet(5); % Create a feedforward neural network with 5 hidden neurons
net = train(net, X', Y'); % Train the network using the dataset

%Step 3: Test the neural network

%Test input data

testX = [0 0; 0 1; 1 0; 1 1];

% Simulate the network to get output
output = net(testX');

output = round(output); % Round the output to the nearest integer (0 or 1)

%Display the results

disp('Input | Output (AND gate result)');

disp([testX output']);

% Check if the neural network has learned the AND gate correctly
if isequal(output', Y)
disp('The neural network has successfully learned the AND gate.');
else
disp('The neural network has not learned the AND gate correctly.');
end

OR GATE;

% Define the input data (X) and the corresponding target output (T)
X = [0 0; 0 1; 1 0; 1 1];

T = [0; 1; 1; 1];

%Create a feedforward neural network with one hidden layer

net = feedforwardnet(4); % You can change the number of hidden neurons as needed

% Set the training parameters
net.trainParam.epochs = 1000; % Number of training epochs
net.trainParam.goal = 1e-5; % Training goal (MSE)
net.trainParam.showWindow = 1; % Display the training progress

% Train the neural network


net = train(net, X', T');

% Test the network with OR gate inputs
outputs = net(X');

%Display the network's responses

for i = 1:length(outputs)
fprintf('Input: [%d, %d] -> Output: %f\n', X(i, 1), X(i, 2), outputs(i));
end

RESULTS:

AND GATE:

OR GATE:
EXP-10

AIM: Implementation of XOR gate using neural networks

Theory:

Implementing the XOR gate using neural networks involves creating a network architecture, training
the network with appropriate data, and interpreting the network's output.

Network Architecture:

A simple three-layer neural network can be used to implement the XOR gate. The network consists of two
input neurons, two hidden neurons, and one output neuron. A hidden layer is required because XOR is not linearly
separable, so no single neuron can represent it. The input neurons receive the binary inputs (0 or 1), and their outputs
are weighted and summed in the hidden neurons. The activation function of the hidden neurons is usually sigmoid,
which squashes the input to a value between 0 and 1. The outputs of the hidden neurons are weighted and summed
in the output neuron, and its activation function is also typically sigmoid.

Training Data

The training data consists of the four input pairs (0, 0), (0, 1), (1, 0), and (1, 1) with expected outputs 0, 1, 1,
and 0, respectively. This training data is used to adjust the weights of the connections between the neurons in the
network. The goal of training is to minimize the error between the network's output and the
expected output for each input case. This can be done using an algorithm like gradient descent,
which iteratively adjusts the weights in the direction of the steepest descent of the error function.

Training Process:

The training process involves repeatedly feeding the network with the four input cases and adjusting the
weights based on the error between the network's output and the expected output. This process is continued
until the error is sufficiently low, indicating that the network has learned to correctly classify the XOR gate.

Interpreting Output:

Once trained, the neural network can be used to evaluate new input pairs. If the network's output is closer to 1 than
to 0, then the network is confident that the correct output for that input pair is 1. Conversely, if the network's output
is closer to 0 than to 1, then the network is confident that the correct output for that input pair is 0.
Utilizing neural networks to implement logic gates like XOR demonstrates their ability to learn complex
relationships between inputs and outputs. This capability makes neural networks valuable tools for
various applications, including pattern recognition, machine learning, and artificial intelligence.

CODE:

% XOR Gate Implementation using Neural Network in MATLAB

%Define XOR input and output

input = [0 0; 0 1; 1 0; 1 1]';

output = [0 1 1 0];

% Create a feedforward neural network

net = feedforwardnet(5); % 5 neurons in the hidden layer

%Set the training parameters


net.trainParam.epochs = 10000;
net.trainParam.lr = 0.01;

%Train the neural network

net = train(net, input, output);

% Test the neural network
testInput = [0 0; 0 1; 1 0; 1 1]';
predictedOutput = round(sim(net, testInput));

%Display the results

disp('Test Input:');

disp(testInput);

disp('Predicted Output:');

disp(predictedOutput');
RESULT:
Exp-6
AIM-Fuzzy logic system for controlling the power of a heater based
on the ambient temperature
PROCEDURE-
1. Define Membership Functions:
Create triangular membership functions for both temperature and heater power, each divided into three levels: Low, Medium, and High.

2. Visualize Membership Functions:
Plot the membership functions for temperature and heater power using subplots.

3. Define Fuzzy Rule Base:
Create a rule base using three rules, each representing a combination of temperature and heater power levels.

4. Define Fuzzy Inference System:
Create a fuzzy inference system (FIS) object. Add variables for temperature (input) and heater power (output). Define membership functions for temperature and heater power within the FIS. Add the fuzzy rule base to the FIS.

5. Perform Fuzzy Inference:
Set an input temperature value (75 degrees Celsius in this case). Evaluate the fuzzy inference system using the input temperature to obtain the corresponding heater power output.

6. Display Output Power:
Print the input temperature and calculated heater power.

7. Visualize Fuzzy Inference Result:
Plot the input temperature and heater power membership functions.

8. Visualize Fuzzy Inference Surface:
Generate a 3D surface plot representing the fuzzy inference process.

CODE-
temperature = 0:0.1:100;
heater_power = 0:5:100;

% membership functions for input (temperature) and output (heater power)


temp_low = trimf(temperature, [0 0 50]);
temp_medium = trimf(temperature, [0 50 100]);

temp_high = trimf(temperature, [50 100 100]);

power_low = trimf(heater_power, [0 0 50]);


power_medium = trimf(heater_power, [0 50 100]);
power_high = trimf(heater_power, [50 100 100]);

figure;
subplot(2, 1, 1);
plot(temperature, [temp_low; temp_medium; temp_high]);
title('Temperature Membership Functions');
legend('Low', 'Medium', 'High');

subplot(2, 1, 2);
plot(heater_power, [power_low; power_medium; power_high]);
title('Heater Power Membership Functions');
legend('Low', 'Medium', 'High');

% Define the fuzzy rule base
rule1 = [1 1 1 2 2 2];

rule2 = [3 3 2 2 2 2];

rule3 = [3 3 2 1 1 1];

rule_base = [rule1; rule2; rule3];

% Define the fuzzy inference engine
fis = newfis('TemperatureControl');

fis = addvar(fis, 'input', 'Temperature', [0 100]);

fis = addmf(fis, 'input', 1, 'Low', 'trimf', [0 0 50]);

fis = addmf(fis, 'input', 1, 'Medium', 'trimf', [0 50 100]);

fis = addmf(fis, 'input', 1, 'High', 'trimf', [50 100 100]);

fis = addvar(fis, 'output', 'HeaterPower', [0 100]);


fis = addmf(fis, 'output', 1, 'Low', 'trimf', [0 0 50]);
fis = addmf(fis, 'output', 1, 'Medium', 'trimf', [0 50 100]);

fis = addmf(fis, 'output', 1, 'High', 'trimf', [50 100 100]);


fis = addrule(fis, rule_base);

% Define the input temperature and perform fuzzy inference
input_temp = 75;
output_power = evalfis(input_temp, fis);

% Display the output power


fprintf('For an input temperature of %.2f, the heater power is %.2f\n', input_temp, output_power);

% Plot the fuzzy inference result
figure;


subplot(2, 1, 1);

plotmf(fis, 'input', 1);

title('Input Temperature');

subplot(2, 1, 2);
plotmf(fis, 'output', 1);
title('Heater Power');

% Perform a surface plot of the fuzzy inference result
figure;


gensurf(fis);
title('Fuzzy Inference Surface')
OUTPUT
