Keshav_SC
ETCS-456
PRACTICAL RECORD
Branch : CSE
PRACTICAL DETAILS
VISION
To nurture young minds in a learning environment of high academic value and imbibe spiritual and
ethical values with technological and management competence.
MISSION
The Institute shall endeavour to incorporate the following basic missions in the teaching methodology:
❖ Engineering Hardware – Software Symbiosis: Practical exercises in all Engineering and
Management disciplines shall be carried out by Hardware equipment as well as the related
software enabling a deeper understanding of basic concepts and encouraging inquisitive nature.
❖ Life-Long Learning: The Institute strives to match technological advancements and encourages
students to keep updating their knowledge, enhancing their skills and inculcating the habit of
continuous learning.
❖ Liberalization and Globalization: The Institute endeavors to enhance technical and
management skills of students so that they are intellectually capable and competent
professionals with Industrial Aptitude to face the challenges of globalization.
❖ Diversification: The Engineering, Technology and Management disciplines have diverse fields
of studies with different attributes. The aim is to create a synergy of the above attributes by
encouraging analytical thinking.
❖ Digitization of Learning Processes: The Institute provides seamless opportunities for innovative
learning in all Engineering and Management disciplines through digitization of learning processes
using analysis, synthesis, simulation, graphics, tutorials and related tools to create a platform for
multi-disciplinary approach.
❖ Entrepreneurship: The Institute strives to develop potential Engineers and Managers by
enhancing their skills and research capabilities so that they emerge as successful
entrepreneurs and responsible citizens.
MAHARAJA AGRASEN INSTITUTE OF TECHNOLOGY
EXPERIMENT-1
Objectives:
• Understanding of the basic mathematical elements of fuzzy sets.
• To analyze the concepts of fuzzy sets.
• To use fuzzy set operations to implement current computing techniques used in
fuzzy computing.
Theory:
Fuzzy Logic:
Fuzzy logic is an organized method for dealing with imprecise data. It is a multivalued logic that allows
intermediate values to be defined between conventional evaluations such as true/false or high/low.
In classical set theory, the membership of elements in a set is assessed in binary terms according to a
bivalent condition — an element either belongs or does not belong to the set. By contrast, fuzzy set
theory permits the gradual assessment of the membership of elements in a set; this is described with the
aid of a membership function valued in the real unit interval [0, 1].
Bivalent set theory can be somewhat limiting if we wish to describe a 'humanistic' situation
mathematically. For example, Fig. 1 below uses bivalent sets to characterize the temperature of a
room. The most obvious limiting feature of bivalent sets, clear from the diagram, is that they are
mutually exclusive: it is not possible to have membership of more than one set. Clearly, it is not
accurate to define the transition from a quantity such as 'warm' to 'hot' by the application of one degree
Fahrenheit of heat. In the real world a smooth (unnoticeable) drift from warm to hot occurs.
Fuzzy Sets:
Let A be a fuzzy set defined on a universe X, with membership function μA(x) ∈ [0, 1]. Then, for an element x ∈ X:
• x is called not included in the fuzzy set A if μA(x) = 0.
• x is called fully included if μA(x) = 1.
• x is called a fuzzy member if 0 < μA(x) < 1.
1. Union: μA∪B(x) = max[μA(x), μB(x)]
2. Intersection: μA∩B(x) = min[μA(x), μB(x)]
3. Complement: μA′(x) = 1 − μA(x)
4. Algebraic Sum: the algebraic sum of two fuzzy sets A and B is denoted by A + B and is defined as
μA+B(x) = μA(x) + μB(x) − μA(x)·μB(x)
5. Algebraic Product: the algebraic product of A and B is denoted by A·B and is defined as
μA·B(x) = μA(x)·μB(x)
Source Code:
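A minimal Python sketch of the five operations defined above, applied pointwise to membership-value lists (the values are illustrative):

```python
# Fuzzy set operations on membership-value lists (illustrative values).
A = [0.2, 0.5, 0.8, 1.0]
B = [0.6, 0.3, 0.9, 0.0]

union        = [max(a, b) for a, b in zip(A, B)]           # mu_AUB = max(mu_A, mu_B)
intersection = [min(a, b) for a, b in zip(A, B)]           # mu_AnB = min(mu_A, mu_B)
complement_A = [1 - a for a in A]                          # mu_A' = 1 - mu_A
algebraic_sum     = [a + b - a * b for a, b in zip(A, B)]  # mu_A + mu_B - mu_A*mu_B
algebraic_product = [a * b for a, b in zip(A, B)]          # mu_A * mu_B

print(union)          # [0.6, 0.5, 0.9, 1.0]
print(intersection)   # [0.2, 0.3, 0.8, 0.0]
```

Each operation is applied element by element, so the lists A and B must describe membership grades over the same universe of discourse.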
Result/Conclusion
The concepts of union, intersection and complement are implemented using fuzzy sets which helped
to understand the differences and similarities between fuzzy set and classical set theories. It provides
the basic mathematical foundations to use the fuzzy set operations.
EXPERIMENT-2
Objectives:
• Understanding of the basic mathematical elements of the theory of fuzzy sets.
• To introduce the ideas of fuzzy sets, fuzzy logic.
• To implement applications using the fuzzy set operations.
Theory:
Definition:
Composition of Fuzzy Relations
• Consider two fuzzy relations, R (X × Y) and S (Y × Z); then a relation T (X × Z) can be
expressed as the max-min composition
T = R ∘ S
μT(x, z) = max-min [μR(x, y), μS(y, z)] = ⋁y [μR(x, y) ∧ μS(y, z)]
For the max-product composition, similarly T = R ∘ S with
μT(x, z) = max over y of [μR(x, y) · μS(y, z)]
• The max-min composition can be interpreted as indicating the strength of the existence of
relation between the elements of X and Z.
Fuzzy relation:
A fuzzy relation R on X × Y assigns to every ordered pair (x, y) a membership grade μR(x, y) ∈ [0, 1]; it is conveniently written as a matrix of membership values.
Max-Min Composition:
Let X, Y and Z be universal sets and let R ⊆ X × Y and S ⊆ Y × Z be relations that relate them.
Source code:
clear;
clc;
R=input("enter the first relation ");
disp("R=",R);
S=input("enter the second relation ");
disp("S=",S);
[m,n]=size(R);
[a,b]=size(S);
// max-product composition: t(i,j) = max over k of R(i,k)*S(k,j)
if(n==a)
    for i=1:m
        for j=1:b
            c=R(i,:);          // i-th row of R
            d=S(:,j);          // j-th column of S
            [f,g]=size(c);
            for l=1:g
                e(1,l)=c(1,l)*d(l,1);
            end
            t(i,j)=max(e);
        end
    end
    disp("the final max-product is ")
    disp("t=",t);
else
    disp("cannot find max-product");
end
// max-min composition: h(i,j) = max over k of min(R(i,k), S(k,j))
if(n==a)
    for i=1:m
        for j=1:b
            c=R(i,:);
            d=S(:,j);
            f=mtlb_t(d);       // transpose the column into a row
            e=min(c,f);        // element-wise minimum
            h(i,j)=max(e);
        end
    end
    disp("the final max-min output is ")
    disp("h=",h);
else
    disp("cannot find max-min");
end
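For comparison, both compositions in the theory above can be sketched in a few lines of Python (numpy assumed; the matrices R and S are illustrative):

```python
import numpy as np

R = np.array([[0.6, 0.3],
              [0.2, 0.9]])          # fuzzy relation on X x Y
S = np.array([[1.0, 0.5, 0.3],
              [0.8, 0.4, 0.7]])     # fuzzy relation on Y x Z

# max-min composition: mu_T(x,z) = max over y of min(mu_R(x,y), mu_S(y,z))
T_maxmin = np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

# max-product composition: mu_T(x,z) = max over y of mu_R(x,y) * mu_S(y,z)
T_maxprod = np.max(R[:, :, None] * S[None, :, :], axis=1)

print(T_maxmin)    # [[0.6 0.5 0.3], [0.8 0.4 0.7]]
print(T_maxprod)   # [[0.6 0.3 0.21], [0.72 0.36 0.63]]
```

The broadcasting trick stacks every pairing of a row of R with a column of S, so the `max` over the middle axis performs the composition in one step.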
Output:
Result/Conclusion: With the use of fuzzy logic principles, the max-min composition of fuzzy relations
is calculated, which describes the relationship between two or more fuzzy sets.
EXPERIMENT-3
Objectives:
• To cover fuzzy logic inference with emphasis on its use in the design of
intelligent or humanistic systems.
• Prepare the students for developing intelligent systems.
Theory:
Control System:
Any system whose outputs are controlled by some inputs to the system is called control
system.
Fuzzy Controller:
Fuzzy controllers are among the most important applications of fuzzy theory. They work
differently from conventional controllers:
expert knowledge is used instead of differential equations to describe the system.
This expert knowledge can be expressed in a very natural way using linguistic
variables, which are described by fuzzy sets.
The fuzzy controllers are constructed in following three stages:
1. Create the membership values (fuzzify).
2. Specify the rule table.
3. Determine your procedure for defuzzifying the result.
To design a system using fuzzy logic, identifying the inputs and outputs is a necessary first step.
The main function of a washing machine is to clean clothes without damaging them. To
achieve this, the output parameters of the fuzzy logic, which are the washing
parameters, must be given more importance. The identified input and output parameters
are:
Input:
1. Degree of dirt
2. Type of dirt
Output:
Wash time
Fuzzy sets:
The fuzzy sets which characterize the inputs & output are given as follows:
1. Dirtiness of clothes
2. Type of dirt
3. Wash time
Procedure:
Step 1: Fuzzification of inputs
For the fuzzification of the inputs, that is, to compute the membership for the
antecedents, each membership value is obtained by linear interpolation between the
breakpoints of the set, scaled by the maximum value MAX of the membership function:

degree of membership = MAX × (x − Point 1) / (Point 2 − Point 1)

The rule table maps the type of dirt (NG = not greasy, M = medium, G = greasy) and the
degree of dirt (S = small, M = medium, L = large) to a wash time (VS = very short,
S = short, M = medium, L = long, VL = very long):

      S    M    L
NG    VS   S    M
M     M    M    L
G     L    L    VL
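The three controller stages (fuzzify, apply the rule table, defuzzify) can be sketched in Python; the membership breakpoints, the 0-100 input scales and the wash-time values in minutes are all assumptions for illustration, not values from the text:

```python
# Sketch of the washing-machine fuzzy controller (illustrative parameters).

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# input fuzzy sets on an assumed 0-100 scale
DIRT = {'S': (-50, 0, 50), 'M': (0, 50, 100), 'L': (50, 100, 150)}
TYPE = {'NG': (-50, 0, 50), 'M': (0, 50, 100), 'G': (50, 100, 150)}

# rule table: (type of dirt, degree of dirt) -> wash-time label
RULES = {('NG', 'S'): 'VS', ('NG', 'M'): 'S', ('NG', 'L'): 'M',
         ('M', 'S'): 'M',  ('M', 'M'): 'M', ('M', 'L'): 'L',
         ('G', 'S'): 'L',  ('G', 'M'): 'L', ('G', 'L'): 'VL'}
# wash-time singletons in minutes (assumed)
TIME = {'VS': 5, 'S': 10, 'M': 20, 'L': 40, 'VL': 60}

def wash_time(dirt, kind):
    num = den = 0.0
    for (t_lab, d_lab), out in RULES.items():
        w = min(tri(kind, *TYPE[t_lab]), tri(dirt, *DIRT[d_lab]))  # AND = min
        num += w * TIME[out]     # weighted-average defuzzification
        den += w
    return num / den if den else 0.0

print(wash_time(60, 70))
```

For moderately dirty, fairly greasy laundry (dirt = 60, type = 70) the sketch blends the M and G rows of the table and returns a wash time of roughly 34 minutes.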
Execution:
EXPERIMENT-4
Objectives:
Theory:
The neural network was inspired by the design and functioning of the human brain
and its components.
Definition:
"An information processing model that is inspired by the way biological nervous
systems (i.e. the brain) process information is called a Neural Network."
A neural network has the ability to learn by example. It is not designed to perform a
fixed/specific task, but rather tasks which need thinking (e.g. predictions).
An ANN is composed of a large number of highly interconnected processing elements (neurons)
working in unison to solve problems. It mimics the human brain. It is configured for special
applications, such as pattern recognition and data classification, through a learning process.
Such networks are commonly quoted as being about 85-90% accurate.
Y = output neuron
Yin = x1·w1 + x2·w2
The output is:
y = f(Yin)
i.e. the output is a function of the net input.
The early model of an artificial neuron was introduced by Warren McCulloch and Walter
Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate. It
is a neuron with a set of inputs I1, I2, I3, …, Im and one output y. The linear threshold gate
simply classifies the set of inputs into two different classes; thus the output y is binary.
Such a function can be described mathematically using these equations:
W1, W2, …, Wm are weight values, normalized in
the range of either (0, 1) or (−1, 1) and associated
with each input line; Sum is the weighted sum;
and T is a threshold constant. The function f is
a linear step function at threshold T, as shown
in the figure.
For inhibition to be absolute, the threshold of the activation function should satisfy the
following condition:
θ > n·w − p
The output will fire if it receives k or more excitatory inputs but no inhibitory inputs, where
k·w ≥ θ > (k − 1)·w
- The M-P neuron has no particular training algorithm.
- An analysis is performed to determine the weights and the threshold.
- It is used as a building block where any function or phenomenon is modelled based on
a logic function.
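The firing rule can be sketched directly; the AND weights and threshold chosen here are one conventional assignment, found by the kind of analysis mentioned above:

```python
def mp_neuron(inputs, weights, theta):
    """McCulloch-Pitts unit: fire (1) iff the weighted sum reaches theta."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= theta else 0

# two-input AND with w1 = w2 = 1 and theta = 2
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), (1, 1), 2))
# prints the AND truth table: only the input (1, 1) fires
```

Changing only theta to 1 turns the same unit into OR, which is why the analysis of weights and threshold replaces a training algorithm for M-P neurons.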
Illustration Statement: Implement XOR function using MP model
Truth table for XOR function is:
X1 X2 Y
0 0 0
0 1 1
1 0 1
1 1 0
Yin=x1w1+x2w2
XOR is not linearly separable, so it is built from two intermediate M-P neurons: Z1 = x1 AND NOT x2 and Z2 = x2 AND NOT x1, with Y = Z1 OR Z2.
X1 X2 Z1
0 0 0
0 1 0
1 0 1
1 1 0
For Z1,
Θ=1
X1 X2 Z2
0 0 0
0 1 1
1 0 0
1 1 0
For Z2,
Θ=1
Y=Z1+Z2
Z1 Z2 Y
0 0 0
0 1 1
1 0 1
1 1 0
For Y,
Θ=1
Source Code :
clear;
clc;
//Getting weights and threshold value
disp('Enter weights ');
w1=input('weight w1=');
w2=input('weight w2=');
disp('Enter threshold value');
theta=input('theta=');
y=[0 0 0 0]; //computed output
x1=[0 0 1 1]; //first input
x2=[0 1 0 1]; //second input
z=[0 0 1 0]; //target: x1 AND NOT x2 (the Z1 sub-function of XOR)
con=1;
while con
zin=x1*w1+x2*w2;
for i=1:4
if zin(i)>=theta
y(i)=1;
else
y(i)=0;
end
end
disp('Output of Net');
disp(y);
if y==z
con=0;
else
disp('Net is not learning enter another set of weights and threshold value');
w1=input('weight w1=');
w2=input('weight w2=');
theta=input('theta=');
end
end
disp('McCulloch-Pitts net for the ANDNOT (Z1) sub-function of XOR');
disp('Weights of Neuron');
disp(w1);
disp(w2);
disp('threshold value');
disp(theta);
OUTPUT/ Screenshots
Result/Conclusion:
The McCulloch-Pitts model is implemented for the XOR function by using the threshold activation
function. The activation of M-P neurons is binary, i.e. at any time step the neuron may fire or may
not fire. The threshold plays the major role here.
EXPERIMENT-5
Objectives:
• To become familiar with neural network learning algorithms from available examples.
• To provide knowledge of learning algorithms in neural networks.
Theory:
Neural networks are a branch of "Artificial Intelligence". An Artificial Neural Network is a
system loosely modelled on the human brain. Neural networks are a powerful technique
to solve many real-world problems. They have the ability to learn from experience in order
to improve their performance and to adapt themselves to changes in the environment. In
addition, they are able to deal with incomplete information or noisy data and can be very
effective especially in situations where it is not possible to define the rules or steps that lead to
the solution of a problem. In a nutshell, a neural network can be considered as a black box
that is able to predict an output pattern when it recognizes a given input pattern. Once trained,
the neural network is able to recognize similarities when presented with a new input pattern,
resulting in a predicted output pattern.
Algorithm:
The algorithm is as follows:
1. Initialize the weights and threshold to small random numbers.
2. Present a vector x to the neuron inputs and calculate the output.
3. Update the weights according to:
wj(t+1) = wj(t) + η(d − y)xj
where
• d is the desired output,
• t is the iteration number, and
• η (eta) is the gain or step size, where 0.0 < η < 1.0.
4. Repeat steps 2 and 3 until:
1. the iteration error is less than a user-specified error threshold or
2. a predetermined number of iterations have been completed.
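The steps above can be sketched directly in Python; the training set (two-input patterns with target NOT x1) and the gain η = 0.5 are illustrative choices:

```python
# Perceptron learning rule: w <- w + eta*(d - y)*x, bias updated likewise.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
d = [1, 1, 0, 0]                        # target: NOT x1
w = [0.0, 0.0]                          # initial weights
b = 0.0                                 # initial bias
eta = 0.5                               # gain (step size), 0 < eta < 1

for epoch in range(20):                 # bounded number of iterations
    errors = 0
    for x, target in zip(X, d):
        y = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
        if y != target:
            errors += 1
            w = [wi + eta * (target - y) * xi for wi, xi in zip(w, x)]
            b += eta * (target - y)
    if errors == 0:                     # stop once every pattern is correct
        break

print(w, b)   # -> [-0.5, 0.0] 0.0
```

Because NOT x1 is linearly separable, the loop converges after a few epochs, ending with a negative weight on x1 as expected.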
Source Code:
clear;
clc;
x=[0 0 1 1; 0 1 0 1];   // input patterns, one per column
d=[1 1 0 0];            // target output (here: NOT x1)
w=[0.3 -0.5];           // initial weights, one per input
bias=0.2;               // initial bias
eta=1.2;                // learning rate
theta=0.3;              // step-function threshold
z=[0 0 0 0];            // computed outputs
for epoch=1:20
    for j=1:4
        sigma=bias + w*x(:,j);      // net input for pattern j
        if sigma>theta
            z(j)=1;
        else
            z(j)=0;
        end
        // perceptron rule: w <- w + eta*(d - z)*x
        w=w + eta*(d(j)-z(j))*x(:,j)';
        bias=bias + eta*(d(j)-z(j));
    end
end
disp('Final output of the computed net value');
disp(z);
disp('final updated weight');
disp(w);
disp('final bias');
disp(bias);
Output/ Screenshots:
Result/Conclusion:
The single-layer perceptron learning algorithm is implemented for a two-input Boolean
function. It trains the network over repeated iterations. The neural network mimics the human
brain, and the perceptron learning algorithm trains the network according to the inputs given.
EXPERIMENT-6
Objectives:
• To become familiar with neural network learning algorithms from available examples.
• To give design methodologies for artificial neural networks.
• To provide knowledge of unsupervised learning in neural networks.
Theory:
Unsupervised Learning Algorithm:
These types of models are not provided with the correct results during training. They can be
used to cluster the input data into classes on the basis of their statistical properties only.
Labelling can be carried out even if the labels are only available for a small number of objects
representative of the desired classes. All similar input patterns are grouped together as clusters.
If a matching pattern is not found, a new cluster is formed. In contrast to supervised learning,
unsupervised or self-organized learning does not require an external teacher. During the
training session, the neural network receives a number of different patterns and learns how to
classify input data into appropriate categories. Unsupervised learning tends to follow the
neuro-biological organization of the brain. It aims to learn rapidly and can be used in real time.
Hebbian Learning:
Hebbian learning is inspired by the biological mechanism of neural weight adjustment.
It describes how a neuron that is initially unable to learn can be enabled to
develop cognition in response to external stimuli. These concepts are still the basis
for neural learning today.
Algorithm:
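The plain Hebbian update Δwi = η·xi·y for a single unit can be sketched as follows; the bipolar AND training pairs are a common textbook choice and the values here are illustrative:

```python
# Hebbian learning: a weight grows when its input and the output are active together.
eta = 1.0                               # learning rate
w = [0.0, 0.0]                          # initial weights
b = 0.0                                 # initial bias
# bipolar training pairs (inputs, target) for the AND function
samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]

for x, y in samples:
    w = [wi + eta * xi * y for wi, xi in zip(w, x)]   # delta w_i = eta * x_i * y
    b += eta * y                                      # bias sees a constant input 1

print(w, b)   # -> [2.0, 2.0] -2.0
```

After one pass over the four pairs the weights settle at [2, 2] with bias −2, the classic Hebb net for bipolar AND; note that no error signal or supervisor is involved, only input-output coincidence.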
Result/Conclusion:
The unsupervised Hebbian learning algorithm, which does not require a supervisor, is
implemented. It updates the weights according to the patterns presented and thereby trains the network.
EXPERIMENT-7
Aim: Implementation of a Genetic Algorithm Application – Match Word Finding.
Objectives:
• To become familiar with the mathematical foundations of genetic algorithms and their operators.
• To study the Applications of Genetic Algorithms.
Theory:
Genetic algorithm:
• A genetic algorithm is a search technique used in computing to find true or
approximate solutions to optimization and search problems.
• Genetic algorithms are inspired by Darwin's theory of evolution: a solution to
a problem solved by genetic algorithms is evolved.
• The algorithm starts with a set of solutions (represented by chromosomes) called a population.
Solutions from one population are taken and used to form a new population. This is motivated
by the hope that the new population will be better than the old one. Solutions which are selected
to form new solutions (offspring) are selected according to their fitness: the more suitable they
are, the more chances they have to reproduce.
• This is repeated until some condition (for example, a number of generations or
improvement of the best solution) is satisfied.
Algorithm:
import random

POPULATION_SIZE = 100                      # assumed population size
# Valid genes
GENES = '''abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP
QRSTUVWXYZ 1234567890, .-;:_!"#%&/()=?@${[]}'''
TARGET = "Soft Computing"                  # example target string

class Individual(object):
    '''
    Class representing an individual in the population
    '''
    def __init__(self, chromosome):
        self.chromosome = chromosome
        self.fitness = self.cal_fitness()

    @classmethod
    def mutated_genes(cls):
        '''
        create a random gene for mutation
        '''
        return random.choice(GENES)

    @classmethod
    def create_gnome(cls):
        '''
        create a chromosome: a string of random genes
        '''
        return [cls.mutated_genes() for _ in range(len(TARGET))]

    def mate(self, par2):
        '''
        Perform mating and produce a new offspring
        '''
        child_chromosome = []
        for gp1, gp2 in zip(self.chromosome, par2.chromosome):
            prob = random.random()         # random probability
            if prob < 0.45:                # take the gene from parent 1
                child_chromosome.append(gp1)
            elif prob < 0.90:              # take the gene from parent 2
                child_chromosome.append(gp2)
            else:                          # otherwise mutate
                child_chromosome.append(self.mutated_genes())
        return Individual(child_chromosome)

    def cal_fitness(self):
        '''
        Calculate the fitness score: the number of
        characters in the string which differ from the target
        string.
        '''
        return sum(1 for gs, gt in zip(self.chromosome, TARGET) if gs != gt)

# Driver code
def main():
    generation = 1                         # current generation
    population = [Individual(Individual.create_gnome())
                  for _ in range(POPULATION_SIZE)]
    while True:
        # sort the population in increasing order of fitness
        population = sorted(population, key=lambda ind: ind.fitness)
        if population[0].fitness <= 0:     # target matched
            break
        # elitism: carry the fittest 10% into the next generation
        new_generation = population[:POPULATION_SIZE // 10]
        # mate individuals from the fittest 50% to produce the rest
        for _ in range(POPULATION_SIZE - len(new_generation)):
            parent1 = random.choice(population[:POPULATION_SIZE // 2])
            parent2 = random.choice(population[:POPULATION_SIZE // 2])
            new_generation.append(parent1.mate(parent2))
        population = new_generation
        generation += 1
    print("Generation: {}\tString: {}".format(
        generation, "".join(population[0].chromosome)))

if __name__ == '__main__':
    main()
Result/Conclusion:
The match-word-finding program is implemented using a genetic algorithm that exercises
all of the main genetic operators: selection, crossover and mutation, together with a
fitness function.
EXPERIMENT-8
Aim: Study of ANFIS Architecture.
Objectives:
• Study of hybrid systems.
• To prepare the students for developing intelligent systems.
Theory:
The adaptive network-based fuzzy inference system (ANFIS) is used to solve problems
related to parameter identification. This parameter identification is done through a hybrid
learning rule combining back-propagation gradient descent and a least-squares method.
Let the membership functions of fuzzy sets Ai, Bi, i = 1, 2, be μAi(x) and μBi(y).
In evaluating the rules, choose the product for the T-norm (logical AND).
1. Evaluating the rule premises results in the firing strengths wi = μAi(x)·μBi(y), i = 1, 2.
Layer 1 (L1): Each node generates the membership grades of its inputs. Parameters
in this layer are called premise parameters.
Layer 2 (L2): Each node calculates the firing strength of each rule using the min
or prod operator. In general, any other fuzzy AND operation can be used.
Layer 3 (L3): The nodes calculate the ratio of each rule's firing strength to the sum
of all the rules' firing strengths. The result is a normalised firing strength.
Layer 4 (L4): The nodes compute a parameter function on the layer 3 output.
Parameters in this layer are called consequent parameters.
Layer 5 (L5): Normally a single node that aggregates the overall output as
the summation of all incoming signals.
Algorithm:
When the premise parameters are fixed, the overall output is a linear combination of the
consequent parameters: the output f can be written as a weighted sum of the rule consequents,
which is linear in the consequent parameters cij (i = 1, 2; j = 0, 1, 2). A hybrid algorithm adjusts
the consequent parameters cij in a forward pass and the premise parameters {ai, bi, ci} in a
backward pass (Jang et al., 1997). In the forward pass, the network inputs propagate forward to
layer 4, where the consequent parameters are identified by the least-squares method. In the
backward pass, the error signals propagate backwards and the premise parameters are updated by
gradient descent.
Because the update rules for the premise and consequent parameters are decoupled in the
hybrid learning rule, a computational speedup may be possible by using variants of the
gradient method or other optimisation techniques on the premise parameters.
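A compact numerical sketch of the forward-pass idea with numpy (the Gaussian membership functions, the two-rule structure and the toy data are all assumptions for illustration): with the premise parameters fixed, the normalised firing strengths make the output linear in the consequents, which can therefore be fitted by least squares.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function; c, s play the role of fixed premise parameters."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# two rules over a single input x, first-order Sugeno consequents: f_i = c_i0 + c_i1*x
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = np.where(x < 5, 2 + 0.5 * x, 10 - 0.3 * x)     # toy target function

w1, w2 = gauss(x, 2.0, 2.0), gauss(x, 8.0, 2.0)    # L2: firing strengths
wb1, wb2 = w1 / (w1 + w2), w2 / (w1 + w2)          # L3: normalised strengths

# L4/L5: output = wb1*(c10 + c11*x) + wb2*(c20 + c21*x), linear in the c's
A = np.column_stack([wb1, wb1 * x, wb2, wb2 * x])
c, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares consequents
print(np.round(c, 2))
```

This is exactly the forward half of the hybrid rule: one linear solve identifies the consequents, after which a backward gradient step would adjust the Gaussian centres and widths.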
Result/Conclusion:
This study experiment describes the architecture of neuro-fuzzy systems. A fuzzy rule-based
system of this kind uses a Sugeno-type fuzzy model that has neural learning
capabilities.
EXPERIMENT-9
Objectives: From this experiment, the student will be able to study hybrid systems.
• To be aware of the use of neuro-fuzzy inference systems in the design of intelligent
or humanistic systems.
• To become knowledgeable about neuro fuzzy inference systems.
• An ability to apply knowledge of computing and use of current computing
techniques appropriate to the discipline.
Theory:
In the field of artificial intelligence, a genetic algorithm (GA) is a search heuristic that
mimics the process of natural selection. This heuristic (also sometimes called a
metaheuristic) is routinely used to generate useful solutions to optimization and search
problems.[1] Genetic algorithms belong to the larger class of evolutionary algorithms
(EA), which generate solutions to optimization problems using techniques inspired by
natural evolution, such as inheritance, mutation, selection and crossover.
Optimization problems
The evolution usually starts from a population of randomly generated individuals, and is
an iterative process, with the population in each iteration called a generation. In each
generation, the fitness of every individual in the population is evaluated; the fitness is
usually the value of the objective function in the optimization problem being solved. The
more fit individuals are stochastically selected from the current population, and each
individual's genome is modified (recombined and possibly randomly mutated) to form a
new generation. The new generation of candidate solutions is then used in the next iteration
of the algorithm. Commonly, the algorithm terminates when either a maximum number of
generations has been produced, or a satisfactory fitness level has been reached for the
population.
Once the genetic representation and the fitness function are defined, a GA proceeds to
initialize a population of solutions and then to improve it through repetitive application of
the mutation, crossover, inversion and selection operators.
Initialization
The population size depends on the nature of the problem, but typically contains several
hundred or thousands of possible solutions. Often, the initial population is generated
randomly, allowing the entire range of possible solutions (the search space). Occasionally,
the solutions may be "seeded" in areas where optimal solutions are likely to be found.
Selection
The fitness function is defined over the genetic representation and measures the quality of
the represented solution. The fitness function is always problem-dependent. For instance,
in the knapsack problem one wants to maximize the total value of objects that can be put
in a knapsack of some fixed capacity. A representation of a solution might be an array of
bits, where each bit represents a different object, and the value of the bit (0 or
1) represents whether or not the object is in the knapsack. Not every such representation is
valid, as the size of the objects may exceed the capacity of the knapsack. The fitness of the
solution is the sum of the values of all objects in the knapsack if the representation is valid, or
0 otherwise.
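The knapsack fitness just described can be written down directly (the object values, weights and capacity here are illustrative):

```python
# Bit-string knapsack fitness: total value if the selection fits, else 0.
values   = [60, 100, 120]     # value of each object
weights  = [10, 20, 30]       # size/weight of each object
capacity = 50                 # knapsack capacity

def fitness(bits):
    total_w = sum(w for w, b in zip(weights, bits) if b)
    total_v = sum(v for v, b in zip(values, bits) if b)
    return total_v if total_w <= capacity else 0   # invalid selection -> fitness 0

print(fitness([0, 1, 1]))   # 20 + 30 <= 50, so value 100 + 120 = 220
print(fitness([1, 1, 1]))   # 10 + 20 + 30 = 60 > 50, invalid -> 0
```

Returning 0 for overweight selections is the simplest way to penalise invalid representations; a GA would then evolve bit strings toward feasible, high-value selections.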
In some problems, it is hard or even impossible to define the fitness expression; in these
cases, a simulation may be used to determine the fitness function value of a phenotype (e.g.
computational fluid dynamics is used to determine the air resistance of a vehicle
whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.
Terminology:
Result/Conclusion:
This study experiment describes the various techniques used for derivative-free
optimization. It also describes how to use optimization techniques in the soft
computing domain.
EXPERIMENT-10
Aim: Study of a research paper on Soft Computing and preparation of a review report
consisting of abstract, introduction, state of the art, methodology, results, conclusion and references.
Objectives:
• To create awareness among the students of the recent trends in soft computing.
Theory:
Students can find research papers on artificial neural networks, hybrid systems,
genetic algorithms, fuzzy systems, fuzzy logic, fuzzy inference systems, etc.
Students need to search for recent papers on any of the above-mentioned topics,
study them and prepare a presentation on the same.
Result/Conclusion:
Through this experiment, we have understood the recent advancements and
applications of various subdomains of soft computing.
References:
1. ieeexplore.ieee.org
2. www.sciencedirect.com
3. Any open access journal