[Introduction to Fuzzy/Neural Systems]
Module 4 Exp 4: On Perceptron Neural Network Learning Rules
Course Learning Outcomes:
C4. Conduct experiments using advanced tools related to fuzzy and neural
systems.
C5. Extend existing knowledge in designing a feedback fuzzy controller and
single/multilayer neural networks based on the given Industrial
requirements.
C7. Interpret and evaluate through written and oral forms.
C8. Lead multiple groups and projects with decision making responsibilities.
Topics
Exp 4: On Perceptron Neural Network Learning Rules
1. Aim/Objective:
What is the objective of this experiment?
2. Theory
Provide an explanation of the perceptron learning rule
3. Procedure:
Step-by-step procedure for the implementation
4. Program
Python code to implement Perceptron learning rule
5. Result
State your inputs and their corresponding outputs, with figures
6. Conclusion
Summarize what you have learned from the experiment
Python code to implement Perceptron Learning Rule
import numpy as np

class Perceptron(object):
    def __init__(self, no_of_inputs, threshold=100, learning_rate=0.01):
        self.threshold = threshold                  # number of training epochs
        self.learning_rate = learning_rate          # step size for each weight update
        self.weights = np.zeros(no_of_inputs + 1)   # weights[0] is the bias weight

    def predict(self, inputs):
        summation = np.dot(inputs, self.weights[1:]) + self.weights[0]
        if summation > 0:
            activation = 1
        else:
            activation = 0
        return activation

    def train(self, training_inputs, labels):
        for _ in range(self.threshold):
            for inputs, label in zip(training_inputs, labels):
                prediction = self.predict(inputs)
                self.weights[1:] += self.learning_rate * (label - prediction) * inputs
                self.weights[0] += self.learning_rate * (label - prediction)
The Code:
Line-by-line
1. import numpy as np
The only important thing to know about the numpy module here is that we’re using it to evaluate the dot product w · x during our summation.
numpy lets us create vectors, and gives us both linear algebra functions and python list-like
methods to use with it. We access its functions by calling them on np.
2. class Perceptron(object):
Here, we’re creating a new class Perceptron. This will, among other things, allow us to
maintain state in order to use our perceptron after it has learned and assigned values to
its weights.
3. def __init__(self, no_of_inputs, threshold=100, learning_rate=0.01):
In our constructor, we accept a few parameters that represent concepts covered at the end of Learning Machine Learning Journal #3.
The no_of_inputs is used to determine how many weights we need to learn.
The threshold is the number of epochs we’ll allow our learning algorithm to iterate through before ending; it defaults to 100.
The learning_rate is used to determine the magnitude of change for our weights during each step through our training data; it defaults to 0.01.
The threshold and learning_rate variables can be adjusted to alter the efficiency of our perceptron learning rule. Because of that, I’ve made them optional parameters, so that they can be experimented with at runtime.
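For instance, both optional parameters can be overridden at construction time; the values below are arbitrary, chosen only to show the signature in use:

perceptron = Perceptron(no_of_inputs=2, threshold=50, learning_rate=0.1)  # 50 epochs, larger step size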
4. self.threshold = threshold
5. self.learning_rate = learning_rate
These two lines assign the threshold and learning_rate arguments to instance variables.
6. self.weights = np.zeros(no_of_inputs + 1)
Here, we initialize our weight vector: np.zeros(n) creates a vector of n zeros, and we pass it no_of_inputs (which, again, is the number of elements in our input vector, x) plus 1.
Remember how, in Learning Machine Learning Journal #3, we moved our bias into the weight vector so that we didn’t have to deal with it independently of our other weights? This bias is the +1 in our weight vector, and it is referred to as the bias weight.
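As a quick sanity check, here is what that initialization produces for a perceptron with two inputs:

import numpy as np
weights = np.zeros(2 + 1)  # one bias weight plus one weight per input
print(weights)             # [0. 0. 0.] -> weights[0] is the bias weight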
7. def predict(self, inputs):
Now, we define our predict method.
This method will house the step-function algorithm: f(x) = 1 if w · x + b > 0, and 0 otherwise.
The predict method takes one argument, inputs, which it expects to be a numpy array/vector with dimension equal to the no_of_inputs parameter that the perceptron was initialized with in the constructor.
8. summation = np.dot(inputs, self.weights[1:]) + self.weights[0]
This is where the numpy dot product function comes in, and it works exactly how you might expect: np.dot(a, b) == a · b. It’s important to remember that dot products only work if both vectors are of equal dimension; [1, 2, 3] · [1, 2, 3, 4] is invalid. Things get a bit tricky here because we’ve added an extra dimension to our self.weights vector to act as the bias. There are two options: either we add a 1 to the beginning of our inputs vector, as we discussed in Learning Machine Learning Journal #3, or we take the dot product of the inputs and the self.weights vector with the first value “removed”, and then add the first value of the self.weights vector to the dot product. Either way works; I just happened to think this way was cleaner.
We then store the result in the variable, summation.
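Here is a small standalone sketch of the two equivalent options described above, with made-up weight values purely for illustration:

import numpy as np

weights = np.array([0.5, 1.0, -2.0])  # weights[0] is the bias weight
inputs = np.array([1.0, 1.0])

# Option 1: prepend a constant 1 to the inputs, then take a single dot product.
option1 = np.dot(np.insert(inputs, 0, 1.0), weights)

# Option 2: dot the inputs with weights[1:], then add the bias weight separately.
option2 = np.dot(inputs, weights[1:]) + weights[0]

print(option1 == option2)  # True -> both compute w · x + b = -0.5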
if summation > 0:
    activation = 1
else:
    activation = 0
return activation
This is our step function. It kind of reads like pseudocode: if the summation from above is greater than 0, we store 1 in the variable activation; otherwise, activation = 0. Then we return that value.
We don’t need the temporary variable activation, but for now, the goal is to be explicit.
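For comparison, a terser version of the same method simply returns the comparison directly; the behavior is identical:

def predict(self, inputs):
    summation = np.dot(inputs, self.weights[1:]) + self.weights[0]
    return 1 if summation > 0 else 0  # same step function, no temporary variable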
9. def train(self, training_inputs, labels):
Next, we define the train method, which takes two arguments: training_inputs and labels.
training_inputs is expected to be a list made up of numpy vectors to be used as inputs by
the predict method.
labels is expected to be a numpy array of expected output values for each of the
corresponding inputs in the training_inputs list.
In essence, the input vector at training_inputs[n] has the expected output at labels[n],
therefore len(training_inputs) == len(labels).
10. for _ in range(self.threshold):
This creates a loop wherein the following code block will run a number of times equal to the threshold argument we passed into the Perceptron constructor. If one hasn’t been passed in, it defaults to 100 epochs. Because we don’t care to use the iterator variable, convention has us name it _.
11. for inputs, label in zip(training_inputs, labels):
There are three important steps happening in this line:
We zip training_inputs and labels together to create a new iterable object.
We loop through the new object.
As we iterate, we store each element of the training_inputs list in the inputs variable, and each element of labels in the label variable.
In the code block after this line, when we reference label, we get the expected output of the
input vector stored in the inputs variable, and we do this once for every inputs/label pair.
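A tiny demonstration of that pairing, using two of the AND training examples from later in this module:

import numpy as np

training_inputs = [np.array([1, 1]), np.array([1, 0])]
labels = np.array([1, 0])

for inputs, label in zip(training_inputs, labels):
    print(inputs, '->', label)  # prints [1 1] -> 1, then [1 0] -> 0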
12. prediction = self.predict(inputs)
Here, we pass the inputs vector into our previously defined predict method, and we store the
result in the prediction variable.
13. self.weights[1:] += self.learning_rate * (label - prediction) * inputs
This is almost all of the learning rule implementation:
w ← w + α(y − f(x))x
We find the error, label − prediction, then we multiply it by our self.learning_rate and by our inputs vector; we then add that result to the weight vector (with the bias weight removed) and store it back into self.weights[1:].
Remember that self.weights[0] is our bias weight, so we can’t add the self.weights and inputs vectors directly, as they’re of different dimensions.
There were several options for taking care of this, but I think the most explicit way is to mimic what we did earlier, by only considering the vector created by “removing” the bias weight at self.weights[0].
We can’t just ignore the bias, so we deal with it next:
14. self.weights[0] += self.learning_rate * (label - prediction)
We update the bias in the same way as the other weights, except we don’t multiply it by the inputs vector.
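To make the update rule concrete, here is one hand-traceable step, with all numbers invented for illustration:

import numpy as np

learning_rate = 0.01
weights = np.zeros(3)       # [bias, w1, w2], all starting at 0
inputs = np.array([1, 1])   # the AND example (1, 1)
label, prediction = 1, 0    # expected 1, predicted 0 -> error = 1

weights[1:] += learning_rate * (label - prediction) * inputs
weights[0] += learning_rate * (label - prediction)
print(weights)              # [0.01 0.01 0.01] -> every weight nudged up by 0.01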
Let’s put it to work and finally wrap up implementing AND:
1. import numpy as np
from perceptron import Perceptron
First, we import numpy so that we can create our vectors; then we import our new Perceptron class (this assumes the class above has been saved in a file named perceptron.py).
2. training_inputs = []
training_inputs.append(np.array([1, 1]))
training_inputs.append(np.array([1, 0]))
training_inputs.append(np.array([0, 1]))
training_inputs.append(np.array([0, 0]))
Next, we generate our training data. These inputs are the A and B columns from the AND truth table, stored in a list of numpy arrays called training_inputs.
3. labels = np.array([1, 0, 0, 0])
Here, we store the expected outputs, or labels, in the labels variable, making sure that each label’s index lines up with the index of the input it’s meant to represent.
4. perceptron = Perceptron(2)
We instantiate a new perceptron, passing in only the argument 2 (the number of inputs), thereby accepting the defaults threshold=100 and learning_rate=0.01. Note that such a large threshold and such a small learning rate probably aren’t needed, so feel free to play around to find what’s most efficient! What happens if learning_rate=10? What if threshold=2?
5. perceptron.train(training_inputs, labels)
Now we train the perceptron by calling perceptron.train and passing in
our training_inputs and labels.
This should finish rather quickly: even though there are 100 epochs, our training data is small, and numpy is very efficient.
6. inputs = np.array([1, 1])
perceptron.predict(inputs)
Finally, we create a test input vector and ask our trained perceptron to classify it. For [1, 1], predict should return 1, since 1 AND 1 = 1.
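Putting the whole session together (again assuming the Perceptron class has been saved as perceptron.py), the complete script looks like this:

import numpy as np
from perceptron import Perceptron

# AND truth table inputs and their expected outputs.
training_inputs = [np.array([1, 1]), np.array([1, 0]),
                   np.array([0, 1]), np.array([0, 0])]
labels = np.array([1, 0, 0, 0])

perceptron = Perceptron(2)  # defaults: threshold=100, learning_rate=0.01
perceptron.train(training_inputs, labels)

print(perceptron.predict(np.array([1, 1])))  # 1, since 1 AND 1 = 1
print(perceptron.predict(np.array([0, 1])))  # 0, since 0 AND 1 = 0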
Exercise 1.
Lab Activity: Simulation
Design and develop the neural network system for the following experiment
Experiment 1: Perceptron learning
1. Design and train a neural network system which can perform the AND and OR
operations (a starting sketch for OR is given after this list).
2. Tune the neural network model and minimize the error by updating the
weights and perform the testing.
3. Run the simulation in groups and explain the working principles of the
algorithm.
4. Interpret the output of the designed neural network system by varying the
inputs.
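As a starting point for the OR part of the experiment, only the labels need to change; a minimal sketch, assuming the same Perceptron class from this module:

import numpy as np
from perceptron import Perceptron

training_inputs = [np.array([1, 1]), np.array([1, 0]),
                   np.array([0, 1]), np.array([0, 0])]
or_labels = np.array([1, 1, 1, 0])  # OR truth table outputs

or_perceptron = Perceptron(2)
or_perceptron.train(training_inputs, or_labels)
print(or_perceptron.predict(np.array([0, 1])))  # expected output: 1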
References and Supplementary Materials
Books and Journals
1. Van Rossum, G. (2007). Python programming language. In USENIX Annual Technical
Conference (Vol. 41, p. 36).
2. SN, S. (2003). Introduction to artificial neural networks.
3. Rashid, T. (2016). Make your own neural network. CreateSpace Independent Publishing
Platform.
Online Supplementary Reading Materials
1. Chaitanya Singh. (n.d.). How to Install Python. Retrieved 14 May 2020, from
https://beginnersbook.com/2018/01/python-installation/