Lec 1: Introduction to Neural Networks


INTRODUCTION TO NEURAL NETWORKS
AGENDA

• Biological Background
• Artificial Neuron
• Classes of Neural Networks
  1. Perceptrons
  2. Multi-Layered Feed-Forward Networks
  3. Recurrent Networks
• Modelling the Neuron
• Activation Functions
BIOLOGICAL BACKGROUND

• A neuron consists of:
  • Cell body
  • Dendrites
  • Axon
  • Synapses

• Neural activation:
  • Signals travel through the dendrites and axon
  • Synapses have different strengths
ANN INGREDIENTS
SIMULATION ON ANN
ARTIFICIAL NEURON EXAMPLE

Input links (dendrites) → Unit (cell body) → Output links (axon)

Each input link j carries activation aj with weight Wji.
The unit computes ini = Σj aj Wji and outputs ai = g(ini).
CLASS I: PERCEPTRON

A single unit with inputs a1, a2 (weights W1, W2) and a fixed
bias input of -1 (weight W0):

  in = Σj Wj aj
  a = g(in) = g(-W0 + W1a1 + W2a2)

  g(in) = { 0, in < 0
          { 1, in > 0
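The unit above can be sketched in a few lines of Python. The default weight values are illustrative assumptions, not from the slides; with W0 = 1.5 and W1 = W2 = 1 the unit happens to compute AND:

```python
# Sketch of a single perceptron unit: a fixed bias input of -1 with
# weight W0, plus two weighted inputs a1 and a2.
# Default weights are assumed values that implement AND.
def perceptron(a1, a2, W0=1.5, W1=1.0, W2=1.0):
    in_ = -W0 + W1 * a1 + W2 * a2   # in = sum_j Wj * aj (bias input = -1)
    return 1 if in_ > 0 else 0      # g(in) = 1 if in > 0, else 0
```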
CLASS II: MULTI-LAYER FEED-FORWARD NETWORKS

Layers: Input → Hidden → Output

• Multiple layers:
  • one or more hidden layer(s)
• Feed-forward:
  • output links connect only to units in the next layer
• Complex non-linear functions can be represented
MULTI LAYER NN EXAMPLE
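As a stand-in for the example, here is a minimal sketch of one forward pass through a 2-input, 2-hidden-unit, 1-output feed-forward network. All weight values below are made-up assumptions chosen only to illustrate the computation:

```python
import math

def sigmoid(x):
    """Sigmoid activation: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W_hidden, W_out):
    """One feed-forward pass: inputs -> hidden layer -> output unit."""
    # Each hidden unit applies g to the weighted sum of the inputs.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_hidden]
    # The output unit applies g to the weighted sum of hidden activations.
    return sigmoid(sum(w * hi for w, hi in zip(W_out, h)))

W_hidden = [[0.5, -0.3],      # assumed weights into hidden unit 1
            [0.8, 0.2]]       # assumed weights into hidden unit 2
W_out = [1.0, -1.0]           # assumed weights into the output unit
y = forward([1.0, 0.0], W_hidden, W_out)
```

Because every unit uses the sigmoid, the output is always strictly between 0 and 1.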
CLASS III: RECURRENT NETWORKS

Layers: Input → Hidden → Output

• No restrictions on connections
• Behaviour is more difficult to predict/understand
• Applications: e.g. voice-to-text
MODELLING A NEURON

  ini = Σj Wj,i aj

• aj : activation value of unit j
• Wj,i : weight on the link from unit j to unit i
• ini : weighted sum of inputs to unit i
• ai : activation value of unit i
• g : activation function
ACTIVATION FUNCTIONS

• Stept(x) = 1 if x >= t, else 0
• Sign(x) = +1 if x >= 0, else -1
• Sigmoid(x) = 1/(1 + e^-x)
• Identity(x) = x
• ReLU(x) = max(0, x)
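The five activation functions above, written directly in Python:

```python
import math

def step(x, t=0.0):
    return 1 if x >= t else 0          # Step_t(x) = 1 if x >= t, else 0

def sign(x):
    return 1 if x >= 0 else -1         # Sign(x) = +1 if x >= 0, else -1

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # Sigmoid(x) = 1 / (1 + e^-x)

def identity(x):
    return x                           # Identity(x) = x

def relu(x):
    return max(0.0, x)                 # ReLU(x) = max(0, x)
```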
BOOLEAN FUNCTIONS
THE FIRST NEURAL NETWORKS

AND function: weights 1 and 1, Threshold(Y) = 2

  X1 --1--> Y <--1-- X2

  X1  X2 | Y
   1   1 | 1
   1   0 | 0
   0   1 | 0
   0   0 | 0
THE FIRST NEURAL NETWORKS

OR function: weights 2 and 2, Threshold(Y) = 2

  X1 --2--> Y <--2-- X2

  X1  X2 | Y
   1   1 | 1
   1   0 | 1
   0   1 | 1
   0   0 | 0
THE FIRST NEURAL NETWORKS

AND NOT function (X1 AND NOT X2): weights 2 and -1, Threshold(Y) = 2

  X1 --2--> Y <--(-1)-- X2

  X1  X2 | Y
   1   1 | 0
   1   0 | 1
   0   1 | 0
   0   0 | 0
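The three units above share one pattern: a weighted sum compared against a threshold of 2. They can all be built from a single helper (a sketch, not code from the original lecture):

```python
def threshold_unit(inputs, weights, threshold=2):
    """Fire (output 1) iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(x1, x2):      # weights 1, 1: only (1,1) reaches 2
    return threshold_unit([x1, x2], [1, 1])

def OR(x1, x2):       # weights 2, 2: any single 1 reaches 2
    return threshold_unit([x1, x2], [2, 2])

def AND_NOT(x1, x2):  # weights 2, -1: fires only for x1=1, x2=0
    return threshold_unit([x1, x2], [2, -1])
```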
G5BAIM Neural Networks

What can perceptrons represent?

[Figure: the four inputs (0,0), (0,1), (1,0), (1,1) plotted for AND
and for XOR; a single straight line separates the 1-outputs from the
0-outputs for AND, but no straight line can do so for XOR]

• Functions whose outputs can be separated in this way are called
  linearly separable
• Only linearly separable functions can be represented by a perceptron
TRAINING A PERCEPTRON

A perceptron with a fixed bias input of -1 (W = 0.3), input x
(W = 0.5), input y (W = -0.4), and threshold t = 0.0:

  I1  I2  I3 | Summation                             | Output
  -1   0   0 | (-1*0.3) + (0*0.5) + (0*-0.4) = -0.3  | 0
  -1   0   1 | (-1*0.3) + (0*0.5) + (1*-0.4) = -0.7  | 0
  -1   1   0 | (-1*0.3) + (1*0.5) + (0*-0.4) =  0.2  | 1
  -1   1   1 | (-1*0.3) + (1*0.5) + (1*-0.4) = -0.2  | 0
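The table can be reproduced directly, using the weights and threshold from the slide:

```python
# Weights from the slide: bias weight 0.3 (bias input fixed at -1),
# W_x = 0.5, W_y = -0.4; step threshold t = 0.0.
weights = [0.3, 0.5, -0.4]
t = 0.0

outputs = []
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = -1 * weights[0] + x * weights[1] + y * weights[2]
    outputs.append(1 if s >= t else 0)
    print(f"I = (-1, {x}, {y})  sum = {s:+.1f}  output = {outputs[-1]}")
```

Only the input (x, y) = (1, 0) produces a positive sum, so only that row fires.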
LEARNING

While epoch produces an error
    Present network with next inputs from epoch
    Err = T – O
    If Err <> 0 Then
        Wj = Wj + LR * Ij * Err
    End If
End While

Epoch: presentation of the entire training set to the neural
network. In the case of the AND function, an epoch consists of
four sets of inputs being presented to the network
(i.e. [0,0], [0,1], [1,0], [1,1]).

Training value, T: when training a network we present not only
the input but also the value we require the network to produce.
For example, if we present the network with [1,1] for the AND
function, the training value will be 1.

Error, Err: the amount by which the network's output differs from
the training value. For example, if we required the network to
output 0 and it output 1, then Err = 0 – 1 = -1.

Output from neuron, O: the output value from the neuron
Ij: inputs being presented to the neuron
Wj: weight from input neuron (Ij) to the output neuron
LR: the learning rate, which dictates how quickly the network
converges. It is set by experimentation; typically 0.1.
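Putting the pseudocode and the definitions together, here is a sketch of the full training loop applied to the AND function. The initial weights of 0 and LR = 0.1 are assumptions; the slides do not fix them:

```python
def train_perceptron(examples, lr=0.1, max_epochs=100):
    """Perceptron learning rule: Wj = Wj + LR * Ij * Err, with Err = T - O."""
    w = [0.0, 0.0, 0.0]                          # [bias weight, W1, W2], assumed start
    for _ in range(max_epochs):
        epoch_has_error = False
        for inputs, target in examples:          # one epoch = whole training set
            I = [-1] + list(inputs)              # fixed bias input of -1
            out = 1 if sum(wi * ii for wi, ii in zip(w, I)) >= 0 else 0
            err = target - out                   # Err = T - O
            if err != 0:
                epoch_has_error = True
                w = [wi + lr * ii * err for wi, ii in zip(w, I)]
        if not epoch_has_error:                  # "While epoch produces an error"
            break
    return w

AND_SET = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND_SET)
```

Because AND is linearly separable, the loop terminates with weights that classify all four inputs correctly.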
