LECTURE - 1
Workshop
Neural Networks
SHAPEAI
GETTING STARTED
WITH NEURAL NETWORKS
HISTORY
Neural networks were inspired in the early 1940s by researchers who tried to bring ideas from neuroscience to computers.

COMMONLY USED IN
CNNs are commonly used in image search services, self-driving cars, recommendation systems, fraud detection, and many more systems.

ALSO USED IN
Neural networks are also successful in many other tasks, such as voice recognition and natural language processing. There is just so much to neural networks.
NEURAL
NETWORKS
Neurons are connected to and receive electrical signals from other
neurons.
Neurons process input signals and can be activated.
Then the question arose: can we take this biological idea of how humans learn and apply it to machines as well?
ARTIFICIAL NEURAL NETWORKS
A mathematical model for learning, inspired by biological neural networks.
ARTIFICIAL NEURAL
NETWORKS
Models a mathematical function from inputs to outputs, based on the structure and parameters of the network.
Allows the network's parameters to be learned from data.
Biological neurons correspond to artificial neurons / units.
These neurons can be connected to one another.
Using some input attributes (x1, x2), predict death on the Titanic.
h(x1, x2) = w0 + w1 x1 + w2 x2
In order to determine this hypothesis function, we just need to determine what these weights should be.
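As a minimal sketch of this hypothesis in Python (the weight values below are made-up placeholders, not learned values):

# Linear hypothesis h(x1, x2) = w0 + w1*x1 + w2*x2
# The weights here are arbitrary placeholders, not learned values.
def h(x1, x2, w0=-1.0, w1=0.5, w2=0.5):
    return w0 + w1 * x1 + w2 * x2

print(h(1, 0))  # -0.5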
ACTIVATION FUNCTION
At the end of the day we need to do some classification, i.e. will a person live or die if they were on the Titanic? Hence we have to define some type of threshold.

Step Function
g(x) = 1 if x >= 0; 0 if x < 0
[Plot: step function of w · x, jumping from 0 to 1 at 0]
ACTIVATION FUNCTION
There will be times when we don't want 0 or 1 but something in between, like the probability of a person's death on the Titanic.

Sigmoid Function
g(x) = 1 / (1 + e^(-x))
[Plot: sigmoid of w · x, rising smoothly from 0 to 1]
ACTIVATION FUNCTION
This is another activation function in which, if w · x is positive, it remains unchanged, while if it is negative it becomes 0.

ReLU Function
g(x) = max(0, x)
[Plot: ReLU of w · x]
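Here is a minimal Python sketch of the three activation functions introduced above, with the step function using the threshold-at-zero convention from the slide:

import math

def step(x):
    # Step function: 1 if x >= 0, else 0
    return 1 if x >= 0 else 0

def sigmoid(x):
    # Sigmoid: a smooth value between 0 and 1, usable as a probability
    return 1 / (1 + math.exp(-x))

def relu(x):
    # ReLU: positive inputs pass through unchanged, negatives become 0
    return max(0.0, x)

for g in (step, sigmoid, relu):
    print(g.__name__, g(-1.0), g(0.5))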
h(x1, x2) = g( w0 + w1 x1 + w2 x2 )
The activation function can be thought of as another function g() that is applied to the result of all of this computation.
[Diagram: inputs x1 (weight w1) and x2 (weight w2), together with a bias weight w0, feed into a unit computing g( w0 + w1 x1 + w2 x2 )]
We will represent the above mathematical formula using a structure like this.
OR FUNCTION
Taking the simple example of the OR function to understand neural networks.
[Diagram: a unit with bias weight -1 and input weights 1 and 1, computing g( -1 + 1*x1 + 1*x2 ) with the step function; the number line -1, 0, 1 marks where the weighted sum lands]

x1 = 0, x2 = 0: g(-1 + 0 + 0) = g(-1) = 0
x1 = 1, x2 = 0: g(-1 + 1 + 0) = g(0) = 1
x1 = 0, x2 = 1: g(-1 + 0 + 1) = g(0) = 1
x1 = 1, x2 = 1: g(-1 + 1 + 1) = g(1) = 1
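Combining the unit with the step activation, the weights from the slides (bias -1, input weights 1 and 1) reproduce the full OR truth table:

def step(x):
    return 1 if x >= 0 else 0

def or_unit(x1, x2, w0=-1, w1=1, w2=1):
    # Single unit computing OR: g(-1 + 1*x1 + 1*x2)
    return step(w0 + w1 * x1 + w2 * x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", or_unit(x1, x2))
# prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 1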
CLASSIFICATION
[Diagram: inputs humidity and pressure feed into the network; output: whether it will rain or not]
REGRESSION
[Diagram: inputs advertising and webinar feed into the network; output: sales, a continuous value]
COMPLEX NEURAL
NETWORKS
DEEP NEURAL
NETWORKS
FEED-FORWARD
NEURAL NETWORKS
Neural networks that have connections in only one direction, such that inputs move from the input layer to the hidden layers and then to the output. Signals propagate in only one direction.
[Diagram: input → network → output]
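A minimal sketch of one forward pass through such a network; the layer sizes, the sigmoid activation, and the weight values here are illustrative assumptions, not taken from the slides:

import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit: activation of (bias + weighted sum of the inputs)
    return [sigmoid(b + sum(w * x for w, x in zip(ws, inputs)))
            for ws, b in zip(weights, biases)]

x = [0.5, 0.8]                        # input layer
hidden_w = [[0.1, 0.4], [-0.3, 0.2]]  # two hidden units, illustrative weights
hidden_b = [0.0, 0.1]
out_w = [[0.7, -0.5]]                 # one output unit
out_b = [0.2]

hidden = layer(x, hidden_w, hidden_b)  # input -> hidden
output = layer(hidden, out_w, out_b)   # hidden -> output, one direction only
print(output)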
LOSS FUNCTION
A loss function quantifies how unhappy you would be if you used w to make a prediction on x when the correct output is y.
Loss(x, y, w)
TYPES OF LOSSES
Residual loss: y - y'
Squared loss: (y - y')^2
Absolute loss: |y - y'|
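The three losses in Python, for a single true value y and prediction y':

def residual_loss(y, y_pred):
    return y - y_pred           # signed error

def squared_loss(y, y_pred):
    return (y - y_pred) ** 2    # penalizes large errors more heavily

def absolute_loss(y, y_pred):
    return abs(y - y_pred)      # penalizes errors proportionally

y, y_pred = 3.0, 2.5
print(residual_loss(y, y_pred), squared_loss(y, y_pred), absolute_loss(y, y_pred))
# prints 0.5 0.25 0.5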
W1 VS W2 PLOT
[Plot: the loss plotted over the two weights w1 and w2]
GRADIENT DESCENT
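A minimal gradient descent sketch for the linear unit above, minimizing squared loss; the toy dataset and learning rate are illustrative assumptions:

# Toy dataset: inputs (x1, x2) and targets y, made up for illustration
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0, 0.0]  # w0 (bias), w1, w2
lr = 0.1             # learning rate

for _ in range(1000):
    for (x1, x2), y in data:
        y_pred = w[0] + w[1] * x1 + w[2] * x2
        err = y_pred - y  # gradient of squared loss wrt the prediction (up to a factor of 2)
        # Move each weight a small step in the direction that lowers the loss
        w[0] -= lr * err
        w[1] -= lr * err * x1
        w[2] -= lr * err * x2

print(w)  # weights approach roughly [0.25, 0.5, 0.5] on this data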
THANKS
SHAPEAI