Chapter I: Introduction to Deep Learning:
Fundamentals and Applications
Unit: Deep Learning
Introduction
https://www.youtube.com/watch?v=l82PxsKHxYc
Deep Learning Use case
https://www.artbreeder.com/
What is Deep Learning?
Definition: DL
Deep Learning is a subfield of machine learning and artificial intelligence (AI) that imitates the way humans gain certain types of knowledge.
What is Machine Learning?
Machine learning is a field of artificial intelligence that consists in programming a machine so that it learns to perform tasks by studying examples representing them.

[Figure: training data plotted as points, y versus x]
What is Machine Learning?
Model: f(x) = ax + b

[Figure: the line f(x) = ax + b fitted through the data points (a regression problem)]
For this, we program an optimization algorithm in the machine which tests different values of a and b until it obtains the combination that minimizes the distance between the model and the points.

ML: develop a model, using an optimization algorithm to minimize the errors between the model and our data.
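A minimal sketch of this idea: try many (a, b) combinations and keep the one with the smallest error. The data values and search ranges below are purely illustrative.

```python
import numpy as np

# Toy data roughly following y = 2x + 1 (hypothetical values)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

def error(a, b):
    """Mean squared distance between the model f(x) = a*x + b and the points."""
    return np.mean((a * x + b - y) ** 2)

# Test different values of a and b, keeping the combination that
# minimizes the distance between the model and the points.
best = min(
    ((a, b) for a in np.linspace(-5, 5, 101) for b in np.linspace(-5, 5, 101)),
    key=lambda p: error(*p),
)
print(best)  # close to (2, 1)
```

Brute-force search only works for a model with two parameters; gradient descent, introduced later in this chapter, scales to millions of parameters.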
Machine Learning
vs
Deep Learning
Why Now?
Neural networks date back decades, so why the resurgence?

1943: First artificial neuron
1957: Invention of the perceptron
1986: Multilayer perceptron
1990: Convolutional Neural Network (LeNet)
1997: Recurrent Neural Network (LSTMs)
2012: ImageNet competition
2014: Generative Adversarial Network
2022: ChatGPT

Three factors explain the resurgence:
I. Big Data: larger datasets; easier collection & storage
II. Hardware: Graphics Processing Units (GPUs); massively parallelizable
III. Software: improved techniques; new models; toolboxes
Deep Learning timeline: First Artificial Neuron
1943: First artificial neuron
Warren McCulloch & Walter Pitts
Neuron
The neuron receives its inputs along antenna-like structures called dendrites. Each of these incoming connections is dynamically strengthened or weakened based on how often it is used (this is how we learn new concepts!), and it is the strength of each connection that determines the contribution of the input to the neuron's output. After being weighted by the strength of their respective connections, the inputs are summed together in the cell body. This sum is then transformed into a new signal that is propagated along the cell's axon and sent off to other neurons.
Deep Learning timeline: First Artificial Neuron

Aggregation: z = w1·x1 + w2·x2 + w3·x3
Activation: y = f(z)

x1, x2, x3: inputs
w1, w2, w3: weights
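These two steps can be sketched for a single artificial neuron. The threshold activation and the bias term are assumptions of this sketch (the input and weight values are illustrative):

```python
import numpy as np

def neuron(x, w, b=0.0):
    """A single artificial neuron: weighted aggregation of the inputs,
    followed by a step activation (fires if the sum exceeds 0)."""
    z = np.dot(w, x) + b          # aggregation: z = w1*x1 + w2*x2 + w3*x3 + b
    return 1 if z > 0 else 0      # activation: threshold function f(z)

x = np.array([1.0, 0.0, 1.0])     # inputs x1, x2, x3
w = np.array([0.5, -0.2, 0.3])    # weights w1, w2, w3
print(neuron(x, w))  # z = 0.8 > 0, so the neuron fires: 1
```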
Deep Learning timeline: Perceptron
1957: Invention of the perceptron
Frank Rosenblatt
Perceptron (1957):
Learning algorithm
Frank Rosenblatt
[Figure: step activation function with outputs -1 and 1]
Hebb theory
When two biological neurons are jointly excited, they strengthen their synaptic link.
Perceptron (1957):
Train an artificial neuron on reference data (X, y) so that it reinforces its parameters W each time an input X is activated at the same time as the output y present in these data.
Perceptron (1957):
Train an artificial neuron on reference data (X, y) so that it reinforces its parameters W each time an input X is activated at the same time as the output y present in these data.

W = W + α(y − ŷ)X

y: reference output
ŷ: output produced by the neuron
X: input to the neuron
α: learning rate
Perceptron (1957):
W = W + α(y − ŷ)X

Example: for an input X whose reference output is y = 1, the neuron produces ŷ = 0.
The update is W = W + α(1 − 0)X = W + αX, so the weights of the active inputs are reinforced.
After the update, the neuron produces ŷ = 1, matching the reference output.
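The update rule above can be turned into a small training loop. This is a sketch: the bias trick (an always-on extra input), the learning rate, and the AND example are assumptions, not from the slides.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt's learning rule: W = W + lr * (y - y_hat) * x.
    A bias is folded in as an extra, always-on input."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append the bias input
    W = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(Xb, y):
            y_hat = 1 if np.dot(W, x_i) > 0 else 0   # step activation
            W += lr * (y_i - y_hat) * x_i            # reinforce on mismatch
    return W

# Toy task: learn the logical AND function (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
W = train_perceptron(X, y)
preds = [1 if np.dot(W, np.append(x, 1)) > 0 else 0 for x in X]
print(preds)  # [0, 0, 0, 1]
```

Note that when the prediction already matches the reference output, (y − ŷ) = 0 and the weights are left unchanged.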
Deep Learning timeline: Multilayer perceptron
1986: Multilayer perceptron
Geoffrey Hinton
Perceptron is a linear model
Limits of linear neurons:
● Linear neurons are easy to compute with, but they run into serious limitations.
● In fact, it can be shown that any feed-forward neural network consisting of only linear neurons can be expressed as a network with no hidden layers.
● This is problematic because hidden layers are what enable us to learn important features from the input data. In other words, in order to learn complex relationships, we need to use neurons that employ some sort of nonlinearity.
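The collapse of linear layers can be verified numerically: two stacked linear layers compute exactly the same function as a single layer whose weight matrix is the product of the two. The layer sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked linear layers (no activation): h = W1 @ x, y = W2 @ h
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

x = rng.normal(size=3)
y_two_layers = W2 @ (W1 @ x)

# The same mapping as a network with no hidden layer: W = W2 @ W1
y_one_layer = (W2 @ W1) @ x

print(np.allclose(y_two_layers, y_one_layer))  # True
```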
Deep Learning timeline: Multilayer perceptron
McCulloch and Pitts: by connecting several neurons together, it is possible to solve more complex problems than with a single one.
Artificial neural networks
[Figure: a multilayer perceptron with 3 neurons arranged in 2 layers (Layer 1, Layer 2); the output y3 separates the regions y3 = 1 and y3 = 0]
Forward propagation
Forward propagation (or forward pass) refers to the calculation and storage of the intermediate variables (including outputs) of a neural network, in order from the input layer to the output layer.
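A minimal forward-pass sketch for a 2-layer network, storing every intermediate value in a cache. The layer sizes, the sigmoid activation, and the random parameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(x, params):
    """Forward pass: compute and store intermediates, from input to output."""
    cache = {"x": x}
    cache["z1"] = params["W1"] @ x + params["b1"]          # layer 1 aggregation
    cache["a1"] = sigmoid(cache["z1"])                     # layer 1 activation
    cache["z2"] = params["W2"] @ cache["a1"] + params["b2"]  # layer 2 aggregation
    cache["a2"] = sigmoid(cache["z2"])                     # network output
    return cache

rng = np.random.default_rng(0)
params = {
    "W1": rng.normal(size=(4, 3)), "b1": np.zeros(4),
    "W2": rng.normal(size=(1, 4)), "b2": np.zeros(1),
}
out = forward(rng.normal(size=3), params)
print(out["a2"].shape)  # (1,)
```

Storing the intermediates (z1, a1, z2) matters because backpropagation, described next, reuses them to compute the gradients.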
Backpropagation
Backpropagation consists of determining how the output of the network varies as a function of the parameters (W, b) present in each layer, by chaining the gradients backward through the layers.
Backpropagation
Thanks to the gradients, we can then update the parameters (W, b) of each layer so that they minimize the error between the output of the model and the expected response.
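The gradient chain can be written out by hand for a tiny network. The scalar model y_hat = sigmoid(w2 · sigmoid(w1 · x)) with squared error is an illustrative assumption; the finite-difference check at the end confirms the chain rule.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x, y = 1.5, 1.0       # one training example (illustrative values)
w1, w2 = 0.4, -0.6    # the parameters of the two layers

# Forward pass (store intermediates)
a1 = sigmoid(w1 * x)
y_hat = sigmoid(w2 * a1)
E = (y_hat - y) ** 2

# Backward pass: chain the gradients from the error back to each parameter
dE_dyhat = 2 * (y_hat - y)
dyhat_dz2 = y_hat * (1 - y_hat)              # derivative of the sigmoid
dE_dw2 = dE_dyhat * dyhat_dz2 * a1
dE_dw1 = dE_dyhat * dyhat_dz2 * w2 * a1 * (1 - a1) * x

# Sanity check against a numerical gradient
eps = 1e-6
E_plus = (sigmoid(w2 * sigmoid((w1 + eps) * x)) - y) ** 2
print(abs((E_plus - E) / eps - dE_dw1) < 1e-4)  # True
```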
Gradient descent (1/2)
Gradient descent is an optimization algorithm used when training a machine learning model. It tweaks the model's parameters iteratively, moving against the gradient, in order to drive a given function down toward a (local) minimum.
Gradient descent (2/2)

θ = θ − α∇J(θ)

α is the learning rate
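Applied to the regression model f(x) = ax + b from the beginning of the chapter, the update θ = θ − α∇J(θ) gives a minimal training loop. The toy data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

# Toy data roughly following y = 2x + 1 (hypothetical values)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

a, b = 0.0, 0.0
alpha = 0.02  # learning rate

for _ in range(5000):
    err = a * x + b - y             # model error on each point
    grad_a = 2 * np.mean(err * x)   # dJ/da for J = mean squared error
    grad_b = 2 * np.mean(err)       # dJ/db
    a -= alpha * grad_a             # theta = theta - alpha * grad J(theta)
    b -= alpha * grad_b

print(round(a, 1), round(b, 1))  # close to 2.0 and 1.0
```

Unlike the brute-force search sketched earlier, this finds the same combination of a and b by following the gradient instead of enumerating candidates.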
In summary
Developing an artificial neural network:
1. Forward propagation
2. Cost function (error calculation)
3. Backward propagation
4. Gradient descent
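The four steps above can be assembled into one training loop. This is a minimal sketch on a single sigmoid neuron with a squared-error cost; the AND task, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy task: a single sigmoid neuron learns logical AND
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])

W = np.zeros(2)
b = 0.0
alpha = 1.0

for _ in range(10000):
    # 1. Forward propagation
    y_hat = sigmoid(X @ W + b)
    # 2. Cost function (error calculation)
    cost = np.mean((y_hat - y) ** 2)
    # 3. Backward propagation (chain rule gives the gradients)
    dz = 2 * (y_hat - y) * y_hat * (1 - y_hat) / len(X)
    dW = X.T @ dz
    db = np.sum(dz)
    # 4. Gradient descent
    W -= alpha * dW
    b -= alpha * db

print((sigmoid(X @ W + b) > 0.5).astype(int))  # [0 0 0 1]
```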
Nonlinear neurons:
- Sigmoid neuron (outputs between 0 and 1)
- Tanh neuron (outputs between -1 and 1)
- Rectified Linear Unit (ReLU) neuron
Deep Learning timeline: CNN
1990: Convolutional Neural Network (LeNet)
Deep Learning timeline: RNN
1997: Recurrent Neural Network (LSTMs)
Deep Learning timeline: ImageNet competition
2012: ImageNet competition, the rise of deep learning
Deep Learning timeline: GAN
2014: Generative Adversarial Network
Deep Learning timeline: ChatGPT
2022: ChatGPT
Deep Learning Use case: Conclusion
https://www.artbreeder.com/
Conclusion
https://www.artbreeder.com/
RNN, CNN, GAN, Autoencoder