Autoencoders
Presented By:
Parag Choudhury (210710007032)
Chao Asheem Gogoi (210710007012)
Jintu Nath (210710007019)
What are Autoencoders?
★ An unsupervised machine learning algorithm.
★ Compresses data and reduces its dimensionality.
Components of Autoencoders
★ Encoder.
★ Code/Bottleneck.
★ Decoder.
[Figure: Basic architecture of an autoencoder]
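To make the three components concrete, here is a minimal PyTorch sketch of the encoder, code/bottleneck, and decoder (layer sizes and the use of ReLU are illustrative assumptions, not from the slides):

```python
import torch.nn as nn

# Minimal autoencoder sketch: a 784-dim input (e.g. a flattened 28x28
# image) compressed to a 32-dim code. All sizes are illustrative.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: compresses the input down to the code/bottleneck
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        # Decoder: reconstructs the input from the code
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        code = self.encoder(x)     # bottleneck representation
        return self.decoder(code)  # reconstruction of x
```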
Properties of Autoencoders
Unsupervised
Data-specific
Lossy
Autoencoders and PCA
★ Both PCA and autoencoders are used for dimensionality reduction.
★ PCA reduces dimensionality by projecting the data onto a new set of axes (the principal components).
★ Autoencoders reduce dimensionality by using an encoder-decoder network.
★ If an autoencoder has only one hidden layer, a linear activation function, and mean squared error (MSE) loss, the autoencoder becomes equivalent to PCA.
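As a sketch of this equivalence (not from the slides; the data, dimensions, and training schedule are illustrative assumptions), a one-hidden-layer linear autoencoder trained with MSE learns the same subspace that PCA finds:

```python
import torch
import torch.nn as nn

# PCA-equivalent case: one hidden layer, linear activations, MSE loss.
# Dimensions (100 features -> 10 components) are illustrative.
x = torch.randn(512, 100)               # toy data, assumed zero-mean

linear_ae = nn.Sequential(
    nn.Linear(100, 10, bias=False),     # linear encoder: project to 10 dims
    nn.Linear(10, 100, bias=False),     # linear decoder: project back
)
opt = torch.optim.Adam(linear_ae.parameters(), lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(linear_ae(x), x)  # reconstruction MSE
    loss.backward()
    opt.step()
# After training, the 10-dim code spans (approximately) the same
# subspace as the top 10 principal components of x.
```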
Regularization in autoencoders
Regularization -> prevents overfitting and improves generalization.
Overfitting:
Training data - high accuracy
Test data - low accuracy
Generalization: how well a model performs on new, unseen data.
Removing certain neurons -> increases linearity (a simpler model that is less prone to overfitting).
Two main types of Regularization
L1 Regularization (Lasso Regression)
L2 Regularization (Ridge Regression)
L1 Regularization (Lasso Regression):
Adds the "absolute value of magnitude" of the coefficients as a penalty term to the loss function:

$$\text{Loss} = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2 + \lambda\sum_{j=1}^{n}\lvert w_j\rvert$$

where m = number of examples, n = number of features, y_i = actual target value, ŷ_i = predicted target value, w_j = parameter (coefficient), λ = regularization coefficient.
L2 Regularization (Ridge Regression):
Adds the "squared magnitude" of the coefficients as a penalty term to the loss function:

$$\text{Loss} = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2 + \lambda\sum_{j=1}^{n} w_j^2$$

where m = number of examples, n = number of features, y_i = actual target value, ŷ_i = predicted target value, w_j = parameter (coefficient), λ = regularization coefficient.
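As a sketch of how each penalty term attaches to a training loss in practice (the model, data, and λ below are illustrative assumptions, not from the slides):

```python
import torch

# Hypothetical model and data, for illustration only
model = torch.nn.Linear(20, 1)
x, y = torch.randn(64, 20), torch.randn(64, 1)

mse = torch.nn.functional.mse_loss(model(x), y)

lam = 1e-3  # regularization coefficient (illustrative)
l1_penalty = sum(p.abs().sum() for p in model.parameters())   # Lasso term
l2_penalty = sum((p ** 2).sum() for p in model.parameters())  # Ridge term

loss_l1 = mse + lam * l1_penalty  # L1-regularized loss
loss_l2 = mse + lam * l2_penalty  # L2-regularized loss
```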
Denoising autoencoders
Remove noise from data.
Create a corrupted copy of the input by introducing some noise.
This prevents the autoencoder from simply copying the input to the output without learning features about the data.
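A minimal training-step sketch of this idea, assuming the Autoencoder class sketched earlier and Gaussian noise as the corruption (the noise level is an illustrative choice):

```python
import torch

def denoising_step(model, x, optimizer, noise_std=0.3):
    # Corrupt the input, but reconstruct the *clean* target x
    x_noisy = x + noise_std * torch.randn_like(x)
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x_noisy), x)
    loss.backward()
    optimizer.step()
    return loss.item()
```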
Sparse Autoencoder
A Sparse Autoencoder (SAE) is a type of autoencoder that enforces
sparsity on the hidden layer.
Instead of reducing dimensions like traditional autoencoders, it limits the
number of active neurons per input.
Most neurons are inactive; only a few "light up" at a time.
Think of it like a team where only the specialists work on a task, not everyone.
Here is a diagram of an SAE, where only the blue neurons are active and the rest remain in a neutral state for a particular training input.
Advantages of Sparse Autoencoder
Improved Generalization
Reduced Computational Complexity
Lower Risk of Overfitting
Feature Extraction
How Do We Make It Sparse?
Add a rule (called an L1 penalty) to the training process.
What it does: Encourages most neuron outputs to be zero.
Result: Only a few neurons are active for each input.
Analogy: Like paying a small fine every time too many team members join in, which keeps the group small.
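A minimal sketch of this L1 activity penalty, again assuming the Autoencoder class sketched earlier (the penalty weight is an illustrative choice):

```python
import torch

def sparse_loss(model, x, sparsity_weight=1e-4):
    code = model.encoder(x)            # hidden activations
    recon = model.decoder(code)
    mse = torch.nn.functional.mse_loss(recon, x)
    l1_activity = code.abs().mean()    # pushes most activations toward zero
    return mse + sparsity_weight * l1_activity
```

Note that the penalty here is on the hidden *activations*, not the weights: this is what leaves only a few neurons active per input.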
Contractive Autoencoder
Encoder compresses the noisy image into a lower-dimensional representation.
Decoder reconstructs the original image from the compressed representation.
Working of CAE
Additional regularization constraints enforce robustness.
A contractive penalty is applied to how the neurons respond to small changes in the input.
This ensures that each neuron does not overreact to slight input variations, making the encoding more stable and robust.
Sensitivity

$$\|J_f(x)\|_F^2 = \sum_{ij}\left(\frac{\partial h_i(x)}{\partial x_j}\right)^2$$

where h_i is the i-th output (hidden) unit and x_j is the j-th input component.
Loss Function

$$\mathcal{L} = \text{Reconstruction Error} + \lambda\,\|J_f(x)\|_F^2$$

where λ is the coefficient (penalty factor) for sensitivity and stability.
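For a single sigmoid encoder layer the Jacobian term has a closed form, which the following sketch uses (the decoder, dimensions, and λ are illustrative assumptions, not from the slides):

```python
import torch

def contractive_loss(W, b, x, decoder, lam=1e-4):
    # Single sigmoid encoder layer: h = sigmoid(x @ W.T + b),
    # with W of shape (hidden, input) and x of shape (batch, input)
    h = torch.sigmoid(x @ W.T + b)
    recon = decoder(h)
    mse = torch.nn.functional.mse_loss(recon, x)
    # For sigmoid units, dh_i/dx_j = h_i(1 - h_i) * W_ij, so
    # ||J||_F^2 = sum_i (h_i(1 - h_i))^2 * sum_j W_ij^2
    dh = (h * (1 - h)) ** 2            # (batch, hidden)
    w_sq = (W ** 2).sum(dim=1)         # (hidden,)
    jac_norm = (dh * w_sq).sum(dim=1).mean()
    return mse + lam * jac_norm
```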
Thank You