Lecture 5
Autoencoder
James Zou
October 10, 2016
Recap: feedforward and convnets
Main take-aways:
• Composition. Units/layers of a NN are modular and can be composed to form complex architectures.
[Figure: feedforward network: input → hidden units → output]
Recurrent neural network

output:  y = V·h + c
hidden units:  h = σ(W·x + b)
input:  x
Recurrent neural network
[Figure: network diagram: input → hidden units → output]
Recurrent neural network

hidden update:  h_t = σ(W·x_t + U·h_{t-1} + b)
output:  y_t = V·h_t + c
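A minimal NumPy sketch of this recurrence, using tanh as the nonlinearity σ; the function and variable names (rnn_forward, W, U, V, b, c) are illustrative assumptions, not notation fixed by the slides:

```python
import numpy as np

def rnn_forward(x_seq, W, U, V, b, c, h0):
    """Vanilla RNN over a sequence: h_t = tanh(W x_t + U h_{t-1} + b), y_t = V h_t + c."""
    h = h0
    ys = []
    for x_t in x_seq:
        h = np.tanh(W @ x_t + U @ h + b)   # the same W, U, b are reused at every time step
        ys.append(V @ h + c)               # linear read-out from the hidden state
    return np.array(ys), h

# Toy usage: 5 time steps, 3-dim inputs, 4 hidden units, 2 outputs
rng = np.random.default_rng(0)
x_seq = rng.normal(size=(5, 3))
W, U, V = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))
ys, h_T = rnn_forward(x_seq, W, U, V, np.zeros(4), np.zeros(2), np.zeros(4))
```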
What does an RNN remind you of?
Vanilla RNN: lacks long-term memory
LSTM network
LSTM adds a memory cell c_t alongside the hidden state.
• Parameters of LSTM:
  • W_f, b_f: forget gate
  • W_c, b_c: new memory (candidate cell state)
  • W_i, b_i: weight of new memory (input gate)
  • W_o, b_o: output gate
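A minimal NumPy sketch of a single step of the standard LSTM update using these four weight groups; the names (lstm_step, Wf, Wi, Wc, Wo) are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, Wf, Wi, Wc, Wo, bf, bi, bc, bo):
    """One LSTM step; each weight matrix acts on the concatenated [h_prev, x]."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z + bf)          # forget gate: how much old memory to keep
    i = sigmoid(Wi @ z + bi)          # input gate: weight of the new memory
    c_tilde = np.tanh(Wc @ z + bc)    # candidate new memory
    c = f * c_prev + i * c_tilde      # updated memory cell
    o = sigmoid(Wo @ z + bo)          # output gate
    h = o * np.tanh(c)                # new hidden state
    return h, c
```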
LSTM application: enhancer/TF prediction
• Output: 919-dimensional binary vector for the presence of TF binding/chromatin marks
• Bi-directional LSTM
• Similar convolutional architecture as before
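A hedged PyTorch sketch of this kind of architecture (convolution over the one-hot DNA sequence, a bidirectional LSTM on top, 919 sigmoid outputs); the layer sizes and pooling choices are illustrative assumptions, not the published model:

```python
import torch
import torch.nn as nn

class ConvBiLSTM(nn.Module):
    """One-hot DNA sequence -> conv motif detectors -> bidirectional LSTM -> 919 sigmoid outputs."""

    def __init__(self, n_out=919, n_filters=320, hidden=320):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=26)   # 4 input channels: A, C, G, T
        self.pool = nn.MaxPool1d(13)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_out)                # 2x hidden: forward + backward

    def forward(self, x):                  # x: (batch, 4, sequence_length)
        h = torch.relu(self.conv(x))
        h = self.pool(h)                   # (batch, n_filters, reduced_length)
        h = h.transpose(1, 2)              # LSTM expects (batch, time, features)
        h, _ = self.lstm(h)
        h = h.mean(dim=1)                  # average over sequence positions
        return torch.sigmoid(self.out(h))  # probability for each of the 919 TF/chromatin marks

model = ConvBiLSTM()
probs = model(torch.randn(2, 4, 1000))     # toy batch: 2 sequences of length 1000
```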
Learning a nonlinear mapping from inputs to outputs:
• Feedforward
• Convnets
• RNN, LSTM

Predicting:
• TF binding
• gene expression
• disease status from images
• risk from SNPs
• protein structure
• …
Deep unsupervised learning
[Figure: autoencoder: x → encoding h(x) → decoding x̂]
Autoencoder

x̂ = σ(W′·h(x) + b′)
h(x) = σ(W·x + b)

W, W′ = arg min_{W, W′} ‖x − x̂‖
What is wrong with this picture?
h(x) can just copy x exactly!
Overcomplete: need to impose sparsity on h(x).
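A minimal PyTorch sketch of this objective, with an L1 penalty on h(x) as one common way to impose the sparsity just mentioned; layer sizes, the sigmoid nonlinearity, and the penalty weight are assumptions:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, d_in=784, d_hidden=1024):   # overcomplete: d_hidden > d_in
        super().__init__()
        self.enc = nn.Linear(d_in, d_hidden)        # h(x) = sigma(W x + b)
        self.dec = nn.Linear(d_hidden, d_in)        # x_hat = sigma(W' h(x) + b')

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        x_hat = torch.sigmoid(self.dec(h))
        return x_hat, h

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 784)                             # toy batch of inputs in [0, 1]
x_hat, h = model(x)
loss = ((x - x_hat) ** 2).mean() + 1e-3 * h.abs().mean()   # reconstruction + L1 sparsity on h
loss.backward()
opt.step()
```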
Denoising autoencoder

Corrupt the input x with independent noise to obtain x̃.
Train the autoencoder to reconstruct the clean x from the corrupted input x̃ (minimize ‖x − x̂‖, where x̂ is decoded from h(x̃)).
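A minimal sketch of the denoising variant, assuming additive Gaussian corruption: the encoder sees a noisy input, but the reconstruction error is still measured against the clean x (layer sizes and noise level are assumptions):

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 1024)                  # same encoder/decoder shapes as the sketch above
dec = nn.Linear(1024, 784)

x = torch.rand(32, 784)                     # clean inputs
x_noisy = x + 0.3 * torch.randn_like(x)     # corrupt with independent Gaussian noise
# (another common corruption: randomly zero out a fraction of the input entries)

h = torch.sigmoid(enc(x_noisy))             # encode the corrupted input
x_hat = torch.sigmoid(dec(h))
loss = ((x - x_hat) ** 2).mean()            # reconstruct the ORIGINAL, clean x
loss.backward()
```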
Illustration of denoising autoencoder
[Figure: corrupted inputs mapped back to reconstructions x̂]
Deep autoencoder example
[Figure: original images vs. DAE vs. PCA reconstructions]

Deep autoencoder
[Figure: PCA vs. deep autoencoder comparison]