Unit V Deep Learning

This document covers the fundamentals of Recurrent Neural Networks (RNNs), including backpropagation through time and Long Short-Term Memory (LSTM) architectures, and addresses the problems of exploding and vanishing gradients. It also introduces several types of autoencoders, including denoising, sparse, and contractive autoencoders. The content is designed for a 9-hour instructional unit.


UNIT 5 RECURRENT NEURAL NETWORKS 9 hours

Basic concepts of Recurrent Neural Networks (RNNs), backpropagation through time, Long Short-Term Memory (LSTM) architectures, the problem of exploding and vanishing gradients, and basics of word embeddings.
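To make the listed topics concrete, the following is a minimal sketch of a single LSTM forward step in plain NumPy. It is not part of the original unit material; the function name lstm_step, the stacked parameter layout, and the dimensions are chosen purely for illustration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # x_t: input at time t, shape (input_dim,)
    # h_prev, c_prev: previous hidden and cell states, shape (hidden_dim,)
    # W, U, b: parameters for the four gates stacked along the first axis,
    #          shapes (4*hidden_dim, input_dim), (4*hidden_dim, hidden_dim), (4*hidden_dim,)
    hidden_dim = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b                  # all four gate pre-activations at once
    i = sigmoid(z[0*hidden_dim:1*hidden_dim])     # input gate
    f = sigmoid(z[1*hidden_dim:2*hidden_dim])     # forget gate
    o = sigmoid(z[2*hidden_dim:3*hidden_dim])     # output gate
    g = np.tanh(z[3*hidden_dim:4*hidden_dim])     # candidate cell update
    c_t = f * c_prev + i * g                      # new cell state (additive path)
    h_t = o * np.tanh(c_t)                        # new hidden state
    return h_t, c_t

# Example usage with arbitrary sizes
rng = np.random.default_rng(0)
input_dim, hidden_dim = 8, 16
x_t = rng.normal(size=input_dim)
h_prev = np.zeros(hidden_dim)
c_prev = np.zeros(hidden_dim)
W = rng.normal(scale=0.1, size=(4*hidden_dim, input_dim))
U = rng.normal(scale=0.1, size=(4*hidden_dim, hidden_dim))
b = np.zeros(4*hidden_dim)
h_t, c_t = lstm_step(x_t, h_prev, c_prev, W, U, b)

The additive cell-state update c_t = f * c_prev + i * g is the mechanism usually cited when discussing exploding and vanishing gradients: gradients can flow along the cell state across many time steps without being repeatedly squashed, unlike in a vanilla RNN.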
AUTOENCODERS
Autoencoders, denoising autoencoders, sparse autoencoders, contractive autoencoders.
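As a companion sketch for this section, the following minimal denoising autoencoder assumes PyTorch is available; the layer sizes, noise level, and random input batch are placeholders chosen only for illustration.

import torch
import torch.nn as nn

# A minimal denoising autoencoder: corrupt the input with Gaussian noise,
# then train the network to reconstruct the clean input.
class DenoisingAutoencoder(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(32, 784)                    # a batch of placeholder clean inputs
x_noisy = x + 0.3 * torch.randn_like(x)    # corrupted version fed to the encoder
optimizer.zero_grad()
recon = model(x_noisy)
loss = loss_fn(recon, x)                   # reconstruction is compared to the clean input
loss.backward()
optimizer.step()

A sparse autoencoder would instead add a penalty on the hidden activations (for example an L1 term), and a contractive autoencoder would penalize the Frobenius norm of the encoder's Jacobian with respect to the input; both are small changes to the loss above.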
