
Long short-term memory

From Wikipedia, the free encyclopedia



The Long Short-Term Memory (LSTM) cell can process data sequentially and keep its hidden state through
time.

Long short-term memory (LSTM) units are units of a recurrent neural network (RNN). An RNN
composed of LSTM units is often called an LSTM network. A common LSTM unit is composed of
a cell, an input gate, an output gate and a forget gate. The cell remembers values over
arbitrary time intervals and the three gates regulate the flow of information into and out of the
cell.
LSTM networks are well-suited to classifying, processing and making predictions based on time
series data, since there can be lags of unknown duration between important events in a time
series. LSTMs were developed to deal with the exploding and vanishing gradient problems that
can be encountered when training traditional RNNs. Relative insensitivity to gap length is an
advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in
numerous applications[citation needed].

Contents

1 History
2 Architectures
3 Variants
  3.1 LSTM with a forget gate
    3.1.1 Variables
    3.1.2 Activation functions
  3.2 Peephole LSTM
  3.3 Peephole convolutional LSTM
4 Training
  4.1 CTC score function
5 Applications
6 See also
7 External links
8 References

History
LSTM was proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber[1] and improved in
2000 by Felix Gers' team.[2]
Among other successes, LSTM achieved record results in natural language text
compression,[3] unsegmented connected handwriting recognition,[4] and won
the ICDAR handwriting competition (2009). LSTM networks were a major component of a
network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech
dataset (2013).[5]
As of 2016, major technology companies including Google, Apple, and Microsoft were using
LSTM as fundamental components in new products.[6] For example, Google used LSTM for
speech recognition on the smartphone,[7][8] for the smart assistant Allo[9] and for Google
Translate.[10][11] Apple uses LSTM for the "Quicktype" function on the iPhone[12][13] and
for Siri.[14] Amazon uses LSTM for Amazon Alexa.[15]
In 2017, Microsoft reported reaching 95.1% recognition accuracy on the Switchboard corpus,
incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long
short-term memory".[16]

Architectures
There are several architectures of LSTM units. A common architecture is composed of a
memory cell, an input gate, an output gate and a forget gate.
An LSTM cell takes an input and stores it for some period of time. This is equivalent to applying
the identity function $f(x) = x$ to the input. Because the derivative of the identity function is constant,
when an LSTM network is trained with backpropagation through time, the gradient does not
vanish.
The activation function of the LSTM gates is often the logistic function. Intuitively, the input
gate controls the extent to which a new value flows into the cell, the forget gate controls the
extent to which a value remains in the cell and the output gate controls the extent to which the
value in the cell is used to compute the output activation of the LSTM unit.
There are connections into and out of the LSTM gates, a few of which are recurrent. The weights
of these connections, which need to be learned during training, determine how the gates operate.
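
As an illustration of that gating behaviour, the following sketch (plain Python with NumPy; all numbers are hypothetical) shows how gate activations in the range (0, 1), produced by the logistic function, blend a newly proposed value with the value already stored in the cell:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical scalar example: one cell, one time step.
prev_cell = 0.8               # value currently remembered by the cell
candidate = -0.5              # new value proposed from the current input

forget_gate = sigmoid(2.0)    # ~0.88: keep most of the old value
input_gate = sigmoid(-1.0)    # ~0.27: let in only a little of the new value
output_gate = sigmoid(0.5)    # ~0.62: expose part of the cell to the output

cell = forget_gate * prev_cell + input_gate * candidate
output = output_gate * np.tanh(cell)
print(cell, output)           # the cell mostly retains its old value
```

Driving the forget gate toward 1 and the input gate toward 0 makes the cell hold its value indefinitely, which is what lets the network bridge long time lags.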

Variants
In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, the output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated.

LSTM with a forget gate
The compact forms of the equations for the forward pass of an LSTM unit with a forget gate
are:[1][2]

$$\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}$$

where the initial values are $c_0 = 0$ and $h_0 = 0$ and the operator $\circ$ denotes the Hadamard
product (element-wise product). The subscript $t$ indexes the time step.


Variables

 $x_t \in \mathbb{R}^d$: input vector to the LSTM unit
 $f_t \in \mathbb{R}^h$: forget gate's activation vector
 $i_t \in \mathbb{R}^h$: input gate's activation vector
 $o_t \in \mathbb{R}^h$: output gate's activation vector
 $h_t \in \mathbb{R}^h$: output vector of the LSTM unit
 $c_t \in \mathbb{R}^h$: cell state vector
 $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^h$: weight matrices and bias vector parameters which need to be learned during training

where the superscripts $d$ and $h$ refer to the number of input features and number of
hidden units, respectively.
Activation functions

 $\sigma_g$: sigmoid function.
 $\sigma_c$: hyperbolic tangent function.
 $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper[which?] suggests, $\sigma_h(x) = x$.[17][18]
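
Putting the forward-pass equations, variables and activation functions together, the following NumPy sketch implements one time step of a forget-gate LSTM (the function and parameter names are illustrative, not taken from any library):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM unit with a forget gate.

    x_t: input vector, shape (d,)
    h_prev, c_prev: previous output and cell state, shape (h,)
    W, U, b: dicts keyed by 'f', 'i', 'o', 'c' holding the input weights
             (h, d), recurrent weights (h, h) and biases (h,).
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_hat = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])    # candidate cell value
    c_t = f_t * c_prev + i_t * c_hat                            # Hadamard products
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Toy usage: d = 4 input features, h = 3 hidden units, c_0 = h_0 = 0.
rng = np.random.default_rng(0)
d, h = 4, 3
W = {k: rng.normal(size=(h, d)) for k in 'fioc'}
U = {k: rng.normal(size=(h, h)) for k in 'fioc'}
b = {k: np.zeros(h) for k in 'fioc'}
h_t, c_t = lstm_step(rng.normal(size=d), np.zeros(h), np.zeros(h), W, U, b)
```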

Peephole LSTM

A peephole LSTM unit with input (i.e. $i$), output (i.e. $o$), and forget (i.e. $f$) gates. Each
of these gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural
network: that is, they compute an activation (using an activation function) of a weighted
sum. $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates,
at time step $t$. The 3 exit arrows from the memory cell $c$ to the 3
gates $i$, $o$ and $f$ represent the peephole connections. These peephole connections actually
denote the contributions of the activation of the memory cell $c$ at time step $t-1$, i.e. the
contribution of $c_{t-1}$ (and not $c_t$, as the picture may suggest). In other words, the
gates $i$, $o$ and $f$ calculate their activations at time step $t$ (i.e.,
respectively, $i_t$, $o_t$ and $f_t$) also considering the activation of the memory cell $c$ at time
step $t-1$, i.e. $c_{t-1}$. The single left-to-right arrow exiting the memory cell is not a peephole
connection and denotes $c_t$. The little circles containing a $\times$ symbol represent an element-
wise multiplication between their inputs. The big circles containing an S-like curve represent the
application of a differentiable function (like the sigmoid function) to a weighted sum. There are
many other kinds of LSTMs as well.[19]

The figure on the right is a graphical representation of an LSTM unit with peephole
connections (i.e. a peephole LSTM).[17][18] Peephole connections allow the gates to access the
constant error carousel (CEC), whose activation is the cell state.[20] In this variant, $h_{t-1}$ is not used; $c_{t-1}$ is
used instead in most places.
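
For concreteness, a commonly used form of the peephole forward pass, consistent with the description above (the gates read $c_{t-1}$ in place of $h_{t-1}$), is the following sketch:

$$\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c x_t + b_c) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}$$

Here the $U_q$ matrices carry the peephole connections from the cell state to the gates.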

Peephole convolutional LSTM

Peephole convolutional LSTM.[21] The $*$ denotes the convolution operator.
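
A typical peephole convolutional formulation, sketched here for illustration (with $V_f$, $V_i$, $V_o$ denoting assumed peephole weights applied element-wise), replaces the matrix products with convolutions:

$$\begin{aligned}
f_t &= \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \circ c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \circ c_{t-1} + b_i) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c) \\
o_t &= \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \circ c_t + b_o) \\
h_t &= o_t \circ \sigma_h(c_t)
\end{aligned}$$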

Training
To minimize LSTM's total error on a set of training sequences, iterative gradient
descent such as backpropagation through time can be used to change each weight
in proportion to the derivative of the error with respect to it. A problem with
using gradient descent for standard RNNs is that error
gradients vanish exponentially quickly with the size of the time lag between
important events. This is due to $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than
1.[22][23] With LSTM units, however, when error values are back-propagated from the
output, the error remains in the unit's memory. This "error carousel" continuously
feeds error back to each of the gates until they learn to cut off the value. Thus,
regular backpropagation is effective at training an LSTM unit to remember values for
long durations.
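
The vanishing effect is easy to reproduce numerically: repeatedly multiplying an error vector by a recurrent matrix whose spectral radius is below 1 drives it toward zero, whereas a cell whose effective recurrent weight (the forget gate) stays at 1 passes the error through unchanged. A small illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random recurrent matrix rescaled so its spectral radius is 0.9 (< 1).
W = rng.normal(size=(8, 8))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

error = np.ones(8)
for _ in range(100):              # back-propagate through 100 time steps
    error = W.T @ error
print(np.linalg.norm(error))      # tiny: the gradient has effectively vanished

# The constant error carousel: an effective recurrent weight of 1
# preserves the error over the same horizon (1.0 ** 100 == 1.0).
```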
LSTM can also be trained by a combination of artificial evolution for weights to the
hidden units, and pseudo-inverse or support vector machines for weights to the
output units.[24] In reinforcement learning applications LSTM can be trained by policy
gradient methods, evolution strategies or genetic algorithms[citation needed].
CTC score function
Many applications use stacks of LSTM RNNs[25] and train them by connectionist
temporal classification (CTC)[26] to find an RNN weight matrix that maximizes the
probability of the label sequences in a training set, given the corresponding input
sequences. CTC achieves both alignment and recognition.
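
As an illustration of this setup, the sketch below pairs a (randomly initialized) LSTM with PyTorch's built-in CTC loss; every shape and name here is illustrative rather than prescribed by the references:

```python
import torch
import torch.nn as nn

# time steps, batch size, classes (incl. blank), hidden units, input features
T, N, C, H, F = 50, 4, 20, 64, 13

lstm = nn.LSTM(input_size=F, hidden_size=H)   # stacked LSTMs would use num_layers > 1
proj = nn.Linear(H, C)
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, F)                      # input sequences
targets = torch.randint(1, C, (N, 10))        # label sequences (class 0 is the blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

out, _ = lstm(x)                              # (T, N, H)
log_probs = proj(out).log_softmax(dim=-1)     # (T, N, C) per-step log-probabilities
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                               # gradients for weight updates
```

CTC marginalizes over all alignments between the T-step outputs and the shorter label sequence, so no frame-level alignment has to be supplied.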

Applications
Applications of LSTM include:

 Robot control[27]
 Time series prediction[28]
 Speech recognition[29][30][31]
 Rhythm learning[18]
 Music composition[32]
 Grammar learning[33][17][34]
 Handwriting recognition[35][36]
 Human action recognition[37]
 Sign language translation[38]
 Protein homology detection[39]
 Predicting subcellular localization of proteins[40]
 Time series anomaly detection[41]
 Several prediction tasks in the area of business process management[42]
 Prediction in medical care pathways[43]
 Semantic parsing[44]
 Object co-segmentation[45][46]

LSTM has Turing completeness in the sense that given enough network units it can
compute any result that a conventional computer can compute, provided it has the
proper weight matrix, which may be viewed as its program[citation needed][further explanation needed].
