
The wake-sleep algorithm[1] is an unsupervised learning algorithm for deep generative models, especially Helmholtz machines.[2] The algorithm is similar to the expectation-maximization algorithm,[3] and optimizes the model likelihood for observed data.[4] The name of the algorithm derives from its use of two learning phases, the “wake” phase and the “sleep” phase, which are performed alternately.[1] It can be conceived as a model for learning in the brain,[5] but has also been applied in machine learning.[6]

Figure: layers of the neural network, with R denoting the recognition weights and G the generative weights used by the wake-sleep algorithm between adjacent layers.

Description

The goal of the wake-sleep algorithm is to find a hierarchical representation of observed data.[7] In a graphical representation of the algorithm, data is presented to the bottom layer, while higher layers form progressively more abstract representations. Between each pair of layers are two sets of weights: recognition weights, which define how representations are inferred from the data, and generative weights, which define how those representations relate to the data.[8]
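
As a rough illustration of this layout, the sketch below (layer sizes and variable names are hypothetical) sets up a recognition matrix and a generative matrix between each pair of adjacent layers of binary units:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [6, 4, 2]   # bottom (data) layer first, progressively more abstract layers above

# Recognition weights R[i] infer layer i+1 from layer i (bottom-up);
# generative weights G[i] reconstruct layer i from layer i+1 (top-down).
R = [rng.normal(scale=0.1, size=(layer_sizes[i + 1], layer_sizes[i]))
     for i in range(len(layer_sizes) - 1)]
G = [rng.normal(scale=0.1, size=(layer_sizes[i], layer_sizes[i + 1]))
     for i in range(len(layer_sizes) - 1)]
g_top = np.zeros(layer_sizes[-1])  # generative biases defining the prior over the top layer
```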

Training

Training consists of two phases – the “wake” phase and the “sleep” phase. It has been proven that this learning algorithm is convergent.[3]
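
In Helmholtz-machine terms, the quantity the two phases jointly improve can be written as a variational lower bound (a negative free energy) on the log-likelihood of a data vector d; the notation here is illustrative, with θ the generative weights and φ the recognition weights:

\[ \log p_\theta(d) \;\ge\; \sum_{\alpha} Q_\phi(\alpha \mid d)\,\bigl[\log p_\theta(\alpha, d) - \log Q_\phi(\alpha \mid d)\bigr] \]

The wake phase raises this bound with respect to the generative weights θ for the sampled representations, while the sleep phase trains the recognition weights φ on samples ("fantasies") drawn from the generative model.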

The "wake" phase

Neurons are fired by the recognition connections (propagating activity from what would be input toward what would be output). The generative connections (leading from outputs back to inputs) are then modified to increase the probability that they would recreate the correct activity in the layer below – closer to the actual data from sensory input.[1]
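
A minimal sketch of one wake-phase update for a stack of binary stochastic layers, assuming the R, G and g_top structure from the sketch above (hidden- and visible-layer biases are omitted for brevity, and the delta-rule form follows the description above rather than any particular implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wake_phase(d, R, G, g_top, lr=0.05, rng=np.random.default_rng()):
    """d: binary data vector for the bottom layer; R, G: recognition/generative matrices."""
    # Bottom-up pass: fire each layer from the recognition connections.
    states = [np.asarray(d, dtype=float)]
    for W in R:
        p = sigmoid(W @ states[-1])
        states.append((rng.random(p.shape) < p).astype(float))

    # Delta-rule updates: nudge the generative connections toward
    # recreating the sampled activity in the layer below.
    for i, W in enumerate(G):
        pred = sigmoid(W @ states[i + 1])            # top-down prediction of layer i
        W += lr * np.outer(states[i] - pred, states[i + 1])
    g_top += lr * (states[-1] - sigmoid(g_top))      # top layer learns its own prior
    return states
```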

The "sleep" phase

The process is reversed in the “sleep” phase – neurons are fired by the generative connections, while the recognition connections are modified to increase the probability that they would recreate the correct activity in the layer above – further from the actual data from sensory input.[1]
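
The mirror-image sleep-phase update, under the same assumptions: a "fantasy" is sampled top-down from the generative connections, and the recognition connections are nudged toward recovering each higher layer from the layer below it.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sleep_phase(R, G, g_top, lr=0.05, rng=np.random.default_rng()):
    # Top-down pass: fire each layer from the generative connections,
    # starting with a sample from the top layer's prior.
    top = (rng.random(g_top.shape) < sigmoid(g_top)).astype(float)
    states = [top]
    for W in reversed(G):
        p = sigmoid(W @ states[0])
        states.insert(0, (rng.random(p.shape) < p).astype(float))

    # Delta-rule updates: nudge the recognition connections toward
    # recreating the sampled activity in the layer above.
    for i, W in enumerate(R):
        pred = sigmoid(W @ states[i])                # bottom-up prediction of layer i+1
        W += lr * np.outer(states[i + 1] - pred, states[i])
    return states
```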

Extensions

Since the recognition network is limited in its flexibility, it might not be able to approximate the posterior distribution of latent variables well.[6] To better approximate the posterior distribution, it is possible to employ importance sampling, with the recognition network as the proposal distribution. This improved approximation of the posterior distribution also improves the overall performance of the model.[6]
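
A minimal sketch of the importance-sampling idea, with the recognition network as the proposal distribution. The helper functions sample_q, log_q and log_p_joint are hypothetical, and the estimator below is the generic log-mean-exp importance-weighted bound rather than the exact procedure of the cited paper:

```python
import numpy as np

def importance_weighted_estimate(d, sample_q, log_q, log_p_joint, k=5):
    """Estimate log p(d) from k latent samples drawn from the recognition network q(. | d)."""
    log_w = np.array([
        log_p_joint(alpha, d) - log_q(alpha, d)
        for alpha in (sample_q(d) for _ in range(k))
    ])
    # log-mean-exp of the importance weights; larger k gives a tighter estimate.
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```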


References

  1. Hinton, Geoffrey E.; Dayan, Peter; Frey, Brendan J.; Neal, Radford (1995-05-26). "The wake-sleep algorithm for unsupervised neural networks". Science. 268 (5214): 1158–1161. Bibcode:1995Sci...268.1158H. doi:10.1126/science.7761831. PMID 7761831. S2CID 871473.
  2. Dayan, Peter. "Helmholtz Machines and Wake-Sleep Learning" (PDF). Retrieved 2015-11-01.
  3. Ikeda, Shiro; Amari, Shun-ichi; Nakahara, Hiroyuki (1998). "Convergence of the Wake-Sleep Algorithm". Advances in Neural Information Processing Systems. 11. MIT Press.
  4. Frey, Brendan J.; Hinton, Geoffrey E.; Dayan, Peter (1996-05-01). "Does the wake-sleep algorithm produce good density estimators?" (PDF). Advances in Neural Information Processing Systems.
  5. Katayama, Katsuki; Ando, Masataka; Horiguchi, Tsuyoshi (2004-04-01). "Models of MT and MST areas using wake–sleep algorithm". Neural Networks. 17 (3): 339–351. doi:10.1016/j.neunet.2003.07.004. PMID 15037352.
  6. Bornschein, Jörg; Bengio, Yoshua (2014-06-10). "Reweighted Wake-Sleep". arXiv:1406.2751 [cs.LG].
  7. Maei, Hamid Reza (2007-01-25). "Wake-sleep algorithm for representational learning". University of Montreal. Retrieved 2011-11-01.
  8. Neal, Radford M.; Dayan, Peter (1996-11-24). "Factor Analysis Using Delta-Rule Wake-Sleep Learning" (PDF). University of Toronto. Retrieved 2015-11-01.