Generating sentences using a dynamic canvas
H Shah, B Zheng, D Barber - Proceedings of the AAAI Conference on Artificial Intelligence, 2018 - ojs.aaai.org
Abstract
We introduce the Attentive Unsupervised Text (W)riter (AUTR), a word-level generative model for natural language. It uses a recurrent neural network with a dynamic attention and canvas memory mechanism to iteratively construct sentences. By viewing the state of the memory at intermediate stages and where the model is placing its attention, we gain insight into how it constructs sentences. We demonstrate that AUTR learns a meaningful latent representation for each sentence, and achieves competitive log-likelihood lower bounds whilst being computationally efficient. It is effective at generating and reconstructing sentences, as well as imputing missing words.
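The iterative canvas mechanism can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: it assumes a DRAW-style additive write, and every name and hyperparameter (CanvasWriter, n_steps, hidden_dim, and so on) is an assumption for illustration. At each step an RNN, conditioned on a latent sentence code z, attends over sentence positions and writes a word-logit update into the attended canvas slots; after the final step, a softmax over each canvas row gives the word distribution at that position.

```python
import torch
import torch.nn as nn

class CanvasWriter(nn.Module):
    """Illustrative AUTR-style decoder: an RNN iteratively writes word
    logits onto a fixed-length canvas through soft attention. Names and
    hyperparameters are assumptions, not taken from the paper."""

    def __init__(self, vocab_size, max_len, latent_dim=64, hidden_dim=256, n_steps=8):
        super().__init__()
        self.n_steps = n_steps
        self.rnn = nn.GRUCell(latent_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim, max_len)      # where to write on the canvas
        self.write = nn.Linear(hidden_dim, vocab_size)  # what (word logits) to write

    def forward(self, z):
        # z: (batch, latent_dim) latent representation of the sentence
        batch = z.size(0)
        h = z.new_zeros(batch, self.rnn.hidden_size)
        canvas = z.new_zeros(batch, self.attn.out_features, self.write.out_features)
        for _ in range(self.n_steps):
            h = self.rnn(z, h)                           # latent code conditions every step
            alpha = torch.softmax(self.attn(h), dim=-1)  # soft attention over positions
            delta = self.write(h)                        # word-logit update for this step
            # additively write the update into the attended canvas slots
            canvas = canvas + alpha.unsqueeze(-1) * delta.unsqueeze(1)
        return canvas  # (batch, max_len, vocab_size); softmax per row gives p(word)

# Generation under these assumptions: sample z from a standard normal prior,
# run the writer, and read off one word per canvas position.
model = CanvasWriter(vocab_size=10000, max_len=20)
z = torch.randn(4, 64)
words = model(z).argmax(dim=-1)  # (4, 20) word indices, one sentence per row
```

Training such a model would maximise a VAE-style evidence lower bound, with an encoder network producing q(z | x); this is consistent with the log-likelihood lower bounds the abstract reports, though the paper's exact write and attention functions may differ from this additive sketch.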