
Neural Latent Extractive Document Summarization

Xingxing Zhang, Mirella Lapata, Furu Wei, Ming Zhou


Abstract
Extractive summarization models require sentence-level labels, which are usually created with rule-based methods since most summarization datasets have only document-summary pairs. These labels might be suboptimal. We propose a latent variable extractive model in which sentences are viewed as latent variables, and sentences with activated variables are used to infer gold summaries. During training, the loss can come directly from gold summaries. Experiments on the CNN/Daily Mail dataset show that our latent extractive model outperforms a strong extractive baseline trained on rule-based labels and also performs competitively with several recent models.
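
A minimal sketch of the latent-variable idea described in the abstract, written in PyTorch under loose assumptions: a mean-pooled sentence encoder, a GRU document encoder, and a unigram-F1 reward standing in for the ROUGE-based signal against the gold summary. The names used here (LatentExtractor, unigram_f1, reinforce_step) are illustrative, not the paper's actual implementation.

import torch
import torch.nn as nn

class LatentExtractor(nn.Module):
    """Assigns each sentence a Bernoulli latent variable z_i (select / skip)."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.doc_rnn = nn.GRU(dim, dim, batch_first=True)  # contextualize sentences across the document
        self.score = nn.Linear(dim, 1)                     # per-sentence selection logit

    def forward(self, sents):
        # sents: (num_sents, max_len) word ids; mean-pool word embeddings into sentence vectors
        vecs = self.embed(sents).mean(dim=1)               # (num_sents, dim)
        ctx, _ = self.doc_rnn(vecs.unsqueeze(0))           # (1, num_sents, dim)
        return torch.sigmoid(self.score(ctx.squeeze(0))).squeeze(-1)  # p(z_i = 1)

def unigram_f1(selected_tokens, gold_tokens):
    # Toy stand-in for ROUGE: unigram-overlap F1 between the sampled
    # extract and the gold summary (id 0 is assumed to be padding).
    sel = {t for t in selected_tokens if t != 0}
    gold = {t for t in gold_tokens if t != 0}
    if not sel or not gold:
        return 0.0
    overlap = len(sel & gold)
    p, r = overlap / len(sel), overlap / len(gold)
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def reinforce_step(model, sents, gold_tokens, optimizer):
    probs = model(sents)
    dist = torch.distributions.Bernoulli(probs)
    z = dist.sample()                                      # sample the latent selection variables
    selected = [t for i in range(len(z)) if z[i] > 0
                for t in sents[i].tolist()]
    reward = unigram_f1(selected, gold_tokens)             # signal comes from the gold summary, not rule-based labels
    loss = -(reward * dist.log_prob(z).sum())              # REINFORCE / score-function estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward

# Toy usage: a 5-sentence document with 8 tokens per sentence.
model = LatentExtractor(vocab_size=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
doc = torch.randint(1, 100, (5, 8))
gold = torch.randint(1, 100, (12,)).tolist()
print(reinforce_step(model, doc, gold, opt))

Because the selection variables are discrete, the sampled selection's log-probability is scaled by the reward its extract earns against the gold summary, so the gradient flows from a summary-level signal rather than from heuristic sentence labels.
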
Anthology ID:
D18-1088
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October–November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
779–784
URL:
https://aclanthology.org/D18-1088
DOI:
10.18653/v1/D18-1088
Cite (ACL):
Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural Latent Extractive Document Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 779–784, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Neural Latent Extractive Document Summarization (Zhang et al., EMNLP 2018)
PDF:
https://aclanthology.org/D18-1088.pdf
Data
CNN/Daily Mail