
Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer

Huiyuan Lai, Antonio Toral, Malvina Nissim


Abstract
Scarcity of parallel data causes formality style transfer models to have scarce success in preserving content. We show that fine-tuning pre-trained language (GPT-2) and sequence-to-sequence (BART) models boosts content preservation, and that this is possible even with limited amounts of parallel data. Augmenting these models with rewards that target style and content, the two core aspects of the task, we achieve a new state-of-the-art.
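
To make the fine-tuning step concrete, below is a minimal sketch of fine-tuning a pre-trained BART checkpoint on parallel informal-to-formal pairs with the Hugging Face Transformers library. This is not the authors' released code (see the repository linked below); the checkpoint name, learning rate, and example sentence pairs are illustrative assumptions, and the reward-based augmentation described in the abstract is not shown.

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Load a pre-trained BART checkpoint and its tokenizer
# (checkpoint choice is an assumption for this sketch).
model_name = "facebook/bart-base"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Made-up parallel pairs for illustration: informal source -> formal target.
pairs = [
    ("gotta go, ttyl", "I have to leave now; I will talk to you later."),
    ("he's kinda smart i guess", "He is fairly intelligent, in my opinion."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for informal, formal in pairs:
    inputs = tokenizer(informal, return_tensors="pt")
    labels = tokenizer(formal, return_tensors="pt").input_ids
    # Standard sequence-to-sequence cross-entropy loss on the formal target.
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: rewrite an informal sentence in a formal style with beam search.
model.eval()
out = model.generate(
    **tokenizer("wanna grab food l8r?", return_tensors="pt"),
    num_beams=4,
    max_length=40,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))

In practice one would train on a full parallel corpus such as GYAFC (listed under Data below) rather than a handful of pairs, and then add the style and content rewards on top of this supervised objective.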
Anthology ID:
2021.acl-short.62
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
484–494
URL:
https://aclanthology.org/2021.acl-short.62
DOI:
10.18653/v1/2021.acl-short.62
Cite (ACL):
Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2021. Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 484–494, Online. Association for Computational Linguistics.
Cite (Informal):
Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer (Lai et al., ACL-IJCNLP 2021)
PDF:
https://aclanthology.org/2021.acl-short.62.pdf
Video:
https://aclanthology.org/2021.acl-short.62.mp4
Code
laihuiyuan/Pre-trained-formality-transfer
Data
GYAFC