
Exploring Cross-Lingual Transfer of Morphological Knowledge In Sequence-to-Sequence Models

Huiming Jin, Katharina Kann


Abstract
Multi-task training is an effective method to mitigate the data sparsity problem. It has recently been applied for cross-lingual transfer learning for paradigm completion—the task of producing inflected forms of lemmata—with sequence-to-sequence networks. However, it is still unclear how the model transfers knowledge across languages, as well as whether and which information is shared. To investigate this, we propose a set of data-dependent experiments using an existing encoder-decoder recurrent neural network for the task. Our results show that the performance gains indeed surpass a pure regularization effect and that knowledge about language and morphology can be transferred.
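The abstract frames paradigm completion as character-level sequence-to-sequence transduction trained jointly over languages. Below is a minimal sketch of a plausible data encoding for such multi-task training, assuming the common convention of prefixing a language token and morphological tags to the lemma characters; the make_example helper, the tag inventory, and the Spanish/Portuguese examples are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a cross-lingual paradigm-completion training instance,
# assuming the common character-level seq2seq encoding: language tag
# plus morphological tags plus lemma characters as the source sequence,
# inflected-form characters as the target sequence.

def make_example(language, lemma, tags, inflected_form):
    """Encode one training instance as (source tokens, target tokens)."""
    source = [f"<{language}>"] + [f"<{t}>" for t in tags] + list(lemma)
    target = list(inflected_form)
    return source, target

# In multi-task training, high- and low-resource languages share one
# model; the language token lets the network condition on the language.
print(make_example("es", "hablar", ["V", "IND", "PRS", "2", "SG"], "hablas"))
print(make_example("pt", "falar", ["V", "IND", "PRS", "2", "SG"], "falas"))
```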
Anthology ID:
W17-4110
Volume:
Proceedings of the First Workshop on Subword and Character Level Models in NLP
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Manaal Faruqui, Hinrich Schuetze, Isabel Trancoso, Yadollah Yaghoobzadeh
Venue:
SCLeM
Publisher:
Association for Computational Linguistics
Pages:
70–75
URL:
https://aclanthology.org/W17-4110
DOI:
10.18653/v1/W17-4110
Cite (ACL):
Huiming Jin and Katharina Kann. 2017. Exploring Cross-Lingual Transfer of Morphological Knowledge In Sequence-to-Sequence Models. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 70–75, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Exploring Cross-Lingual Transfer of Morphological Knowledge In Sequence-to-Sequence Models (Jin & Kann, SCLeM 2017)
PDF:
https://aclanthology.org/W17-4110.pdf