An Investigation of Noise in Morphological Inflection

Adam Wiemerslage, Changbing Yang, Garrett Nicolai, Miikka Silfverberg, Katharina Kann


Abstract
With a growing focus on morphological inflection systems for languages where high-quality data is scarce, training data noise is a serious but so far largely ignored concern. We aim to close this gap by investigating the types of noise encountered within a pipeline for truly unsupervised morphological paradigm completion and its impact on morphological inflection systems: First, we propose an error taxonomy and annotation pipeline for inflection training data. Then, we compare the effect of different types of noise on multiple state-of-the-art inflection models. Finally, we propose a novel character-level masked language modeling (CMLM) pretraining objective and explore its impact on the models’ resistance to noise. Our experiments show that various architectures are impacted differently by different types of noise, but encoder-decoders tend to be more robust to noise than models trained with a copy bias. CMLM pretraining helps transformers, but has a lower impact on LSTMs.
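This page does not include the paper's code. As a rough illustration of what a character-level masked language modeling (CMLM) pretraining objective involves, the Python sketch below constructs a masked input/target pair for a single word, assuming a BERT-style random masking scheme applied to characters. The function name cmlm_example, the mask token, and the 15% default masking rate are assumptions for illustration, not the authors' implementation.

import random

MASK = "<mask>"

def cmlm_example(word, mask_prob=0.15, seed=None):
    """Randomly mask characters in a word, BERT-style but at the
    character level. Returns the masked character sequence and a
    dict mapping masked positions to their original characters,
    which a model would be trained to predict."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, ch in enumerate(word):
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets[i] = ch
        else:
            masked.append(ch)
    return masked, targets

# Example: one pretraining pair for the word "inflection"
masked, targets = cmlm_example("inflection", mask_prob=0.3, seed=0)
print(masked)   # characters with a random subset replaced by '<mask>'
print(targets)  # {position: original character} for the masked slots

In an actual pretraining setup, the model would receive the masked sequence and be trained to recover the original characters at the masked positions before being fine-tuned on inflection data.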
Anthology ID: 2023.findings-acl.207
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3351–3365
URL: https://aclanthology.org/2023.findings-acl.207
DOI: 10.18653/v1/2023.findings-acl.207
Cite (ACL): Adam Wiemerslage, Changbing Yang, Garrett Nicolai, Miikka Silfverberg, and Katharina Kann. 2023. An Investigation of Noise in Morphological Inflection. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3351–3365, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): An Investigation of Noise in Morphological Inflection (Wiemerslage et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.207.pdf