Using Focal Loss to Fight Shallow Heuristics: An Empirical Analysis of Modulated Cross-Entropy in Natural Language Inference
Updated Feb 9, 2023 - Python
BERT and RoBERTa models fine-tuned on the MNLI dataset, optimized for binary entailment/non-entailment classification. Their performance on figurative language is also explored.
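The "modulated cross-entropy" named in the title refers to focal loss (Lin et al., 2017), which scales each example's cross-entropy by (1 - p_t)^gamma so that easy, confidently-classified examples (often the ones solvable by shallow heuristics) contribute less to the gradient. A minimal PyTorch sketch of the loss; the function name and the gamma default are illustrative assumptions, not code taken from the repository:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0) -> torch.Tensor:
    """Focal loss: cross-entropy modulated by (1 - p_t)^gamma.

    gamma = 0 recovers plain cross-entropy; larger gamma
    down-weights examples the model already classifies confidently.
    """
    log_probs = F.log_softmax(logits, dim=-1)                 # log p for every class
    log_pt = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # log p_t of the true class
    pt = log_pt.exp()                                         # p_t, the true-class probability
    return (-((1.0 - pt) ** gamma) * log_pt).mean()
```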
Mitigating a language model's over-confidence in NLI predictions on Multi-NLI hypotheses with randomized word order, using PAWS (paraphrase) and Winogrande (anaphora).
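A sketch of the word-order probe this description refers to: shuffling a hypothesis's tokens destroys its syntax, so a well-calibrated model should not stay confident in its original prediction on the scrambled input. The function name and the whitespace tokenization are illustrative assumptions, not the repository's code:

```python
import random

def shuffle_hypothesis(hypothesis: str, seed: int = 0) -> str:
    """Return the hypothesis with its words in random order.

    A model relying mainly on lexical overlap will often keep a
    high-confidence prediction even on this scrambled input.
    """
    words = hypothesis.split()        # naive whitespace tokenization (assumption)
    rng = random.Random(seed)         # seeded for reproducible probes
    rng.shuffle(words)
    return " ".join(words)

# Example: "A man is playing a guitar" -> e.g. "guitar a man playing is A"
```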