Using Focal Loss to Fight Shallow Heuristics: An Empirical Analysis of Modulated Cross-Entropy in Natural Language Inference
Updated Feb 9, 2023 - Python
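For reference, focal loss modulates standard cross-entropy by a factor of (1 - p_t)^gamma, so confidently classified (easy) examples contribute less to the gradient and training focuses on hard cases. A minimal PyTorch sketch, assuming a multi-class setup; the class name and the gamma default are illustrative, not this repository's exact implementation:

```python
import torch
import torch.nn.functional as F

class FocalLoss(torch.nn.Module):
    """Focal loss: cross-entropy scaled by (1 - p_t)^gamma, which
    down-weights easy examples so hard ones dominate the gradient."""
    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")  # -log(p_t)
        p_t = torch.exp(-ce)                                     # recover p_t
        return ((1.0 - p_t) ** self.gamma * ce).mean()
```

With gamma = 0 this reduces exactly to ordinary cross-entropy, which is what makes it a clean knob for the kind of ablation the title describes.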
Fourth assignment in the 'Deep Learning for Texts and Sequences' course by Prof. Yoav Goldberg at Bar-Ilan University.
JSNLI (Japanese SNLI) dataset packaged for the Hugging Face datasets library.
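Loading such a dataset typically reduces to a single `load_dataset` call. A sketch, assuming a hypothetical hub path and SNLI-style fields (`premise`, `hypothesis`, integer `label`); substitute the real identifier:

```python
from datasets import load_dataset

# "username/jsnli" is a placeholder hub path, not the actual repo name.
jsnli = load_dataset("username/jsnli")

example = jsnli["train"][0]
# SNLI-style fields are assumed here: 0 = entailment, 1 = neutral,
# 2 = contradiction.
print(example["premise"], example["hypothesis"], example["label"])
```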
An attention model for Natural Language Inference.
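As a point of reference, the core building block such models rely on is attention pooling: score each timestep, normalize the scores, and take a weighted sum. A minimal PyTorch sketch (the class name and shapes are illustrative):

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Score each timestep, softmax the scores, and return the weighted
    sum: a single attention-pooled sentence vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, dim)
        weights = torch.softmax(self.score(states), dim=1)  # (batch, seq_len, 1)
        return (weights * states).sum(dim=1)                # (batch, dim)

pooled = AttentionPooling(dim=256)(torch.randn(4, 12, 256))
print(pooled.shape)  # torch.Size([4, 256])
```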
Semantic similarity on the SNLI dataset using BERT embeddings as well as TF-IDF-weighted pooled BERT embeddings.
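A common baseline for this setup is to pool BERT's token states into a sentence vector and compare pairs with cosine similarity. A sketch assuming `bert-base-uncased` and plain mean pooling; the TF-IDF variant would replace the uniform mean with per-token TF-IDF weights:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str) -> torch.Tensor:
    # Mean-pool the final hidden states over all tokens
    # (no padding in a single-sentence batch).
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

a = embed("A man is playing a guitar.")
b = embed("Someone plays music.")
print(float(torch.cosine_similarity(a, b, dim=0)))
```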
An investigation of how sampling strategies affect selective prediction performance in multi-task learning.
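Selective prediction lets a model abstain on inputs where it is unsure, trading coverage for accuracy. The standard baseline thresholds the maximum softmax probability; a minimal sketch (the threshold and the -1 abstention marker are illustrative, and this says nothing about the repository's specific sampling strategies):

```python
import torch

def selective_predict(logits: torch.Tensor, threshold: float = 0.9):
    """Predict only when the max softmax probability clears the
    threshold; mark abstentions with label -1."""
    probs = torch.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    preds[conf < threshold] = -1
    return preds, conf

logits = torch.tensor([[4.0, 0.1, 0.2], [0.4, 0.5, 0.6]])
preds, conf = selective_predict(logits)
print(preds.tolist())  # [0, -1]: the second example abstains
```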
The objective here is to study the plausibility of attention mechanisms in automatic language processing on an NLI (natural language inference) task, using a transformer (BERT) architecture.
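Studies of this kind usually start by extracting the attention maps themselves; the `transformers` library exposes them via the `output_attentions` flag. A sketch with an arbitrary premise/hypothesis pair:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("A dog runs.", "An animal is moving.", return_tensors="pt")
outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len)
last_layer = outputs.attentions[-1]
print(last_layer.shape)
```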
In this repository, we tackle Natural Language Inference (NLI) on the SNLI dataset. Methods such as BiLSTM, BiGRU, attention models, and logistic regression are experimented with.
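A typical sentence-encoder baseline for SNLI encodes premise and hypothesis with a shared BiLSTM and classifies combined features. A sketch in PyTorch, with InferSent-style feature combination; the hyperparameters and layer names are illustrative, not this repository's exact code:

```python
import torch
import torch.nn as nn

class BiLSTMNLI(nn.Module):
    """Encode premise and hypothesis with a shared BiLSTM, then classify
    the concatenated [p; h; |p - h|; p * h] features into 3 NLI labels."""
    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.classifier = nn.Linear(8 * hidden, 3)

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        output, _ = self.encoder(self.embed(tokens))
        return output.max(dim=1).values  # max-pool over time: (batch, 2*hidden)

    def forward(self, premise: torch.Tensor, hypothesis: torch.Tensor):
        p, h = self.encode(premise), self.encode(hypothesis)
        features = torch.cat([p, h, (p - h).abs(), p * h], dim=-1)
        return self.classifier(features)

model = BiLSTMNLI(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (4, 12)),
               torch.randint(1, 10_000, (4, 9)))
print(logits.shape)  # torch.Size([4, 3])
```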