This repository contains a solution to the NER task based on BERT without fine-tuning the BERT model (a minimal sketch of the architecture is given after the list below).
Two solutions are built on this architecture:
- BSNLP 2019 ACL workshop: a solution and paper for the multilingual shared task.
- The second-place solution of the Dialogue AGRR-2019 task.
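
The model names in the results table encode the head stacked on top of the (mostly frozen) BERT embeddings: a BiLSTM and/or attention encoder followed by a CRF or NCRF decoder, trained with IO or BIO tag schemes. Below is a minimal sketch of the BERT-BiLSTM-CRF variant with a frozen encoder, assuming the Hugging Face `transformers` and `pytorch-crf` packages; the repository's own classes, names, and hyperparameters differ.

```python
# Minimal sketch: frozen BERT features -> BiLSTM -> CRF tagger.
# Assumes the `transformers` and `pytorch-crf` packages; not the repo's actual code.
import torch
import torch.nn as nn
from torchcrf import CRF
from transformers import AutoModel


class BertBiLSTMCRF(nn.Module):
    def __init__(self, bert_name="bert-base-multilingual-cased",
                 num_tags=5,          # e.g. IO tags for PER/LOC/ORG/MISC + O (assumed)
                 lstm_hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        # BERT is used as a frozen feature extractor (no fine-tuning).
        for p in self.bert.parameters():
            p.requires_grad = False
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        with torch.no_grad():
            hidden = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.lstm(hidden)
        emissions = self.emissions(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi-decoded tag ids, one list per sentence.
        return self.crf.decode(emissions, mask=mask)


# Hypothetical usage with already-tokenized, label-aligned batches:
#   loss  = model(input_ids, attention_mask, tags)   # training step
#   paths = model(input_ids, attention_mask)         # predicted tag ids
```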
We did not search for the best hyperparameters and obtained the following results (a sketch of the token-level vs. span-level F1 metrics follows the table).
Model | Data set | Dev F1 (token) | Dev F1 (span) | Test F1 (token) | Test F1 (span) |
---|---|---|---|---|---|
**OURS** | | | | | |
M-BERTCRF-IO | FactRuEval | - | - | 0.8598 | 0.7676 |
M-BERTNCRF-IO | FactRuEval | - | - | 0.8603 | 0.7783 |
M-BERTBiLSTMCRF-IO | FactRuEval | - | - | 0.8780 | 0.8108 |
M-BERTBiLSTMCRF-BIO | FactRuEval | - | - | 0.8263 | 0.8051 |
M-BERTBiLSTMNCRF-IO | FactRuEval | - | - | 0.8594 | 0.7842 |
M-BERTAttnCRF-IO | FactRuEval | - | - | 0.8630 | 0.7879 |
M-BERTBiLSTMAttnCRF-IO | FactRuEval | - | - | 0.8851 | 0.8244 |
M-BERTBiLSTMAttnNCRF-IO | FactRuEval | - | - | 0.8609 | 0.7869 |
M-BERTBiLSTMAttnNCRF-fit_BERT-IO | FactRuEval | - | - | 0.8739 | 0.8201 |
- | - | - | - | - | - |
BERTBiLSTMCRF-IO | CoNLL-2003 | 0.9624 | 0.9273 | - | - |
BERTBiLSTMCRF-BIO | CoNLL-2003 | 0.9530 | 0.9236 | - | - |
B-BERTBiLSTMCRF-IO | CoNLL-2003 | 0.9635 | 0.9277 | - | - |
B-BERTBiLSTMCRF-BIO | CoNLL-2003 | 0.9536 | 0.9156 | - | - |
B-BERTBiLSTMAttnCRF-IO | CoNLL-2003 | 0.9571 | 0.9114 | - | - |
B-BERTBiLSTMAttnNCRF-IO | CoNLL-2003 | 0.9631 | 0.9197 | - | - |
**Current SOTA** | | | | | |
DeepPavlov-RuBERT-NER | FactRuEval | - | - | - | 0.8266 |
CSE | CoNLL-2003 | - | - | 0.931 | - |
BERT-LARGE | CoNLL-2003 | 0.966 | - | 0.928 | - |
BERT-BASE | CoNLL-2003 | 0.964 | - | 0.924 | - |
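
Here "F1 (token)" scores each predicted tag independently, while "F1 (span)" is entity-level F1 that credits only exact span and type matches; the -IO / -BIO suffixes in the model names denote the tagging scheme. A hedged sketch of the two metrics, assuming the `seqeval` and `scikit-learn` packages (the repository's own evaluation code may differ):

```python
# Token-level vs. span-level F1 on BIO-tagged predictions: a minimal sketch,
# assuming `seqeval` and `scikit-learn`; not the repo's actual metric code.
from seqeval.metrics import f1_score as span_f1    # entity (span) level
from sklearn.metrics import f1_score as token_f1   # per-token level

y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O",     "O"]]

# Span F1: an entity counts as correct only if its boundaries and type both match.
print("span F1: ", span_f1(y_true, y_pred))

# Token F1: each tag is scored independently (micro-averaged here; the repo
# may exclude the "O" class or average differently).
flat_true = [t for sent in y_true for t in sent]
flat_pred = [t for sent in y_pred for t in sent]
print("token F1:", token_f1(flat_true, flat_pred, average="micro"))
```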