NuNER is a family of SOTA foundation and zero-shot models for entity recognition.
- NuNER 2.0: numind/NuNER-v2.0 (MIT) - the most powerful English NER model
- NuNER multilingual: numind/NuNER-multilingual-v0.1 (MIT) - the most powerful multilingual NER model
- NuNER 1.0: numind/NuNER-v1.0 (MIT) - the main model from our paper
- NuNER BERT: numind/NuNER-BERT-v1.0 (MIT) - the model used in the TadNER section of our paper
- GLiNER NuNerZero span: numind/NuNER_Zero-span (MIT) - +4.5% more powerful than GLiNER Large v2.1
- NuNerZero: numind/NuNER_Zero (MIT) - +3% more powerful than GLiNER Large v2.1, better suited to detecting multi-word entities
- NuNerZero 4k context: numind/NuNER_Zero-4k (MIT) - a 4k-long-context version of NuNerZero
The last two models are word-level GLiNER models and support entities of unlimited length.
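Because word-level models score whole words, a multi-word entity comes out as a run of adjacent word predictions with the same label, which can then be merged into one span. Here is a minimal sketch of that merging step, assuming predictions as `{"start", "end", "label"}` character-offset dicts; this shape is an illustrative assumption, not the exact output format of any library:

```python
# Illustrative sketch: merge adjacent word-level predictions that share a
# label into single multi-word entities. The {"start", "end", "label"} dict
# shape is an assumption for this example, not a library's exact API.

def merge_entities(entities, text):
    """Merge predictions whose spans are separated only by whitespace
    and whose labels match, extending the earlier span."""
    if not entities:
        return []
    merged = [dict(entities[0])]
    for ent in entities[1:]:
        last = merged[-1]
        gap = text[last["end"]:ent["start"]]
        if ent["label"] == last["label"] and gap.strip() == "":
            last["end"] = ent["end"]  # extend the previous entity span
        else:
            merged.append(dict(ent))
    for ent in merged:
        ent["text"] = text[ent["start"]:ent["end"]]
    return merged

text = "New York City is large"
preds = [
    {"start": 0, "end": 3, "label": "location"},   # "New"
    {"start": 4, "end": 8, "label": "location"},   # "York"
    {"start": 9, "end": 13, "label": "location"},  # "City"
]
print(merge_entities(preds, text))
# → [{'start': 0, 'end': 13, 'label': 'location', 'text': 'New York City'}]
```

Since merging only ever joins neighboring same-label spans, the resulting entity can span any number of words, which is why the word-level models have no entity-length limit.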
@misc{bogdanov2024nuner,
title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data},
author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
year={2024},
eprint={2402.15343},
archivePrefix={arXiv},
primaryClass={cs.CL}
}