Source code for AACL-IJCNLP 2020 paper "Neural Gibbs Sampling for Joint Event Argument Extraction".
- python == 3.6.9
- pytorch == 0.6.1
- numpy == 1.15.2
- sklearn == 0.20.2
- pytorch-pretrained-bert == 0.4.0
- nltk
- tqdm
We use ACE 2005 (LDC2006T06) and TAC KBP 2016 (LDC2017E05) as our benchmarks. Due to LDC license restrictions, we cannot share the datasets.
For NGS (CNN), the 100-dimensional GloVe word vectors pre-trained on Wikipedia 2014 + Gigaword 5 are used.
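Loading pre-trained GloVe vectors into an embedding matrix can be sketched as follows (a minimal illustration; the file path and vocabulary format are assumptions, not taken from this repository):

```python
import numpy as np

def load_glove(path, vocab, dim=100):
    """Build an embedding matrix for `vocab` from a GloVe text file.

    vocab maps each word to its row index. Rows for words missing from
    the GloVe file stay randomly initialized.
    """
    rng = np.random.RandomState(0)
    matrix = rng.normal(scale=0.1, size=(len(vocab), dim)).astype(np.float32)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word = parts[0]
            if word in vocab:
                matrix[vocab[word]] = np.asarray(parts[1:], dtype=np.float32)
    return matrix
```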
The code is in the `NGS-DMCNN` folder.
- Run `input.py` to preprocess the data.
- Run `trigger.py` and `argument.py` to train and test the prior models for event detection (ED) and event argument extraction (EAE).
- Run `conditional.py` to train and test the conditional neural model.
- Run `Gibbs_an.py` to run Gibbs sampling with simulated annealing.
- Hyper-parameters and data paths are specified in `constant.py`.
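The Gibbs sampling step over argument roles can be sketched as follows. This is a toy illustration, not the repository's implementation: `conditional` is a hypothetical stand-in for the trained conditional neural model, which scores one candidate's role given all other candidates' current roles.

```python
import random

def gibbs_sample(init_roles, num_roles, conditional, n_rounds=50, seed=0):
    """Jointly refine argument-role assignments by Gibbs sampling.

    init_roles: initial role id per argument candidate (e.g. from the
        prior model).
    conditional(i, r, roles): unnormalized score of assigning role r to
        candidate i given the other candidates' current roles -- a
        stand-in for the trained conditional neural model.
    """
    rng = random.Random(seed)
    roles = list(init_roles)
    for _ in range(n_rounds):
        for i in range(len(roles)):
            scores = [conditional(i, r, roles) for r in range(num_roles)]
            # Resample candidate i's role from its conditional distribution.
            roles[i] = rng.choices(range(num_roles), weights=scores)[0]
    return roles
```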
The code is in the `NGS-DMBERT` folder.
- Run `input_bert_role.py` to preprocess the data.
- Run `trigger_bert.py` and `argument_bert.py` to train and test the prior models for ED and EAE.
- Run `conditional_bert.py` to train and test the conditional neural model.
- Run `Gibbs_an_bert.py` to run Gibbs sampling with simulated annealing.
- Hyper-parameters and data paths are specified in `constant.py`.
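The simulated-annealing part of the sampler can be illustrated as below. The temperature schedule and scoring are assumptions for illustration, not the repository's exact settings; the idea is that a high temperature keeps the sampling distribution near uniform (exploration), while cooling sharpens it toward the best-scoring role (exploitation).

```python
import math

def anneal_probs(scores, temperature):
    """Softmax over scores sharpened by a temperature.

    temperature >> 1 gives near-uniform probabilities; temperature -> 0
    concentrates almost all mass on the best-scoring option.
    """
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def cooling_schedule(t0=5.0, decay=0.9, steps=50):
    """Geometric cooling schedule: T_k = t0 * decay**k."""
    return [t0 * decay ** k for k in range(steps)]
```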
If this code helps you, please cite the following paper:
Neural Gibbs Sampling for Joint Event Argument Extraction. Xiaozhi Wang, Shengyu Jia, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Jie Zhou. AACL-IJCNLP 2020.