Latent Graph Learning with Dual-channel Attention for Relation Extraction
- Python (tested on 3.8.12)
- CUDA (tested on 11.1)
- PyTorch (tested on 1.8.1)
- Transformers (tested on 3.4.0)
- ujson
- tqdm
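A quick way to confirm the environment matches the tested versions (a minimal sanity-check sketch; the versions above are what was tested, not hard requirements):

```python
# Environment sanity check against the tested dependency versions.
import torch
import transformers

print("PyTorch:", torch.__version__)                 # tested on 1.8.1
print("Transformers:", transformers.__version__)     # tested on 3.4.0
print("CUDA available:", torch.cuda.is_available())  # expects a CUDA 11.1 setup
```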
The TACRED dataset can be obtained from the Linguistic Data Consortium (LDC2018T24). The TACREV and Re-TACRED datasets can be obtained by following the instructions in Tacrev and Re-TACRED, respectively. The expected file structure is:
DA-GPN
|-- dataset
| |-- tacred
| | |-- train.json
| | |-- dev.json
| | |-- test.json
| | |-- dev_rev.json
| | |-- test_rev.json
| |-- retacred
| | |-- train.json
| | |-- dev.json
| | |-- test.json
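Once the files are in place, the snippet below is a quick check that they parse (a minimal sketch, assuming the standard TACRED JSON schema with `token`, `relation`, and inclusive `subj_start`/`subj_end` fields):

```python
# Verify the dataset files are in place and parse correctly.
import ujson

with open("dataset/tacred/train.json") as f:
    data = ujson.load(f)

ex = data[0]
# TACRED span indices are inclusive, hence the +1 on the end index.
subj = " ".join(ex["token"][ex["subj_start"]:ex["subj_end"] + 1])
print(len(data), "examples; first relation:", ex["relation"], "| subject:", subj)
```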
Train the DA-GPN model:
>> sh run_tacred.sh # TACRED and TACREV
>> sh run_retacred.sh # Re-TACRED
The results on TACRED and TACREV are obtained in a single run, since TACREV shares the TACRED training set and only revises the dev and test annotations (dev_rev.json, test_rev.json). We use RoBERTa-large as the backbone encoder.
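For reference, the backbone can be loaded with the Transformers 3.4.0 API roughly as follows (a sketch only; the actual hyperparameters and arguments are set in run_tacred.sh):

```python
# Minimal sketch of loading the RoBERTa-large backbone (Transformers 3.4.0 API).
from transformers import AutoConfig, AutoModel, AutoTokenizer

# TACRED defines 42 relation types (including no_relation).
config = AutoConfig.from_pretrained("roberta-large", num_labels=42)
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
encoder = AutoModel.from_pretrained("roberta-large", config=config)
```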
The DialogRE dataset can be downloaded from https://github.com/nlpdata/dialogre. Download and unzip the BERT-base-uncased checkpoint from https://github.com/google-research/bert.
>> sh run_dialog.sh # DialogRE
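To verify the DialogRE download, the snippet below inspects dev.json (a sketch with a hypothetical path, assuming the released format where each entry pairs a list of dialogue turns with a list of relation annotations keyed by "x", "y", and "r"):

```python
# Quick inspection of the DialogRE data (path is illustrative; adjust to
# wherever the nlpdata/dialogre files were unpacked).
import ujson

with open("dialogre/data/dev.json") as f:
    data = ujson.load(f)

turns, annotations = data[0]  # each entry: [dialogue_turns, relation_annotations]
print(len(data), "dialogues; first has", len(turns), "turns")
print("first annotation:", annotations[0]["x"], annotations[0]["r"], annotations[0]["y"])
```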
Note: All experiments were run on a single NVIDIA RTX 3090 GPU.
Part of the code is adapted from An Improved Baseline for Sentence-level Relation Extraction.