Merge pull request #40 from adaamko/dev
Dataframes and rules in tsv; OpenIE; Rule, Ruleset formats; READMEs; Unit tests
adaamko authored Mar 9, 2022
2 parents 058df3c + 1bc6500 commit a05ddf9
Showing 29 changed files with 1,797 additions and 666 deletions.
15 changes: 8 additions & 7 deletions README.md
@@ -258,13 +258,14 @@ trainer = GraphTrainer(df)
#extract features
features = trainer.prepare_and_train()

from xpotato.dataset.utils import save_dataframe
from sklearn.model_selection import train_test_split

train, val = train_test_split(df, test_size=0.2, random_state=1234)

#save train and validation, this is important for the frontend to work
-train.to_pickle("train_dataset")
-val.to_pickle("val_dataset")
+save_dataframe(train, 'train.tsv')
+save_dataframe(val, 'val.tsv')

import json

@@ -287,18 +288,18 @@ with open("graphs.pickle", "wb") as f:
If the DataFrame is ready with the parsed graphs, the UI can be started to inspect the extracted rules and modify them. The frontend is a Streamlit app; the simplest way to start it (the training and validation datasets must be provided) is:

```
-streamlit run frontend/app.py -- -t notebooks/train_dataset -v notebooks/val_dataset -g ud
+streamlit run frontend/app.py -- -t notebooks/train.tsv -v notebooks/val.tsv -g ud
```

It can also be started with the extracted features:

```
-streamlit run frontend/app.py -- -t notebooks/train_dataset -v notebooks/val_dataset -g ud -sr notebooks/features.json
+streamlit run frontend/app.py -- -t notebooks/train.tsv -v notebooks/val.tsv -g ud -sr notebooks/features.json
```

If you have already used the UI and extracted the features manually, you can load them by running:
```
-streamlit run frontend/app.py -- -t notebooks/train_dataset -v notebooks/val_dataset -g ud -sr notebooks/features.json -hr notebooks/manual_features.json
+streamlit run frontend/app.py -- -t notebooks/train.tsv -v notebooks/val.tsv -g ud -sr notebooks/features.json -hr notebooks/manual_features.json
```

### Advanced mode
@@ -331,7 +332,7 @@ sentences = [("Governments and industries in nations around the world are pourin

Then, the frontend can be started:
```
-streamlit run frontend/app.py -- -t notebooks/unsupervised_dataset -g ud -m advanced
+streamlit run frontend/app.py -- -t notebooks/unsupervised_dataset.tsv -g ud -m advanced
```

Once the frontend starts up and you have defined the labels, you are presented with the annotation interface. You can search elements by clicking on the appropriate column name and applying the desired filter, and annotate instances by checking the checkbox at the beginning of the line. You can check multiple checkboxes at a time. Once you have selected the utterances you want to annotate, click the _Annotate_ button. The annotated samples will appear in the lower table. You can clear the annotation of certain elements by selecting them in the second table and clicking _Clear annotation_.
@@ -345,7 +346,7 @@ Once you have some annotated data, you can train rules by clicking the _Train!_
If you have the features ready and you want to evaluate them on a test set, you can run:

```bash
-python scripts/evaluate.py -t ud -f notebooks/features.json -d notebooks/val_dataset
+python scripts/evaluate.py -t ud -f notebooks/features.json -d notebooks/val.tsv
```

The result will be a _csv_ file with the labels and the matched rules.
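As an illustration, such a labels-and-rules table can be read back with the standard library; the filename and column names below are assumptions for the sketch, not taken from the evaluation script:

```python
import csv

# Hypothetical rows imitating the evaluation output; the real column
# names produced by scripts/evaluate.py may differ (assumption).
rows = [
    {"Predicted label": "CAUSE", "Matched rule": "(u_1 / cause)"},
    {"Predicted label": "NOT", "Matched rule": ""},
]

# Write the table the same way the script might emit it.
with open("predictions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Predicted label", "Matched rule"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back for inspection; each row becomes a dict keyed by header.
with open("predictions.csv", newline="") as f:
    loaded = list(csv.DictReader(f))

print(len(loaded))  # → 2
```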
4 changes: 2 additions & 2 deletions features/crowdtruth/README.md
@@ -15,11 +15,11 @@ Prebuilt rule-systems for both the _cause_ and the _treat_ label are also availa
Then the frontend of POTATO can be started from the __frontend__ directory:

```bash
-streamlit run app.py -- -t ../features/crowdtruth/crowdtruth_train_dataset_cause_ud.pickle -v ../features/crowdtruth/crowdtruth_dev_dataset_cause_ud.pickle -hr ../features/crowdtruth/crowd_cause_features_ud.json
+streamlit run app.py -- -t ../features/crowdtruth/crowdtruth_train_dataset_cause_ud.tsv -v ../features/crowdtruth/crowdtruth_dev_dataset_cause_ud.tsv -hr ../features/crowdtruth/crowd_cause_features_ud.json
```

Once you are done building the rule system, you can evaluate it on the test data; to do so, run _evaluate.py_ from the _scripts_ directory.

```bash
-python evaluate.py -t ud -f ../features/crowdtruth/crowd_cause_features_ud.json -d ../features/crowdtruth/crowdtruth_train_dataset_cause_ud.pickle
+python evaluate.py -t ud -f ../features/crowdtruth/crowd_cause_features_ud.json -d ../features/crowdtruth/crowdtruth_train_dataset_cause_ud.tsv
```
26 changes: 20 additions & 6 deletions features/crowdtruth/crowdtruth.ipynb
@@ -1,5 +1,19 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "77655d7b",
"metadata": {},
"outputs": [],
"source": [
"!wget -nc -q -O \"ground_truth_cause.csv\" \"https://raw.githubusercontent.com/CrowdTruth/Medical-Relation-Extraction/master/ground_truth_cause.csv\"\n",
"!wget -nc -q -O \"ground_truth_treat.csv\" \"https://raw.githubusercontent.com/CrowdTruth/Medical-Relation-Extraction/master/ground_truth_treat.csv\"\n",
"!wget -nc -q -O \"ground_truth_cause.xlsx\" \"https://github.com/CrowdTruth/Medical-Relation-Extraction/blob/master/train_dev_test/ground_truth_cause.xlsx?raw=true\"\n",
"!wget -nc -q -O \"ground_truth_treat.xlsx\" \"https://github.com/CrowdTruth/Medical-Relation-Extraction/blob/master/train_dev_test/ground_truth_treat.xlsx?raw=true\"\n",
"!wget -nc -q -O \"food_disease_dataset.csv\" \"https://raw.githubusercontent.com/gjorgjinac/food-disease-dataset/main/food_disease_dataset.csv\""
]
},
{
"cell_type": "code",
"execution_count": 16,
@@ -324,16 +338,16 @@
"metadata": {},
"outputs": [],
"source": [
"\n",
"from xpotato.dataset.utils import save_dataframe\n",
"\n",
"train_df = train_dataset.to_dataframe()\n",
"dev_df = dev_dataset.to_dataframe()\n",
"test_df = test_dataset.to_dataframe()\n",
"\n",
-"#train_df.to_pickle(\"crowdtruth_train_dataset_treat_fourlang.pickle\")\n",
-"#dev_df.to_pickle(\"crowdtruth_dev_dataset_treat_fourlang.pickle\")\n",
-"#test_df.to_pickle(\"crowdtruth_test_dataset_treat_fourlang.pickle\")\n",
-"train_df.to_pickle(\"crowdtruth_train_dataset_cause_fourlang.pickle\")\n",
-"dev_df.to_pickle(\"crowdtruth_dev_dataset_cause_fourlang.pickle\")\n",
-"test_df.to_pickle(\"crowdtruth_test_dataset_cause_fourlang.pickle\")"
+"save_dataframe(train_df, \"crowdtruth_train_dataset_cause_fourlang.tsv\")\n",
+"save_dataframe(dev_df, \"crowdtruth_dev_dataset_cause_fourlang.tsv\")\n",
+"save_dataframe(test_df, \"crowdtruth_test_dataset_cause_fourlang.tsv\")"
]
},
{
24 changes: 12 additions & 12 deletions features/crowdtruth/data.sh
@@ -1,12 +1,12 @@
-wget https://owncloud.tuwien.ac.at/index.php/s/z3IMX2fUNM7Kw6i/download -O crowdtruth_dev_dataset_cause_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/C4MOznjvxpcU5Ik/download -O crowdtruth_dev_dataset_cause_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/39s2AsFYTL3Keni/download -O crowdtruth_dev_dataset_treat_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/RqC1SzWhRXoKOnn/download -O crowdtruth_dev_dataset_treat_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/WpxGeblkiEhkIib/download -O crowdtruth_test_dataset_cause_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/wro8yTxXYK6WpF8/download -O crowdtruth_test_dataset_cause_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/xLz0fOxjb8ORBlR/download -O crowdtruth_test_dataset_treat_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/RaCcWl0xVdVpPQZ/download -O crowdtruth_test_dataset_treat_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/i7BuiCMvYWcZlI1/download -O crowdtruth_train_dataset_cause_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/NAHY0g1XqYM28LQ/download -O crowdtruth_train_dataset_cause_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/OPzP4kgD4PVwZOA/download -O crowdtruth_train_dataset_treat_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/sL3s3uaUgnLdKsy/download -O crowdtruth_train_dataset_treat_ud.pickle
+wget https://owncloud.tuwien.ac.at/index.php/s/aHX8ByPg8nN3W5v/download -O crowdtruth_dev_dataset_cause_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/1P1OppoaeFPk4iI/download -O crowdtruth_dev_dataset_cause_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/imAYGbrNVtTHCRs/download -O crowdtruth_dev_dataset_treat_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/oOOZhWVjC40xxQm/download -O crowdtruth_dev_dataset_treat_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/C2SQeWPqDdQrtXQ/download -O crowdtruth_test_dataset_cause_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/3PGrMU6SINTSbfl/download -O crowdtruth_test_dataset_cause_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/hDyM5x4XCcqANt3/download -O crowdtruth_test_dataset_treat_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/SGv5zZm5UyulXT1/download -O crowdtruth_test_dataset_treat_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/KcpBVwigbB19H56/download -O crowdtruth_train_dataset_cause_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/tjLqzSUl0zU32zu/download -O crowdtruth_train_dataset_cause_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/0cDVR9nz0I4QWvp/download -O crowdtruth_train_dataset_treat_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/PTOhXBqxrLmrAzW/download -O crowdtruth_train_dataset_treat_ud.tsv
4 changes: 2 additions & 2 deletions features/food/README.md
@@ -15,11 +15,11 @@ Prebuilt rule-systems for both the _cause_ and the _treat_ label are also availa
Then the frontend of POTATO can be started from the __frontend__ directory:

```bash
-streamlit run app.py -- -t ../features/food/food_train_dataset_cause_ud.pickle -v ../features/food/food_dev_dataset_cause_ud.pickle -hr ../features/crowdtruth/food_cause_features_ud.json
+streamlit run app.py -- -t ../features/food/food_train_dataset_cause_ud.tsv -v ../features/food/food_dev_dataset_cause_ud.tsv -hr ../features/crowdtruth/food_cause_features_ud.json
```

Once you are done building the rule system, you can evaluate it on the test data; to do so, run _evaluate.py_ from the _scripts_ directory.

```bash
-python evaluate.py -t ud -f ../features/food/food_cause_features_ud.json -d ../features/crowdtruth/food_train_dataset_cause_ud.pickle
+python evaluate.py -t ud -f ../features/food/food_cause_features_ud.json -d ../features/crowdtruth/food_train_dataset_cause_ud.tsv
```
16 changes: 8 additions & 8 deletions features/food/data.sh
@@ -1,8 +1,8 @@
-wget https://owncloud.tuwien.ac.at/index.php/s/G8pbpWQq6bqYbXp/download -O food_dev_dataset_cause_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/zNlkmijP6T0bRT5/download -O food_dev_dataset_cause_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/lJIRnQBkhyn8bQs/download -O food_dev_dataset_treat_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/Nj9vpcBs2C4aFMW/download -O food_dev_dataset_treat_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/WFoTXbRrtn1QDqT/download -O food_test_dataset_cause_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/dEvaQhhCQ39e2hv/download -O food_test_dataset_cause_ud.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/A9U3iz5SzGwmdW6/download -O food_test_dataset_treat_fourlang.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/d4Q09GVI89XwKuD/download -O food_test_dataset_treat_ud.pickle
+wget https://owncloud.tuwien.ac.at/index.php/s/eQHmVCULV3sYVKF/download -O food_dev_dataset_cause_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/Jem0O20atHYJYkf/download -O food_dev_dataset_cause_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/62v47pY8KwBwlJj/download -O food_dev_dataset_treat_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/3KSW4JUJRcUp5zA/download -O food_dev_dataset_treat_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/EC8qjI6Jo1BTaJ4/download -O food_test_dataset_cause_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/LWoP5x2DD0QzM2p/download -O food_test_dataset_cause_ud.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/b8DILcmjJhH7IgP/download -O food_test_dataset_treat_fourlang.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/CDmcKXJlcRv8Wcv/download -O food_test_dataset_treat_ud.tsv
10 changes: 6 additions & 4 deletions features/food/food.ipnyb → features/food/food.ipynb
@@ -209,8 +209,10 @@
"metadata": {},
"outputs": [],
"source": [
-"train_df.to_pickle(\"food_train_dataset_treat_ud.pickle\")\n",
-"dev_df.to_pickle(\"food_dev_dataset_treat_ud.pickle\")"
+"from xpotato.dataset.utils import save_dataframe\n",
+"\n",
+"save_dataframe(train_df, 'food_train_dataset_treat_ud.tsv')\n",
+"save_dataframe(dev_df, 'food_dev_dataset_treat_ud.tsv')"
]
},
{
@@ -255,8 +257,8 @@
"metadata": {},
"outputs": [],
"source": [
-"train_df.to_pickle(\"food_train_dataset_cause_fourlang.pickle\")\n",
-"dev_df.to_pickle(\"food_dev_dataset_cause_fourang.pickle\")"
+"save_dataframe(train_df, 'food_train_dataset_cause_fourlang.tsv')\n",
+"save_dataframe(dev_df, 'food_dev_dataset_cause_fourlang.tsv')"
]
},
{
6 changes: 3 additions & 3 deletions features/hasoc/README.md
@@ -15,18 +15,18 @@ Prebuilt rule-systems are available in this directory for the _2019, 2020, 2021_
Then the frontend of POTATO can be started from the __frontend__ directory:

```bash
-streamlit run app.py -- -t ../features/hasoc/hasoc_2021_train_amr.pickle -v ../features/hasoc/hasoc_2021_val_amr.pickle -hr ../features/hasoc/2021_train_features_task1.json
+streamlit run app.py -- -t ../features/hasoc/hasoc_2021_train_amr.tsv -v ../features/hasoc/hasoc_2021_val_amr.tsv -hr ../features/hasoc/2021_train_features_task1.json
```

If you want to reproduce our output, run _evaluate.py_ from the _scripts_ directory.

```bash
-python evaluate.py -t amr -f ../features/hasoc/2021_train_features_task1.json -d ../features/hasoc/hasoc_2021_test_amr.pickle
+python evaluate.py -t amr -f ../features/hasoc/2021_train_features_task1.json -d ../features/hasoc/hasoc_2021_test_amr.tsv
```

If you want to get the classification report, run the script with the __mode__ (-m) parameter:
```bash
-python evaluate.py -t amr -f ../features/hasoc/2021_train_features_task1.json -d ../features/hasoc/hasoc_2021_test_amr.pickle -m report
+python evaluate.py -t amr -f ../features/hasoc/2021_train_features_task1.json -d ../features/hasoc/hasoc_2021_test_amr.tsv -m report
```

## Usage and examples on the HASOC data
18 changes: 9 additions & 9 deletions features/hasoc/data.sh
@@ -1,9 +1,9 @@
-wget https://owncloud.tuwien.ac.at/index.php/s/VChBRMu2CghoVEB/download -O hasoc_2019_val_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/80ndwqwAnIqkTKt/download -O hasoc_2019_test_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/PtD2aqtuJtzUoH2/download -O hasoc_2019_train_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/gzlHeqNkp95ehLH/download -O hasoc_2020_val_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/RtiiwCjpyJ1pqdu/download -O hasoc_2020_test_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/mngqfVDaTsW7odk/download -O hasoc_2020_train_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/paqXOSj7bbMd5ZI/download -O hasoc_2021_val_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/oocwRTd0XRhgFYd/download -O hasoc_2021_test_amr.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/Khv85ErE6s0cSAc/download -O hasoc_2021_train_amr.pickle
+wget https://owncloud.tuwien.ac.at/index.php/s/sUHFGNdvphCUZsQ/download -O hasoc_2019_val_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/PsaHO8N02K9u8sp/download -O hasoc_2019_test_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/QLsQaME33zdT5Xw/download -O hasoc_2019_train_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/Um7BjFu5847yXmd/download -O hasoc_2020_val_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/47HQ9sKo5PmTCTH/download -O hasoc_2020_test_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/hQ56wvpRKxUzVi8/download -O hasoc_2020_train_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/2w8VNtqm7PXTgTX/download -O hasoc_2021_val_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/5Y1V67KMwNMmLC8/download -O hasoc_2021_test_amr.tsv
+wget https://owncloud.tuwien.ac.at/index.php/s/rhTbyW1CbfQuWk0/download -O hasoc_2021_train_amr.tsv
2 changes: 1 addition & 1 deletion features/semeval/README.md
@@ -13,5 +13,5 @@ bash data.sh
Then the frontend of POTATO can be started from the __frontend__ directory:

```bash
-streamlit run app.py -- -t ../features/semeval/semeval_train.pickle -v ../features/semeval/semeval_val.pickle
+streamlit run app.py -- -t ../features/semeval/semeval_train.tsv -v ../features/semeval/semeval_val.tsv
```
4 changes: 2 additions & 2 deletions features/semeval/data.sh
@@ -1,4 +1,4 @@
-wget https://owncloud.tuwien.ac.at/index.php/s/6gHDG8XArRuyzDc/download -O semeval_train.pickle
+wget https://owncloud.tuwien.ac.at/index.php/s/OgNbqmkUgmcmCTA/download -O semeval_train.tsv
wget https://owncloud.tuwien.ac.at/index.php/s/2ESe3bVKiSjZ8jJ/download -O semeval_train.txt
wget https://owncloud.tuwien.ac.at/index.php/s/Nx3p4BG9xx7FHVQ/download -O semeval_train_4lang_graphs.pickle
-wget https://owncloud.tuwien.ac.at/index.php/s/iX8Fmfsyf6vml6t/download -O semeval_val.pickle
+wget https://owncloud.tuwien.ac.at/index.php/s/OgNbqmkUgmcmCTA/download -O semeval_val.tsv
21 changes: 9 additions & 12 deletions frontend/app.py
@@ -22,11 +22,11 @@
init_extractor,
init_session_states,
rank_and_suggest,
-read_train,
-read_val,
+read_df,
rerun,
rule_chooser,
save_ruleset,
read_ruleset,
save_after_modify,
save_dataframe,
match_texts,
@@ -62,8 +62,7 @@ def inference_mode(evaluator, hand_made_rules):
st.session_state.download = st.sidebar.selectbox("", options=[False, True], key=2)

if hand_made_rules:
-with open(hand_made_rules) as f:
-st.session_state.features = json.load(f)
+read_ruleset(hand_made_rules)

extractor = init_extractor(lang, graph_format)

@@ -181,7 +180,7 @@ def inference_mode(evaluator, hand_made_rules):
[";".join(feat[1]) for feat in features_merged],
[feat[2] for feat in features_merged],
)
-save_rules = hand_made_rules or "saved_features.json"
+save_rules = hand_made_rules or "saved_features.tsv"
save_ruleset(save_rules, st.session_state.features)
rerun()

@@ -226,8 +225,7 @@ def inference_mode(evaluator, hand_made_rules):

def simple_mode(evaluator, data, val_data, graph_format, feature_path, hand_made_rules):
if hand_made_rules:
-with open(hand_made_rules) as f:
-st.session_state.features = json.load(f)
+read_ruleset(hand_made_rules)

if "df" not in st.session_state:
st.session_state.df = data.copy()
@@ -634,10 +632,9 @@ def simple_mode(evaluator, data, val_data, graph_format, feature_path, hand_made


def advanced_mode(evaluator, train_data, graph_format, feature_path, hand_made_rules):
-data = read_train(train_data)
+data = read_df(train_data)
if hand_made_rules:
-with open(hand_made_rules) as f:
-st.session_state.features = json.load(f)
+read_ruleset(hand_made_rules)
if "df" not in st.session_state:
st.session_state.df = data.copy()
if "annotated" not in st.session_state.df:
@@ -1216,9 +1213,9 @@ def main(args):
init_session_states()
evaluator = init_evaluator()
if args.train_data:
-data = read_train(args.train_data, args.label)
+data = read_df(args.train_data, args.label)
if args.val_data:
-val_data = read_val(args.val_data, args.label)
+val_data = read_df(args.val_data, args.label)
graph_format = args.graph_format
feature_path = args.suggested_rules
hand_made_rules = args.hand_rules