Add data collection scripts
VHellendoorn committed Feb 4, 2022
1 parent 428043d commit 8c9b1f3
Showing 11 changed files with 50,312 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .gitignore
@@ -0,0 +1,4 @@
TopLists/
Code/
Repos/
Preprocessed/
16 changes: 16 additions & 0 deletions Mining/README.md
@@ -0,0 +1,16 @@
## Purpose
Scripts to construct a code dataset similar to the one used to train the released models. Note that, because of the nature of the GH API, the exact results of each query will differ over time, so this will not precisely replicate the training data.

## Usage
Update `gh_crawler.py` by adding your GH API token (line 6). Then run `collect_data.sh`: it invokes the GitHub API crawler (`gh_crawler.py`), then clones the resulting repositories in parallel (`clone_repo.sh`); each clone is handed to `extract_code.py`, which extracts all source code files in the corresponding language and filters out very long or very short files; finally, `deduplicate.py` removes duplicate files.
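
For reference, here is a minimal sketch of what the deduplication step might look like, assuming exact-match deduplication by hashing file contents; the actual `deduplicate.py` may normalize whitespace or use fuzzier matching:
```
import hashlib
import os

def deduplicate(code_dir):
    # Delete any file whose exact byte content has been seen before.
    seen = set()
    for dirpath, _, filenames in os.walk(code_dir):
        for fname in sorted(filenames):
            path = os.path.join(dirpath, fname)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.remove(path)
            else:
                seen.add(digest)
```
Hashing contents keeps memory proportional to the number of unique files rather than their total size.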

Once this is completed, you can use [gpt-neox](https://github.com/EleutherAI/gpt-neox)'s `preprocess_data.py` (currently in `tools/`) to tokenize this dataset for the model, using either the pretrained code vocabulary (by providing the `code-vocab.json` and `code-merges.txt` files) or a newly produced one.

At the time of this writing*, the following command processes the entire `Code/` directory to a new directory named `Preprocessed/` using the pretrained vocabularies across 16 parallel workers (assuming that `gpt-neox` is checked out in the current directory):
```
mkdir Preprocessed
sudo python3 gpt-neox/tools/preprocess_data.py --input Code --tokenizer-type GPT2BPETokenizer --vocab-file code-vocab.json --merge-file code-merges.txt --output-prefix Preprocessed/code --workers 16
```
And that's it! Just modify the `local_setup.yml` config in the gpt-neox toolkit to point it to the new vocab & merges files and the data directory, and it should be able to train.

*I did have to modify the `yield_from_files` function to recursively yield all (shuffled) files from a directory; the default version uses `lm_dataformat`, which balks at code file extensions. The updated function can be found in `yield_from_code_files.py`.
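
For illustration, here is a minimal sketch of such a recursive, shuffled generator; it is a simplification under assumed inputs (a root directory and a seed), not the exact code in `yield_from_code_files.py`, and the real `yield_from_files` in gpt-neox has a different signature:
```
import os
import random

def yield_from_code_files(root_dir, seed=42):
    # Recursively collect every file under root_dir, shuffle the order,
    # and yield each file's text so the tokenizer sees a mixed stream.
    paths = []
    for dirpath, _, filenames in os.walk(root_dir):
        for fname in filenames:
            paths.append(os.path.join(dirpath, fname))
    random.Random(seed).shuffle(paths)
    for path in paths:
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                yield f.read()
        except OSError:
            continue  # skip unreadable files
```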
25 changes: 25 additions & 0 deletions Mining/clone_repo.sh
@@ -0,0 +1,25 @@
#!/bin/bash
# Clone a given repository, extract any files belonging to the given language, and delete the repository afterwards to save space.
in=$1
language=$2

# Extract the org and name from lines formatted as: stars\thttps://github.com/org/name
repo=$(echo "$in" | cut -d$'\t' -f2)
name_part=$(echo "$repo" | cut -d"/" -f4-6)
name=$(echo "$name_part" | cut -d"/" -f2)
org=$(echo "$name_part" | cut -d"/" -f1)
echo "Cloning $org/$name"
DIR=Repos/$language/$org
OUT=Code/$language/$org

# Skip repositories for which we have already extracted code files.
if [ -d "$OUT/$name" ]; then echo "deja vu"; exit; fi
mkdir -p "$DIR"
mkdir -p "$OUT"

# Clone with depth=1 to only get the most recent files, rather than the entire history.
if [ ! -d "$DIR/$name" ]; then
  git clone -q --depth 1 "https://github.com/$org/$name" "$DIR/$name"
fi

# Extract all language-specific code files from the repository, then delete the clone.
python3 extract_code.py "$language" "$DIR/$name" "$OUT/$name"
rm -rf "$DIR/$name"
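
For context, a minimal sketch of the extraction step this script calls follows. The extension map, the length thresholds, and the flattened output naming are illustrative assumptions; the actual `extract_code.py` may differ:
```
import os
import shutil
import sys

# Hypothetical mapping from language name to file extensions.
EXTENSIONS = {"Python": (".py",), "Java": (".java",), "C": (".c", ".h")}

def extract(language, repo_dir, out_dir, min_bytes=100, max_bytes=1_000_000):
    exts = EXTENSIONS.get(language, ())
    for dirpath, _, filenames in os.walk(repo_dir):
        for fname in filenames:
            if not fname.endswith(exts):
                continue
            src = os.path.join(dirpath, fname)
            try:
                size = os.path.getsize(src)
            except OSError:
                continue
            if not (min_bytes <= size <= max_bytes):
                continue  # filter very short/long files
            # Flatten the repo-relative path into a single file name.
            rel = os.path.relpath(src, repo_dir).replace(os.sep, "__")
            os.makedirs(out_dir, exist_ok=True)
            shutil.copyfile(src, os.path.join(out_dir, rel))

if __name__ == "__main__":
    extract(sys.argv[1], sys.argv[2], sys.argv[3])
```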
