This is the final repository of Serpentrain, the SerpentineAI team that competed in the 2020 Flatland challenge. The technical report for this and other competitions entered by SerpentineAI teams can be found online.
This repository uses conda as its package manager. After installing conda (or Miniconda), run the following commands from the repo root (~/../flatland) to create the environment and install all requirements:
conda env create
conda activate flatland-rl
Training our agent requires 3 GPUs, as that is what we had access to during the competition. To train the agent, run:
PYTHONPATH=. python serpentrain/reinforcement_learning/distributed/main_distributed.py
Warning: training is resource-heavy and may freeze your system.
To see the available training options, run:
PYTHONPATH=. python serpentrain/reinforcement_learning/distributed/main_distributed.py -h
To run the trained DQN agent locally, adjust the run.py file:
RENDER = True # Whether to render the game
USE_GPU = True # If you have a GPU
DQN_MODEL = True # Use the trained DQN model
CHECKPOINT_PATH = "path/to/checkpoint.pt" # e.g. './checkpoints/submission/snapshot-20201104-2201-epoch-1.pt'
Then run:
bash local_run.sh
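Since a bad CHECKPOINT_PATH only surfaces once the run is underway, it can help to validate it up front. A minimal sketch of such a pre-flight check; the helper below is our own illustration, not code from run.py:

```python
from pathlib import Path


def check_checkpoint(path: str) -> Path:
    """Fail early with a clear message if the configured checkpoint is unusable.

    Hypothetical helper: assumes CHECKPOINT_PATH points at a PyTorch
    snapshot file ending in .pt, as in the example above.
    """
    ckpt = Path(path)
    if not ckpt.is_file():
        raise FileNotFoundError(f"CHECKPOINT_PATH does not exist: {ckpt}")
    if ckpt.suffix != ".pt":
        raise ValueError(f"Expected a .pt snapshot, got: {ckpt.name}")
    return ckpt
```

Calling this at the top of run.py would turn a mid-run crash into an immediate, readable error.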
To run locally without the DQN model, adjust the run.py file:
RENDER = True # Whether to render the game
USE_GPU = False # No GPU needed
DQN_MODEL = False # Run without the DQN model
CHECKPOINT_PATH = "" # No checkpoint needed
Then run:
bash local_run.sh
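The two configurations above differ only in the run.py toggles. A sketch of how such toggles might select which controller to run; the function and the "heuristic" fallback name are assumptions for illustration, not code from this repo:

```python
def select_controller(dqn_model: bool, checkpoint_path: str) -> str:
    """Pick a controller from the run.py toggles (illustrative sketch).

    DQN_MODEL = True requires a trained snapshot via CHECKPOINT_PATH;
    otherwise a non-learning controller is used, so no GPU or
    checkpoint is needed.
    """
    if dqn_model:
        if not checkpoint_path:
            raise ValueError("DQN_MODEL = True requires a CHECKPOINT_PATH")
        return "dqn"
    return "heuristic"
```

This mirrors why CHECKPOINT_PATH and USE_GPU are marked as unnecessary in the second configuration.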
SerpentineAI is a student team from the Eindhoven University of Technology. Computational resources for training were provided to SerpentineAI by VBTI during the competition.