Enhancing State Representation Learning through Constant Action Interventions and Survival Analysis for Autonomous Highway Driving
This project develops and evaluates predictive models for autonomous vehicle navigation using deep learning. The goal is to learn ego-centric representations of future environmental states. It builds on CommonRoad-Geometric as the autonomous driving software environment.
Click the thumbnail above to watch the demonstration video on Google Drive.
This repository includes tools for:
- Collecting datasets for training and evaluation.
- Training predictive state representation models.
- Training reinforcement learning agents based on these representations.
To simplify environment management, we use Docker with GPU support. Follow the steps below to get started:

1. **Pull the Docker image**

   Pull the latest pre-built Docker image:

   ```bash
   docker pull ge32luk/psr-ad:latest
   ```

2. **Run the Docker container**

   Use the following command to start a Docker container:

   ```bash
   docker run -d -it --gpus all --name psrad \
     -v $(pwd):/app/psr-ad \
     -v $(pwd)/output:/app/psr-ad/output \
     -v $(pwd)/scenarios:/app/psr-ad/scenarios \
     -v $(pwd)/../../data:/app/psr-ad/data \
     -e CUDA_VISIBLE_DEVICES=0 \
     -e WANDB_API_KEY=... \
     ge32luk/psr-ad:latest
   ```

   To enter the running container, use:

   ```bash
   docker exec -it psrad /bin/bash
   ```

   Explanation of flags:

   - `--gpus all`: Enables GPU access.
   - `-v $(pwd):/app/psr-ad`: Mounts the current directory as `/app/psr-ad` inside the container.
   - `-e CUDA_VISIBLE_DEVICES=0`: Limits GPU usage to a specific device.
   - `-e WANDB_API_KEY=<your_key>`: Sets the API key for Weights & Biases logging.

3. **Build the Docker image locally (optional)**

   If you need to build the image locally:

   ```bash
   docker-compose build
   ```
Create a `config.local.yaml` file to specify machine-specific settings. Use the provided template:

```bash
cp config.local.template.yaml config.local.yaml
```

Edit the file to match your local setup (e.g., paths, hyperparameters).
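A filled-in file might look like the sketch below. The actual keys are defined by `config.local.template.yaml` in the repository, so treat every key and value here as a hypothetical placeholder:

```yaml
# Hypothetical example -- the authoritative keys come from
# config.local.template.yaml; adjust names to match the template.
paths:
  data: ./data
  output: ./output
  scenarios: ./scenarios
training:
  batch_size: 64
  learning_rate: 3.0e-4
device: cuda:0
```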
To collect a dataset, ensure your environment is set up (either locally or via Docker), then run:

```bash
python collect_dataset.py
```

Alternatively, distribute the workload across multiple workers:

```bash
./parallel_dataset_collection.sh -e 60 -w 2 -r 60 commonroad.scenario_dir="data"
```
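Conceptually, the parallel script splits an episode budget (`-e`) across workers (`-w`). The sketch below illustrates that idea with Python's `multiprocessing`; the `collect_episodes` worker is a hypothetical stand-in for invoking `collect_dataset.py` on one shard, not the script's actual implementation.

```python
# Illustrative sketch of distributing an episode budget across workers.
from multiprocessing import Pool


def split_episodes(total_episodes, num_workers):
    """Split a total episode budget as evenly as possible across workers."""
    base, rem = divmod(total_episodes, num_workers)
    return [base + (1 if w < rem else 0) for w in range(num_workers)]


def collect_episodes(args):
    """Hypothetical stand-in for running collect_dataset.py on one shard."""
    worker_id, episodes = args
    return [f"worker{worker_id}_episode{i}" for i in range(episodes)]


if __name__ == "__main__":
    shares = split_episodes(60, 2)  # mirrors -e 60 -w 2
    with Pool(len(shares)) as pool:
        results = pool.map(collect_episodes, enumerate(shares))
    print(sum(len(r) for r in results))  # total collected episodes
```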
For headless environments (e.g., servers without a display), set the following environment variable:

```bash
export PYGLET_HEADLESS=1
```
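If you prefer to set this from Python rather than the shell, it must happen before pyglet is first imported, since pyglet reads `PYGLET_`-prefixed environment variables into its options at import time:

```python
import os

# Equivalent to `export PYGLET_HEADLESS=1`; must run before pyglet
# is imported anywhere in the process.
os.environ["PYGLET_HEADLESS"] = "1"

# import pyglet  # safe to import now; rendering runs without a display
```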
Train the predictive state representation model using:

```bash
python train_model.py
```
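To make the objective concrete: a predictive state representation model learns to predict future states from the current state and an action (here a constant action, echoing the constant-action interventions in the project title). The stdlib-only toy below fits a linear next-state predictor by SGD on synthetic transitions; it is a simplified illustration of the training objective, not the repository's deep model.

```python
# Toy illustration: learn s' = a*s + b*u from (state, action, next_state)
# triples with a constant action u, using plain SGD on squared error.
import random


def make_transitions(n, a=0.9, b=0.5, action=1.0, seed=0):
    """Generate noiseless transitions from the dynamics s' = a*s + b*u."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        s = rng.uniform(-1.0, 1.0)
        data.append((s, action, a * s + b * action))
    return data


def train_predictor(data, lr=0.1, epochs=200):
    """Fit s' ~ w_s*s + w_u*u by stochastic gradient descent."""
    w_s, w_u = 0.0, 0.0
    for _ in range(epochs):
        for s, u, s_next in data:
            err = (w_s * s + w_u * u) - s_next
            w_s -= lr * err * s
            w_u -= lr * err * u
    return w_s, w_u


if __name__ == "__main__":
    w_s, w_u = train_predictor(make_transitions(200))
    print(w_s, w_u)  # should converge toward the true dynamics (0.9, 0.5)
```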
Once the representation model is trained, train a reinforcement learning agent:

```bash
python train_rl_agent.py
```
- **Rendering issues during dataset collection**: Ensure the following:
  - Set `export PYGLET_HEADLESS=1` for headless environments.
  - Verify that all required directories (e.g., `output`, `scenarios`, `data`) exist.
- **Slow dataset collection**: Use parallel workers:

  ```bash
  ./parallel_dataset_collection.sh -e <num_episodes> -w <num_workers> -r <retries>
  ```

- **Missing directories**: Verify the paths in `config.local.yaml` or the Docker volume mounts.
- **Connection issues with WandB**: Ensure your API key is set correctly via the `WANDB_API_KEY` environment variable.
We welcome contributions! If you encounter issues or have feature requests, feel free to open an issue or a pull request.