The sample speed is ~3000 env steps per second (~12000 Atari frames per second, in fact, since we use `frame_stack=4`) under the normal mode (i.e., using a CNN policy and a collector, and storing data into the buffer). The main bottleneck is training the convolutional neural network.
The Atari env seed cannot be fixed due to the discussion here, but this is not a big issue, since Atari runs tend to produce similar results regardless of the seed.
The env wrapper is crucial. Without the wrappers, the agent cannot perform well enough on Atari games. Many existing RL codebases use the OpenAI wrappers, but they are not the original DeepMind version (related issue). Dopamine has a different wrapper, but unfortunately it does not work very well with our codebase.
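For intuition, here is a minimal sketch of a DeepMind-style preprocessing stack. The class names and the reduced set of wrappers are illustrative (the wrappers actually used by these examples also apply noop reset, frame skipping with max-pooling, episodic life, and fire reset), and it assumes the classic `gym` API:

```python
import cv2
import gym
import numpy as np


class WarpFrame(gym.ObservationWrapper):
    """Grayscale and resize frames to 84x84, as in the DQN paper."""

    def __init__(self, env):
        super().__init__(env)
        self.observation_space = gym.spaces.Box(
            low=0, high=255, shape=(84, 84), dtype=np.uint8
        )

    def observation(self, obs):
        obs = cv2.cvtColor(obs, cv2.COLOR_RGB2GRAY)
        return cv2.resize(obs, (84, 84), interpolation=cv2.INTER_AREA)


class ClipReward(gym.RewardWrapper):
    """Clip rewards to {-1, 0, +1} so one hyperparameter set works across games."""

    def reward(self, reward):
        return float(np.sign(reward))


def make_atari_env(task: str, frame_stack: int = 4):
    # The full DeepMind stack also applies noop reset, frame skipping with
    # max-pooling, episodic life, and fire reset; omitted here for brevity.
    # FrameStack availability depends on the gym version installed.
    env = gym.make(task)
    env = WarpFrame(env)
    env = ClipReward(env)
    return gym.wrappers.FrameStack(env, frame_stack)
```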
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
Note: The `eps_train_final` and `eps_test` in the original DQN paper are 0.1 and 0.01, but some works have found that a smaller eps improves performance. Also, a larger batch size (say 64 instead of 32) helps the model converge in fewer updates but slows down the training speed.
We haven't tuned these results to the best, so have fun playing with these hyperparameters!
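For reference, a minimal sketch of how the training/testing epsilon can be scheduled through the trainer's `train_fn`/`test_fn` hooks; the concrete values, the linear schedule, and the 1M-step horizon below are illustrative rather than the exact settings of `atari_dqn.py`:

```python
def make_eps_fns(policy, eps_train=1.0, eps_train_final=0.005,
                 eps_test=0.001, decay_steps=1_000_000):
    """Return (train_fn, test_fn) hooks for the Tianshou trainer.

    The values and the 1M-step horizon are illustrative; `policy` is assumed
    to be an epsilon-greedy policy exposing `set_eps` (e.g. DQNPolicy).
    """

    def train_fn(epoch: int, env_step: int) -> None:
        # Linearly anneal eps from eps_train to eps_train_final, then hold.
        frac = min(env_step / decay_steps, 1.0)
        policy.set_eps(eps_train + frac * (eps_train_final - eps_train))

    def test_fn(epoch: int, env_step: int) -> None:
        policy.set_eps(eps_test)

    return train_fn, test_fn
```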
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
Note: The selection of `n_step` is based on Figure 6 in the Rainbow paper.
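For context, `n_step` sets how many real rewards are accumulated before bootstrapping from the target network. A minimal sketch of the resulting regression target, ignoring the terminal-state masking that a real implementation must handle:

```python
def n_step_target(rewards, bootstrap_q, gamma=0.99, n_step=3):
    """Truncated n-step Q-learning target (sketch, no terminal masking).

    rewards: the n_step rewards r_t, ..., r_{t+n_step-1}
    bootstrap_q: max_a Q_target(s_{t+n_step}, a)
    """
    target = bootstrap_q * gamma ** n_step
    for k in range(n_step):
        target += (gamma ** k) * rewards[k]
    return target


# e.g. with gamma=0.99 and n_step=3:
# n_step_target([1.0, 0.0, 1.0], bootstrap_q=2.0)
#   = 1.0 + 0.9801 + 1.940598, about 3.92
```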
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
To run the BCQ algorithm on Atari, you need to do the following:
- Train an expert, using the command listed in the DQN section above;
- Generate a buffer with noise: `python3 atari_dqn.py --task {your_task} --watch --resume-path log/{your_task}/dqn/policy.pth --eps-test 0.2 --buffer-size 1000000 --save-buffer-name expert.hdf5` (note that a 1M Atari buffer cannot be saved in .pkl format because it is too large and will cause an error; a short sketch of handling the buffer programmatically is given after this list);
- Train BCQ: `python3 atari_bcq.py --task {your_task} --load-buffer-name expert.hdf5`.
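The commands above exchange the expert data through an HDF5 file. As a hedged sketch (roughly what `--save-buffer-name` / `--load-buffer-name` amount to, though the scripts' internals may differ), the buffer can also be handled programmatically with Tianshou's `ReplayBuffer`:

```python
from tianshou.data import ReplayBuffer

# Saving: assuming `buffer` is the replay buffer filled by the collector,
# it can be written to disk with:
#     buffer.save_hdf5("expert.hdf5")

# Loading: the offline scripts can read it back along these lines:
buffer = ReplayBuffer.load_hdf5("expert.hdf5")
print(len(buffer))  # number of stored transitions, up to --buffer-size
```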
We test our BCQ implementation on two example tasks (different from the author's version, we use v4 instead of v0; one epoch means 10k gradient steps):
Task | Online DQN | Behavioral | BCQ |
---|---|---|---|
PongNoFrameskip-v4 | 21 | 7.7 | 21 (epoch 5) |
BreakoutNoFrameskip-v4 | 303 | 61 | 167.4 (epoch 12, could be higher) |
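For intuition on the BCQ numbers above: the discrete variant only maximizes Q over actions that its imitation model considers sufficiently likely, which keeps the learned policy close to the behavioral one. A hedged sketch of that action-selection rule (the threshold name and default value are illustrative):

```python
import torch


def bcq_select_action(q_values, imitation_logits, unlikely_action_threshold=0.3):
    """Discrete BCQ action selection (sketch).

    Actions whose imitation probability falls below a fraction of the most
    likely action's probability are masked out; among the rest, the action
    with the highest Q-value is chosen.
    q_values, imitation_logits: tensors of shape (batch, num_actions).
    """
    prob = torch.softmax(imitation_logits, dim=1)
    unlikely = prob / prob.max(dim=1, keepdim=True).values < unlikely_action_threshold
    return q_values.masked_fill(unlikely, float("-inf")).argmax(dim=1)
```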
To run the CQL algorithm on Atari, you need to do the following:
- Train an expert, using the command listed in the QRDQN section above;
- Generate a buffer with noise: `python3 atari_qrdqn.py --task {your_task} --watch --resume-path log/{your_task}/qrdqn/policy.pth --eps-test 0.2 --buffer-size 1000000 --save-buffer-name expert.hdf5` (note that a 1M Atari buffer cannot be saved in .pkl format because it is too large and will cause an error);
- Train CQL: `python3 atari_cql.py --task {your_task} --load-buffer-name expert.hdf5`.
We test our CQL implementation on two example tasks (different from the author's version, we use v4 instead of v0; one epoch means 10k gradient steps):
Task | Online QRDQN | Behavioral | CQL | parameters |
---|---|---|---|---|
PongNoFrameskip-v4 | 20.5 | 6.8 | 19.5 (epoch 5) | python3 atari_cql.py --task "PongNoFrameskip-v4" --load-buffer-name log/PongNoFrameskip-v4/qrdqn/expert.hdf5 --epoch 5 |
BreakoutNoFrameskip-v4 | 394.3 | 46.9 | 248.3 (epoch 12) | python3 atari_cql.py --task "BreakoutNoFrameskip-v4" --load-buffer-name log/BreakoutNoFrameskip-v4/qrdqn/expert.hdf5 --epoch 12 --min-q-weight 50 |
We reduce the size of the offline data to 10% and 1% of the above and get:
Buffer size 100000:
Task | Online QRDQN | Behavioral | CQL | parameters |
---|---|---|---|---|
PongNoFrameskip-v4 | 20.5 | 5.8 | 21 (epoch 5) | python3 atari_cql.py --task "PongNoFrameskip-v4" --load-buffer-name log/PongNoFrameskip-v4/qrdqn/expert.size_1e5.hdf5 --epoch 5 |
BreakoutNoFrameskip-v4 | 394.3 | 41.4 | 40.8 (epoch 12) | python3 atari_cql.py --task "BreakoutNoFrameskip-v4" --load-buffer-name log/BreakoutNoFrameskip-v4/qrdqn/expert.size_1e5.hdf5 --epoch 12 --min-q-weight 20 |
Buffer size 10000:
Task | Online QRDQN | Behavioral | CQL | parameters |
---|---|---|---|---|
PongNoFrameskip-v4 | 20.5 | nan | 1.8 (epoch 5) | python3 atari_cql.py --task "PongNoFrameskip-v4" --load-buffer-name log/PongNoFrameskip-v4/qrdqn/expert.size_1e4.hdf5 --epoch 5 --min-q-weight 1 |
BreakoutNoFrameskip-v4 | 394.3 | 31.7 | 22.5 (epoch 12) | python3 atari_cql.py --task "BreakoutNoFrameskip-v4" --load-buffer-name log/BreakoutNoFrameskip-v4/qrdqn/expert.size_1e4.hdf5 --epoch 12 --min-q-weight 10 |
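The `--min-q-weight` flag tuned above scales the conservative penalty that CQL adds on top of the base QRDQN loss, which is presumably why it is re-tuned as the buffer shrinks. A minimal sketch of that penalty for discrete actions (function and argument names are illustrative):

```python
import torch


def discrete_cql_penalty(q_values, actions, min_q_weight=10.0):
    """Conservative Q-learning regularizer for discrete actions (sketch).

    Pushes Q down on all actions (via the logsumexp term) and back up on the
    actions that actually appear in the offline data; min_q_weight scales how
    conservative the resulting Q-function is.
    q_values: (batch, num_actions); actions: (batch,) integer data actions.
    """
    logsumexp_q = torch.logsumexp(q_values, dim=1)
    data_q = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    return min_q_weight * (logsumexp_q - data_q).mean()
```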
To run the CRR algorithm on Atari, you need to do the following:
- Train an expert, using the command listed in the QRDQN section above;
- Generate a buffer with noise: `python3 atari_qrdqn.py --task {your_task} --watch --resume-path log/{your_task}/qrdqn/policy.pth --eps-test 0.2 --buffer-size 1000000 --save-buffer-name expert.hdf5` (note that a 1M Atari buffer cannot be saved in .pkl format because it is too large and will cause an error);
- Train CRR: `python3 atari_crr.py --task {your_task} --load-buffer-name expert.hdf5`.
We test our CRR implementation on two example tasks (different from the author's version, we use v4 instead of v0; one epoch means 10k gradient steps):
Task | Online QRDQN | Behavioral | CRR | CRR w/ CQL | parameters |
---|---|---|---|---|---|
PongNoFrameskip-v4 | 20.5 | 6.8 | -21 (epoch 5) | 16.1 (epoch 5) | python3 atari_crr.py --task "PongNoFrameskip-v4" --load-buffer-name log/PongNoFrameskip-v4/qrdqn/expert.hdf5 --epoch 5 |
BreakoutNoFrameskip-v4 | 394.3 | 46.9 | 26.4 (epoch 12) | 125.0 (epoch 12) | python3 atari_crr.py --task "BreakoutNoFrameskip-v4" --load-buffer-name log/BreakoutNoFrameskip-v4/qrdqn/expert.hdf5 --epoch 12 --min-q-weight 50 |
Note that CRR itself does not work well on Atari tasks, but adding the CQL loss/regularizer helps.
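For reference, CRR trains its actor with an advantage-weighted behavior-cloning loss on the offline data; the "CRR w/ CQL" column adds the CQL regularizer from the previous section on top. A hedged sketch of the plain CRR actor loss for discrete actions (names, the temperature, and the clamp value are illustrative):

```python
import torch
import torch.nn.functional as F


def crr_actor_loss(actor_logits, q_values, actions,
                   mode="exp", beta=1.0, ratio_max=20.0):
    """Advantage-weighted behavior cloning used by CRR (sketch).

    The advantage A(s, a) = Q(s, a) - E_{a'~pi}[Q(s, a')] decides how strongly
    each logged action is cloned: "binary" keeps only positive-advantage
    actions, "exp" weights them by a clipped exp(A / beta).
    actor_logits, q_values: (batch, num_actions); actions: (batch,) data actions.
    """
    log_prob = F.log_softmax(actor_logits, dim=1)
    log_prob_a = log_prob.gather(1, actions.unsqueeze(1)).squeeze(1)
    pi = torch.softmax(actor_logits, dim=1)
    data_q = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    advantage = data_q - (pi * q_values).sum(dim=1)
    if mode == "binary":
        weight = (advantage > 0).float()
    else:
        weight = torch.exp(advantage / beta).clamp(max=ratio_max)
    return -(weight.detach() * log_prob_a).mean()
```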