Major changes
- [Faster pre-training] Fix infrequent EMA updates: they were triggered once per epoch instead of once per batch. With the fix, saved EMA checkpoints from much earlier epochs become usable, e.g., epoch 3000 for robomimic state input and epoch 1000 for robomimic pixel input.
- [Faster pre-training] Update pre-training configs for all tasks, generally using fewer epochs now that the EMA update issue has been fixed
- [Faster fine-tuning] Update fine-tuning configs for robomimic tasks, using a higher learning rate and, where helpful, a higher update ratio for better sample efficiency
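The per-batch EMA fix above can be illustrated with a minimal sketch. The `ema_update` helper, the decay value, and the toy "gradient step" are all illustrative assumptions, not the repository's actual code; the point is only that the EMA blend now runs inside the batch loop rather than once per epoch.

```python
import numpy as np

def ema_update(ema_params, params, decay=0.995):
    """In-place EMA blend: ema <- decay * ema + (1 - decay) * params."""
    for e, p in zip(ema_params, params):
        e *= decay
        e += (1 - decay) * p

# Toy "model": one weight array. Before the fix, ema_update effectively ran
# once per epoch; the fix calls it after every gradient step (batch).
rng = np.random.default_rng(0)
params = [rng.normal(size=(4, 2))]
ema_params = [p.copy() for p in params]
for epoch in range(3):
    for batch in range(10):
        params[0] -= 0.01 * params[0]   # stand-in for a gradient step
        ema_update(ema_params, params)  # per-batch EMA update (the fix)
```

Because the EMA now tracks the weights every batch, it lags the raw weights smoothly instead of taking large once-per-epoch jumps, which is why earlier-epoch EMA checkpoints become usable.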
Minor changes
- Clean up D3IL data pre-processing
- Fix data normalization bug in robomimic pre-processing (does not affect existing experiment results)
- Allow saving full observations in the eval agent for plotting
- Add a simple implementation of ViT + UNet and provide pre-trained checkpoints on Google Drive
- Fix the isaacgym download path
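For context on the normalization fix above, pre-processing pipelines like this typically rescale each observation/action dimension to a fixed range from dataset statistics. The helper below is a hedged sketch of that pattern, not the repository's code; the function name, the [-1, 1] target range, and the epsilon guard are assumptions.

```python
import numpy as np

def normalize_to_unit_range(data: np.ndarray,
                            stats_min: np.ndarray,
                            stats_max: np.ndarray) -> np.ndarray:
    """Map each dimension linearly from [min, max] to [-1, 1].

    The epsilon guard protects zero-range dimensions (constant features),
    a common source of NaNs in normalization code.
    """
    span = np.maximum(stats_max - stats_min, 1e-8)
    return 2.0 * (data - stats_min) / span - 1.0

# Example: first dimension spans [0, 4]; second is constant.
actions = np.array([[0.0, 1.0],
                    [2.0, 1.0],
                    [4.0, 1.0]])
lo, hi = actions.min(axis=0), actions.max(axis=0)
norm = normalize_to_unit_range(actions, lo, hi)
```

A constant dimension maps to -1 rather than NaN, and the same `lo`/`hi` statistics must be reused at evaluation time to de-normalize predictions consistently.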