How to get reproducible deterministic evaluation results? #28
os.getpid() is not deterministic between different runs
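Since the seed is derived from a process-dependent value like `os.getpid()`, it changes on every run even when the user-level seed is fixed. A minimal sketch of a deterministic alternative, assuming you want per-worker seeds derived from a fixed base seed (the variable names here are illustrative, not the actual code in `evaluation.py`):

```python
import os
import numpy as np

# Non-deterministic: the process ID changes on every run, so any seed
# derived from it (and hence the sampled trajectories) differs between runs.
pid_seed = os.getpid()

# Deterministic alternative: derive per-worker seeds from a fixed base seed
# via a seeded generator, so repeated runs produce identical seed sequences.
base_seed = 0
rng = np.random.RandomState(base_seed)
worker_seeds = [rng.randint(0, 2**31 - 1) for _ in range(4)]
print(worker_seeds)
```

With `base_seed = 0`, `worker_seeds` is the same list on every run, whereas anything derived from `pid_seed` is not.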
We used a special renderer to improve the aesthetics in our arXiv paper. For the actual environment, to accelerate training and minimize rendering time, we intentionally used a simple renderer. Well-rendered scenes (1) significantly slow down the FPS and (2) still have a large domain gap from real scenes, requiring sim2real vision modules such as CycleGANs.
The key to the rendering includes the following modifications:
Thanks for your reply, I will try the well-rendered scenes.
I evaluate the example pre-trained models on 100 trajectories. I set the seed to 0. I run the following command twice:
For the first run, the Success or Early Stop Rate is 0.81; for the second run, it is 0.84. It seems that the generated seed (produced by the following code) differs between runs even though I set the seed to 0 explicitly.
ManiSkill-Learn/mani_skill_learn/env/evaluation.py
Lines 72 to 74 in 9742da9
So how can I control determinism through the seed?
In addition, I have a question about the ManiSkill environment. I notice that there are shadows of objects and robots in the rendered images in the first version of your arXiv paper, like this:
But the world-frame image I get looks like this (I changed the resolution to 256×256). How can I make the image more realistic, like the one shown above?