How to get reproducible deterministic evaluation results? #28

Yingdong-Hu opened this issue May 10, 2022 · 3 comments

I evaluate the example pre-trained models on 100 trajectories. I set the seed to 0 and run the following command twice:

```bash
python -m tools.run_rl configs/bc/mani_skill_point_cloud_transformer.py --gpu-ids=0 --evaluation \
  --work-dir=./test/OpenCabinetDrawer_1045_link_0-v0_pcd \
  --resume-from=./example_mani_skill_data/OpenCabinetDrawer_1045_link_0-v0_PN_Transformer.ckpt \
  --cfg-options "env_cfg.env_name=OpenCabinetDrawer_1045_link_0-v0" \
                "eval_cfg.save_video=False" \
                "eval_cfg.num=100" \
                "eval_cfg.num_procs=10" \
                "eval_cfg.use_log=True" \
  --seed=0
```

For the first run, the Success or Early Stop Rate is 0.81; for the second, it is 0.84.
It seems that the generated seed (from the following code) differs between runs, even though I set the seed to 0 explicitly.

```python
if hasattr(self.env, 'seed'):
    # Make sure that envs in different processes have different behaviors
    self.env.seed(np.random.randint(0, 10000) + os.getpid())
```

So how can I control determinism through the seed?

In addition, I have a question about the ManiSkill environment. I notice that there are shadows of objects and robots in the rendered images in the first version of your arXiv paper, like this:

[image: rendered scene with shadows, from the arXiv paper]

But the world-frame image I get looks like this (I changed the resolution to 256×256). How can I make the image more realistic, like the one shown above?

[image: world-frame render without shadows]

xuanlinli17 (Collaborator) commented May 14, 2022

> For the first run, the Success or Early Stop Rate is 0.81; for the second, it is 0.84.
> It seems that the generated seed (from the following code) differs between runs, even though I set the seed to 0 explicitly.

`os.getpid()` is not deterministic between different runs, so the seed passed to each environment changes from run to run even when the global seed is fixed.
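
As a minimal sketch of one way to restore determinism (not the repo's actual code; `seed_env`, `base_seed`, and `worker_rank` are hypothetical names), each worker's environment seed could be derived from the global seed and a stable worker index instead of the PID:

```python
def seed_env(env, base_seed, worker_rank):
    """Seed one evaluation worker's environment reproducibly.

    base_seed:   the global seed (e.g. the --seed=0 argument)
    worker_rank: a stable index in [0, num_procs), unlike os.getpid()
    """
    if hasattr(env, 'seed'):
        # Different workers still behave differently, but the seeds
        # are now identical across runs.
        env.seed(base_seed + worker_rank)
```

With `eval_cfg.num_procs=10`, worker `i` would call `seed_env(env, 0, i)`, so repeated runs would evaluate the same episode seeds.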

> I notice that there are shadows of objects and robots in the rendered images in the first version of your arXiv paper, like this:

We used a special renderer to improve the aesthetics in our arXiv paper, while for the actual environment we intentionally used a simple renderer to accelerate training and minimize rendering time. Well-rendered scenes (1) significantly slow down FPS and (2) still have a large domain gap from real scenes, requiring sim2real vision modules such as CycleGANs.

fbxiang commented May 14, 2022

The key to the rendering includes the following modifications (a sketch of these steps follows the list):

  1. Add environment map (https://github.com/haosulab/SAPIEN/blob/12a83f9fd83b81a6211d8b4b6146c80b74fea93f/python/pysapien_content.hpp#L857-L858)
  2. Enable shadows and tune their parameters when adding lights (https://github.com/haosulab/SAPIEN/blob/12a83f9fd83b81a6211d8b4b6146c80b74fea93f/python/pysapien_content.hpp#L816-L829)
  3. Fine-tune the material parameters for the object and the ground.
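
As a rough sketch of what those three steps can look like through the SAPIEN Python API (assuming SAPIEN 2.x; method names may differ in the version the repo pins, and the environment-map path and material values are placeholders):

```python
import sapien.core as sapien

engine = sapien.Engine()
renderer = sapien.SapienRenderer()  # VulkanRenderer() on older releases
engine.set_renderer(renderer)
scene = engine.create_scene()

# 1. Add an environment map (file path is a placeholder).
scene.set_environment_map("assets/env_map.ktx")

# 2. Enable shadows and tune their parameters when adding lights.
scene.set_ambient_light([0.3, 0.3, 0.3])
scene.add_directional_light([0, 1, -1], [1.0, 1.0, 1.0], shadow=True)

# 3. Fine-tune material parameters for the object and the ground.
ground_mat = renderer.create_material()
ground_mat.base_color = [0.5, 0.5, 0.5, 1.0]  # illustrative values
ground_mat.roughness = 0.7
ground_mat.metallic = 0.0
scene.add_ground(altitude=0, render_material=ground_mat)
```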

Yingdong-Hu (Author) commented
Thanks for your reply, I will try the well-rendered scenes.
For the first question, I know `os.getpid()` is not deterministic between runs. However, I find that the number generated by `np.random.randint(0, 10000)` also differs across runs, even though I set the NumPy seed to 0 explicitly.
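
For what it's worth, `np.random.randint` is reproducible within a single process once the global seed is set, which suggests the differing draws come from the worker processes not being seeded before that call (an assumption about the evaluation setup, not a confirmed diagnosis). A minimal check:

```python
import numpy as np

np.random.seed(0)
a = np.random.randint(0, 10000)

np.random.seed(0)
b = np.random.randint(0, 10000)

assert a == b  # same seed, same process -> same draw, every run
```

Depending on the multiprocessing start method, the parent's seeded RNG state may not carry over to workers; seeding inside each worker (e.g. with a scheme like the `seed_env` sketch above) avoids the ambiguity.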
