Video Recording for Furniture Tasks #2

Open

chenzixuan99 opened this issue Sep 7, 2024 · 3 comments

@chenzixuan99

Thank you for sharing such excellent work! I have a small question: how can I save a video of each episode when fine-tuning on the furniture tasks? Is there a specific parameter in the configuration file that I can modify to achieve this?

@chenzixuan99 chenzixuan99 changed the title Storing Videos for Furniture Tasks Video Recording for Furniture Tasks Sep 7, 2024
@allenzren
Member

allenzren commented Sep 8, 2024

Hi there! Recording Furniture-Bench videos is possible if you add `+env.specific.record=True` to the command. Right now it only records video from the first environment, and it can be slow since the camera observations then have to be generated in IsaacGym. We don't recommend doing this with the default 1000 parallelized environments.

You can see the recording implemented here in our Furniture-Bench fork. You can modify the implementation to your needs.

It is probably a better idea to have a script that loads the checkpoint and generates a video with only one environment spawned.
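
For reference, such a script could look roughly like the sketch below. This is only an illustration: the gym-style `reset`/`step`/`render` interface, the `policy.act` call, and the checkpoint handling are assumptions rather than the actual APIs of our fork or the training code, so adapt them to whatever you are using.

```python
# Hypothetical helper, not part of the repo: roll out a trained policy in a
# single environment and save the rendered frames to a video with imageio.
import imageio
import torch


def record_episode(env, policy, video_path="rollout.mp4", max_steps=700, fps=20):
    """Run one episode with `policy` in `env` and write the frames to disk."""
    writer = imageio.get_writer(video_path, fps=fps)
    obs = env.reset()
    for _ in range(max_steps):
        with torch.no_grad():
            action = policy.act(obs)              # placeholder inference call
        obs, reward, done, info = env.step(action)
        frame = env.render(mode="rgb_array")      # assumes an RGB render is exposed
        writer.append_data(frame)
        if done:
            break
    writer.close()
```

You would construct `env` with a single environment spawned and load `policy` from your fine-tuned checkpoint (e.g. with `torch.load`) before calling this.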

@chenzixuan99
Author

chenzixuan99 commented Sep 10, 2024

> Hi there! Recording Furniture-Bench videos is possible if you add `+env.specific.record=True` to the command. Right now it only records video from the first environment, and it can be slow since the camera observations then have to be generated in IsaacGym. We don't recommend doing this with the default 1000 parallelized environments.
>
> You can see the recording implemented here in our Furniture-Bench fork. You can modify the implementation to your needs.
>
> It is probably a better idea to have a script that loads the checkpoint and generates a video with only one environment spawned.

Hi, thank you for your response. Based on your suggestion, I modified the parameters as follows:

env:
  n_envs: 1
  name: ${env_name}
  env_type: furniture
  max_episode_steps: 700
  best_reward_threshold_for_success: 1
  specific:
    headless: false
    furniture: one_leg
    randomness: low
    normalization_path: ${normalization_path}
    act_steps: ${act_steps}
    sparse_reward: true
    record: true

render:
  freq: 10
  num: 1

However, after making these adjustments, no video is recorded during the fine-tuning process. Do you have any other suggestions for resolving this issue? Thank you!

@allenzren
Member

Could you try `env.specific.headless=True`? The instructions in the current README are for visualization with the GUI, not for recording videos. You don't need to make any other changes to the config except for adding `env.specific.record=True`.
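
For example, with everything else in your command unchanged, that would mean appending `env.specific.headless=True +env.specific.record=True` as command-line overrides and leaving the YAML config at its defaults.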

If it still does not work, feel free to share the log with me.

allenzren added a commit that referenced this issue Oct 7, 2024
* remove dataset consistency check

* add pretrain configs

* rename

* transport pretrain cfg

* add ibrl

* fix base policy

* set `deterministic=True` when sampling in diffusion evaluation

* minors

* Revert "add rlpd framework"

* Revert "Revert "add rlpd framework"" (#4)

* match rlpd param names

* rename to `StitchedSequenceQLearningDataset`

* add configs

* add `tanh_output` and dropout to gaussians

* fix ibrl

* minors

---------

Co-authored-by: Justin M. Lidard <[email protected]>
Co-authored-by: allenzren <[email protected]>