Creating custom dataset for shapo #11
Thanks @alihankeleser for your interest in our work. I can help out as much as I can (please see my responses below), but I would ask you to refer to the original NOCS repo and/or create an issue on the NOCS GitHub, since we only use their provided data and do not create any data from scratch ourselves. Having said this, here are my answers:
Hope it helps!
Hi @zubair-irshad, this definitely helps. After going through the code, I found a few things that might be helpful for others:
I am opening another issue about how to create the 'sdf_rgb_pretrained' folder and how to pretrain the latent-code network with custom data, as you mentioned. I cannot proceed to train the network without that data. Thanks a lot for your answers and support!
Great to know that you were able to get things working. Do you mind creating a pull request for points 1 and 2? It would help the community if we mention this in the README as well. Thanks!
Was your error with the meshes in the load_obj function? Thank you!
@Trulli99 Hi, do you know how to create norm.txt? I want to know how to calculate scale_factors. Please help.
@peng25zhang If you have the CAD models corresponding to the real-world images, the scale factors can be calculated from the bounds of the CAD model or point cloud, as in the sketch below, where bbox_dims is the tight bounding box of the real-world CAD model or point cloud, i.e. the 3-dimensional extent (W, H, L) of the shape.
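A minimal sketch of that calculation, assuming the NOCS-style convention of taking the scale factor as the diagonal length of the tight bounding box; the function name and model path here are hypothetical:

```python
import numpy as np
import trimesh  # assuming trimesh is used to load the CAD model

def compute_scale_factor(model_path):
    """Scale factor from the tight bounding box of a CAD model or point cloud.

    bbox_dims is the 3D extent (W, H, L) of the shape; the scale factor is
    taken as the length of the bounding-box diagonal (a NOCS-style assumption).
    """
    mesh = trimesh.load(model_path, force='mesh')
    bbox_min, bbox_max = mesh.bounds         # tight axis-aligned bounds
    bbox_dims = bbox_max - bbox_min          # (W, H, L) extent of the shape
    return float(np.linalg.norm(bbox_dims))  # diagonal length

# Hypothetical usage: the resulting value would go into norm.txt for the model.
# scale = compute_scale_factor('obj_models/real_train/mug_1.obj')
```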
Hello,
First of all, thanks for your contribution!
I am trying to create, from scratch, a custom dataset of objects which are NOT present in the ShapeNet database. To do so, I am trying to imitate the dataset structure that you shared, and I am using BlenderProc to create the synthetic data. I have also downloaded the dataset you used for training and analysed it.
I have several questions regarding the creation of the different files in the dataset:
1. How do I create a depth image exactly like those in the camera_full_depth folder (e.g. camera_full_depth/train/0000/0000_composed.png)? Is there any script in the repository to create this depth image? I did not understand how the depth information is encoded in an RGB image (a decoding sketch follows this list).
2. How do I create bbox.txt for each object file in obj_models?
3. How do I generate camera_train.pkl and camera_val.pkl in obj_models?
4. Why is mug_meta.pkl present in the obj_models folder?
5. How do I create norm.txt and norm_vertices.txt in obj_models/real_train?
6. How do I generate the ‘Results’ folder and all the files in it?
7. In the ‘sdf_rgb_pretrained’ folder, how do I generate the ‘Latent Codes’ and the all_train_ids.json inside it?
8. How do I generate the .pkl files in the ‘gts’ folder?
9. Do we need to store the 6D poses of all the objects in the scene somewhere as annotations for all the images?
Thank you in advance for your answers.