# Datasets

## Provided files

We provide the following zip archives, to be unzipped within the `data/` directory (see the extraction sketch after this list):

- `SurfaceSamples.zip`: Point cloud samplings of ShapeNet objects from the 13 categories used in the paper. These are obtained by running the sampling strategy from the authors of DeepSDF on the isosurfaces provided by the authors of DISN.
- `renderings_rgb.zip`: RGB renderings of the above ShapeNet shapes, using the pipeline from DISN's authors. As explained in the main paper, we had to re-run the rendering script because the provided depth maps are clipped (see this issue). As a result, the viewpoints do not correspond to the ones released by the authors of DISN.
- `renderings_depth.zip`: Full, unclipped depth map renderings of the above ShapeNet shapes, using the pipeline from DISN's authors.
- `inferred_cameras.zip`: Predicted cameras for the above viewpoints, as inferred by the auxiliary pose estimator.
- `inferred_depth.zip`: Predicted depth maps for the above viewpoints, as inferred by the auxiliary depth estimator.
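
For convenience, here is a minimal extraction sketch in Python. It assumes the five archives were downloaded next to the repository root; adjust the paths to wherever you saved them.

```python
# Extract all provided archives into data/ (paths are an assumption;
# adapt them to your download location).
import zipfile
from pathlib import Path

data_dir = Path("data")
data_dir.mkdir(exist_ok=True)

archives = [
    "SurfaceSamples.zip",
    "renderings_rgb.zip",
    "renderings_depth.zip",
    "inferred_cameras.zip",
    "inferred_depth.zip",
]
for name in archives:
    with zipfile.ZipFile(name) as archive:
        archive.extractall(data_dir)  # unzip within data/, as required
```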

In addition, please place `normalization_parameters.pck` directly at the root of `data/`. This pickled file contains the translation and scaling parameters applied to the original ShapeNet meshes so that they match the point clouds provided in `SurfaceSamples.zip`. It is required by the dataloader.
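
If you want to sanity-check the file before training, the sketch below simply loads and inspects it; we make no assumption about its internal layout, and the dataloader remains the authoritative consumer of this file.

```python
# Load the pickled normalization parameters and inspect the container.
import pickle

with open("data/normalization_parameters.pck", "rb") as f:
    normalization = pickle.load(f)

# Inspect the top-level type before relying on any particular structure.
print(type(normalization))
```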

After downloading and extracting the above files, the `data/` directory structure should look like this:

```
|-- SurfaceSamples
|   |-- ShapeNet
|   |   |-- 02691156
|   |   ...
|-- inferred_cameras
|   |-- 02691156
|   ...
|-- inferred_depth
|   |-- 02691156
|   ...
|-- normalization_parameters.pck
|-- renderings_depth
|   |-- 02691156
|   ...
|-- renderings_rgb
|   |-- ShapeNet
|   |   |-- 02691156
|   |   ...
|-- splits
|   |-- all_13_classes_test.json
|   |-- all_13_classes_train.json
|   |-- cars_test.json
|   `-- cars_train.json
```
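
As a quick sanity check, the sketch below verifies that the expected entries are in place. The path list mirrors the layout shown above and nothing more.

```python
# Verify the expected data/ layout (derived directly from the tree above).
from pathlib import Path

expected = [
    "SurfaceSamples/ShapeNet",
    "inferred_cameras",
    "inferred_depth",
    "normalization_parameters.pck",
    "renderings_depth",
    "renderings_rgb/ShapeNet",
    "splits/all_13_classes_test.json",
    "splits/all_13_classes_train.json",
    "splits/cars_test.json",
    "splits/cars_train.json",
]
missing = [p for p in expected if not (Path("data") / p).exists()]
print("Layout OK" if not missing else f"Missing entries: {missing}")
```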

## Choose data source for training/testing

Depth maps and cameras inferred by the auxiliary networks are pre-generated. To choose between ground-truth and inferred ones when training or testing a UCLID-Net model, please see the options at the top of the dataloader (illustrated below).
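
The snippet below is purely illustrative: the flag names are placeholders, not the actual options, which live at the top of the dataloader in this repository.

```python
# Hypothetical switches -- the real option names are defined at the top of
# the dataloader; these placeholders only illustrate the choice being made.
USE_INFERRED_DEPTH = True    # False -> ground-truth renderings_depth/
USE_INFERRED_CAMERAS = True  # False -> ground-truth camera parameters
```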

## Acknowledgements

We warmly thank the authors of ShapeNet, DISN and DeepSDF for their datasets and pre-processing pipelines. Please consider citing them!