
texture transfer #4

Open
Guptajakala opened this issue Oct 6, 2021 · 5 comments

@Guptajakala

Hi, thanks for the great work!

I'm doing a research project, and your texture transfer is quite interesting and might be applicable to my case. Do you have any plans to release the code for how you produced Figure 10?

@YuDeng
Contributor

YuDeng commented Oct 15, 2021

Hi, sorry, the texture transfer code is not currently available.

To achieve a similar texture transfer result, here is a brief approach:

  1. Embed the source and target shapes into DIF-Net's latent space.
  2. Send the surface points of both shapes into the deformation network and get their corresponding deformation flows. Use these flows to deform the surface points into the canonical space.
  3. For the deformed surface points of the target shape, search for their nearest neighbors among the deformed source points in the canonical space.
  4. Copy the color of the corresponding nearest points of the source shape to the target surface points.

After all these steps, you get a texture transfer result from source to target. A rough sketch of steps 2-4 follows.
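
This sketch is not the released code: it assumes a trained `model` exposing the `get_latent_code` / `get_template_coords` calls shown later in this thread, plus hypothetical arrays `src_pts`, `tgt_pts` (N x 3 surface points), `src_colors` (per-point colors), and subject indices `src_idx` / `tgt_idx`. A KD-tree is just one way to do the nearest-neighbor search.

```python
import torch
from scipy.spatial import cKDTree

def to_canonical(model, pts, subject_idx):
    """Step 2: deform one subject's surface points into the canonical space (p -> p')."""
    idx = torch.Tensor([subject_idx]).squeeze().long().cuda()[None, ...]
    latent = model.get_latent_code(idx)
    coords = torch.from_numpy(pts).float().cuda().unsqueeze(0)
    return model.get_template_coords(coords, latent)[0].data.cpu().numpy()

src_canonical = to_canonical(model, src_pts, src_idx)  # source shape
tgt_canonical = to_canonical(model, tgt_pts, tgt_idx)  # target shape

# Step 3: for each deformed target point, find the nearest deformed source point.
_, nn = cKDTree(src_canonical).query(tgt_canonical, k=1)

# Step 4: copy colors from the matched source points to the target points.
tgt_colors = src_colors[nn]
```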

@Guptajakala
Author

Hi, regarding Figure 2 in the paper: by the canonical space, do you mean s_tilde (before adding delta s) or the final s (after adding delta s)?

@YuDeng
Contributor

YuDeng commented Oct 15, 2021

The texture transfer stage does not need delta s, which is a scalar correction added to the SDF value of a point. To be exact, the point in the canonical space is p'.

So you first have some surface points p on a certain shape (in the original shape space), and you send them into the deform-net to get p' in the canonical space. Then you can use p' to compute nearest neighbors and do the texture transfer. In this stage, there is no need for the template field or for the delta s predicted by the deform-net.
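
To summarize the quantities in Figure 2's notation (with v the deformation flow and T the template field):

```
p'      = p + v(p)           # canonical-space point; the only thing texture transfer needs
s_tilde = T(p')              # template SDF value, before the correction
s       = s_tilde + delta_s  # final SDF value; delta_s is not used for texture transfer
```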

@Guptajakala
Author

Guptajakala commented Oct 15, 2021

```python
import glob

import torch
from scipy.io import loadmat

# `model` is the trained DIF-Net loaded beforehand.
files = sorted(glob.glob('/home/plane/surface_pts_n_normal/*.mat'))
deforms = []
for i in [0, 2]:
    shape = loadmat(files[i])['p']  # surface points (xyz in the first 3 columns)
    subject_id = torch.Tensor([i]).squeeze().long().cuda()[None, ...]
    latent = model.get_latent_code(subject_id)
    coords = torch.from_numpy(shape[..., :3]).float().cuda().unsqueeze(0)
    deformed = model.get_template_coords(coords, latent)  # p -> p' in canonical space
    deformed = deformed[0].data.cpu().numpy()
    deforms.append(deformed)  # Find nearest points between these 2 deformed point clouds?
```

Thanks, now I understand better. I implemented this based on my understanding. Is this what you mean? For getting the latent code, I'm not sure whether the subject_idx corresponds to the same order as the files in the folder. The deformed point cloud looks a bit weird.
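
For the question in the trailing comment, one option for the nearest-neighbor step (a sketch using scipy, not code from this repo), assuming `deforms[0]` is the source cloud and `deforms[1]` the target:

```python
from scipy.spatial import cKDTree

# nn[i] is the index of the nearest deformed source point for target point i;
# copying the source colors along nn textures the target.
_, nn = cKDTree(deforms[0]).query(deforms[1], k=1)
```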

@YuDeng
Contributor

YuDeng commented Oct 15, 2021

The code is exactly what I mean.

The subject_idx follows the order of the training subjects in split/train/xxx.txt, not the order of the point clouds provided in this repo (those are evaluation data).

So one way is to first extract the shape surfaces of the training subjects by running generate.py, and then send those surface points into the deform-net to conduct the texture transfer.
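
For example, a sketch of the index lookup (assuming the split file lists one subject per line in training order; the real filename replaces xxx.txt):

```python
# subject_idx i corresponds to the i-th subject listed in the training split.
with open('split/train/xxx.txt') as f:
    train_subjects = [line.strip() for line in f if line.strip()]

subject_name = train_subjects[subject_idx]  # the shape to extract with generate.py
```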
