Hi, thanks for the excellent work.
I'm using your code for some experiments and found a bug in `core/utils/decoder_utils.py`:
In line 84, the `grad_outputs` param should receive `torch.ones_like(sdf)` instead of `torch.ones_like(points_batch)`.
So the final line should be:

```python
grad_tensor = torch.autograd.grad(outputs=sdf, inputs=points_batch, grad_outputs=torch.ones_like(sdf), create_graph=True, retain_graph=True)
```
The original code raises the following error during a test:
```
Traceback (most recent call last):
  File "run_multi_pmodata.py", line 115, in <module>
    main()
  File "run_multi_pmodata.py", line 100, in main
    shape_code, optimizer_latent = optimize_multi_view(sdf_renderer, evaluator, shape_code, optimizer_latent, imgs, cameras, weight_list, num_views_per_round=num_views_per_round, num_iters=20, num_sample_points=num_sample_points, visualizer=visualizer, points_gt=points_gt, vis_dir=vis_dir, vis_flag=args.visualize, full_flag=args.full)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/inv_optimizer/optimize_multi.py", line 76, in optimize_multi_view
    visualizer=visualizer)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/inv_optimizer/loss_multi.py", line 27, in compute_loss_color_warp
    render_output = sdf_renderer.render_warp(shape_code, R1, T1, R2, T2, view1, view2, no_grad_normal=True)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/sdfrenderer/renderer_warp.py", line 131, in render_warp
    normal1 = self.render_normal(latent, R1, T1, Zdepth1, valid_mask1, no_grad=no_grad_normal, clamp_dist=clamp_dist, MAX_POINTS=100000)  # (3, H*W)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/sdfrenderer/renderer.py", line 896, in render_normal
    gradient = decode_sdf_gradient(self.decoder, latent, points.transpose(1,0), clamp_dist=clamp_dist, no_grad=no_grad, MAX_POINTS=MAX_POINTS)  # (N, 3)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/utils/decoder_utils.py", line 84, in decode_sdf_gradient
    grad_tensor = torch.autograd.grad(outputs=sdf, inputs=points_batch, grad_outputs=torch.ones_like(points_batch), create_graph=True, retain_graph=True)
  File "/home/yudeng/anaconda3/envs/siren/lib/python3.6/site-packages/torch/autograd/__init__.py", line 151, in grad
    grad_outputs = _make_grads(outputs, grad_outputs)
  File "/home/yudeng/anaconda3/envs/siren/lib/python3.6/site-packages/torch/autograd/__init__.py", line 30, in _make_grads
    + str(out.shape) + ".")
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([2006, 3]) and output[0] has a shape of torch.Size([2006, 1]).
```
Maybe you can change:

```python
grad_tensor = torch.autograd.grad(outputs=sdf, inputs=points_batch, grad_outputs=torch.ones_like(points_batch), create_graph=True, retain_graph=True)
```

to:

```python
grad_tensor = torch.autograd.grad(outputs=sdf, inputs=points_batch, grad_outputs=torch.ones_like(sdf), create_graph=True, retain_graph=True)
```
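For anyone hitting the same error: `torch.autograd.grad` requires `grad_outputs` to match the shape of `outputs`, not `inputs`. A minimal standalone sketch reproducing the shape constraint, using a toy distance function in place of the actual decoder (the names `points` and `sdf` here are illustrative, not the repo's code):

```python
import torch

# Toy SDF-style setup: (N, 3) points mapped to (N, 1) signed distances.
points = torch.randn(2006, 3, requires_grad=True)
sdf = points.pow(2).sum(dim=1, keepdim=True).sqrt() - 1.0  # (2006, 1), unit sphere SDF

# grad_outputs must match `outputs` (sdf, shape (N, 1));
# passing torch.ones_like(points) raises the shape-mismatch RuntimeError above.
grad_tensor = torch.autograd.grad(
    outputs=sdf, inputs=points,
    grad_outputs=torch.ones_like(sdf),
    create_graph=True, retain_graph=True)
gradient = grad_tensor[0]  # (2006, 3) spatial gradient of the SDF
print(gradient.shape)  # torch.Size([2006, 3])
```

For a true SDF the gradient is a unit normal, which is a quick sanity check on the result.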
Thanks @zhongjinluo, your fix works for me :D
Apply bugfix from B1ueber2y#4
708d545