Can't run export_v2.py, maybe an operation error #37

Open

guoyanzhongwgfufg opened this issue Jul 18, 2024 · 1 comment

@guoyanzhongwgfufg

Thanks for the release.

When I run `python3 export_v2.py --encoder vitb --input-size 518`, it fails with the error below. I have already upgraded urllib3 (2.2.2) and chardet (5.2.0), so maybe it is an operation error on my side.

The error is as follows:
```
kiosk@ubuntu-c:~/worker/deep estimation/tensorrt_version/Depth-Anything-V2$ python3 export_v2.py --encoder vitb --input-size 518

/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (2.2.2) or chardet (5.2.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Traceback (most recent call last):
  File "export_v2.py", line 47, in <module>
    main()
  File "export_v2.py", line 37, in main
    example_output = depth_anything.forward(dummy_input)
  File "/home/kiosk/worker/deep estimation/tensorrt_version/Depth-Anything-V2/depth_anything_v2/dpt.py", line 179, in forward
    features = self.pretrained.get_intermediate_layers(x, self.intermediate_layer_idx[self.encoder], return_class_token=True)
  File "/home/kiosk/worker/deep estimation/tensorrt_version/Depth-Anything-V2/depth_anything_v2/dinov2.py", line 308, in get_intermediate_layers
    outputs = self._get_intermediate_layers_not_chunked(x, n)
  File "/home/kiosk/worker/deep estimation/tensorrt_version/Depth-Anything-V2/depth_anything_v2/dinov2.py", line 277, in _get_intermediate_layers_not_chunked
    x = blk(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kiosk/worker/deep estimation/tensorrt_version/Depth-Anything-V2/depth_anything_v2/dinov2_layers/block.py", line 247, in forward
    return super().forward(x_or_x_list)
  File "/home/kiosk/worker/deep estimation/tensorrt_version/Depth-Anything-V2/depth_anything_v2/dinov2_layers/block.py", line 105, in forward
    x = x + attn_residual_func(x)
  File "/home/kiosk/worker/deep estimation/tensorrt_version/Depth-Anything-V2/depth_anything_v2/dinov2_layers/block.py", line 84, in attn_residual_func
    return self.ls1(self.attn(self.norm1(x)))
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/kiosk/worker/deep estimation/tensorrt_version/Depth-Anything-V2/depth_anything_v2/dinov2_layers/attention.py", line 76, in forward
    x = memory_efficient_attention(q, k, v, attn_bias=attn_bias)
  File "/home/kiosk/.local/lib/python3.8/site-packages/xformers/ops/fmha/__init__.py", line 276, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/home/kiosk/.local/lib/python3.8/site-packages/xformers/ops/fmha/__init__.py", line 403, in _memory_efficient_attention
    return _fMHA.apply(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 598, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/kiosk/.local/lib/python3.8/site-packages/xformers/ops/fmha/__init__.py", line 74, in forward
    out, op_ctx = _memory_efficient_attention_forward_requires_grad(
  File "/home/kiosk/.local/lib/python3.8/site-packages/xformers/ops/fmha/__init__.py", line 428, in _memory_efficient_attention_forward_requires_grad
    op = _dispatch_fw(inp, True)
  File "/home/kiosk/.local/lib/python3.8/site-packages/xformers/ops/fmha/dispatch.py", line 119, in _dispatch_fw
    return _run_priority_list(
  File "/home/kiosk/.local/lib/python3.8/site-packages/xformers/ops/fmha/dispatch.py", line 55, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
     query       : shape=(1, 1370, 12, 64) (torch.float32)
     key         : shape=(1, 1370, 12, 64) (torch.float32)
     value       : shape=(1, 1370, 12, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
flshattF@… is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
cutlassF is not supported because:
    device=cpu (supported: {'cuda'})
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    device=cpu (supported: {'cuda'})
    unsupported embed per head: 64
```
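For reference, the RequestsDependencyWarning is unrelated; the export fails because xformers' `memory_efficient_attention` has no CPU kernels. Every backend in the dispatch list rejects `device=cpu`, and the flash-attention backend additionally requires float16/bfloat16, so a float32 forward pass on CPU has no operator to run on. Two possible workarounds are sketched below; `depth_anything` and `dummy_input` are the names export_v2.py uses according to the traceback, and everything else is an assumption rather than code from this repo.

```python
# Minimal sketch, assuming a CUDA GPU is available: the cutlassF backend
# accepts float32 on CUDA, so moving the export forward pass onto the GPU
# avoids the dispatch failure.
import torch

depth_anything = depth_anything.to("cuda").eval()
dummy_input = dummy_input.to("cuda")
with torch.no_grad():
    example_output = depth_anything.forward(dummy_input)
```

If no GPU is available, a hypothetical fallback is to swap the xformers call in depth_anything_v2/dinov2_layers/attention.py (line 76 in the traceback) for PyTorch's native `scaled_dot_product_attention`, which does run on CPU in float32:

```python
# Hypothetical drop-in replacement for xformers.ops.memory_efficient_attention,
# not the repo's code. xformers takes tensors shaped
# (batch, seq_len, heads, head_dim), while torch expects
# (batch, heads, seq_len, head_dim), hence the transposes. attn_bias is None
# in this traceback, so it is not handled here.
import torch.nn.functional as F

def sdpa_attention(q, k, v, attn_bias=None):
    assert attn_bias is None, "attention bias not handled in this sketch"
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
    )
    return out.transpose(1, 2)
```

Both functions default to the same 1/sqrt(head_dim) scaling, so the swap should be numerically equivalent; whether the resulting graph is still friendly to the TensorRT export path is untested here.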

@J4Q8

J4Q8 commented Oct 25, 2024

Hi! I have exactly the same issue. Have you managed to solve it?
