Torchscript / Pytorch Mobile Support #112
Conversation
Loading model... fails with:
File "/usr/local/lib/python3.11/site-packages/torch/jit/frontend.py", line 359, in build_param_list
This is a limitation of TorchScript, whose set of supported operators is actually quite limited. In fact, TorchScript has a lot of limitations; a variable number of arguments is only one of the many things it can't handle. Edit: I don't want to discourage anyone, of course, as I'd love to see this pull request merged ;)
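To make the limitation concrete, here is a minimal sketch of the failure mode and the usual workaround (the class names are made up for illustration): scripting a forward that takes *args trips exactly the build_param_list error quoted above, while a fixed, typed signature compiles.

```python
import torch

class VarArgs(torch.nn.Module):
    def forward(self, *inputs):  # variable number of arguments
        return inputs[0]

try:
    torch.jit.script(VarArgs())  # raises in frontend.py, build_param_list
except torch.jit.frontend.NotSupportedError as e:
    print(e)

class FixedArgs(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:  # fixed, typed signature
        return x

torch.jit.script(FixedArgs())  # compiles fine
```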
Thank you. Problem solved.
Great! Did you manage to run it on Android / iOS, @CoderXXLee?
Yes, I'm currently running it on Android, @cyrillkuettel.
This shouldn't happen (I was able to convert things successfully). Did you figure out why this happened, @CoderXXLee?
Does this mean that if we have a different image size, we need to change `"orig_im_size": torch.tensor([1500, 2250], dtype=torch.float)`?
In my case it also worked splendidly. Not sure what the error might have been.
I was able to implement it in C++. I decided to share my project with the community: Libtorch-MobileSAM-Example.
Hello, is there any code that implements TensorRT acceleration with C++ inference? |
No, this must be a glitch.
It worked fine, thank you. I was just wondering what the implications are of this value [1500, 2250] being fixed.
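For anyone hitting the same question: since the exported graph bakes in that size, one workaround is to upsample the model's low-res mask logits to the real image size yourself instead of relying on the baked-in [1500, 2250] postprocessing. A minimal sketch, assuming the usual 256×256 low-res masks (the shapes and function name here are illustrative, not part of this PR):

```python
import torch
import torch.nn.functional as F

# Assumption: the model was exported with a fixed "orig_im_size" of
# [1500, 2250], so its built-in postprocessing returns that resolution.
def masks_for_other_size(low_res_masks: torch.Tensor, target_hw) -> torch.Tensor:
    # Upsample the low-res mask logits to the actual image size ourselves.
    return F.interpolate(low_res_masks, size=target_hw,
                         mode="bilinear", align_corners=False)

masks = masks_for_other_size(torch.randn(1, 1, 256, 256), (1080, 1920))
print(masks.shape)  # torch.Size([1, 1, 1080, 1920])
```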
@cmarschner thanks for doing this! I couldn't get it to run & produce output (same error as @CoderXXLee reported), but this discussion led me to the models @cyrillkuettel shared. Thanks @cyrillkuettel !! |
I'm glad you find it useful. I went through a lot of pain creating these 😅 Link to the models: example-app/models/
Description
This PR makes the model compilable using torch.jit.script() and adds a conversion tool that saves the model in a format that can be consumed by PyTorch Lite on iOS devices.
Changes for TorchScript:
PyTorch Mobile conversion
Example
python ./scripts/convert_pytorch_mobile.py output_dir
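For orientation, the conversion boils down to roughly the following sketch; build_model() is a hypothetical stand-in for the repo's actual model construction, and the output filename is made up:

```python
import sys
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

def build_model() -> torch.nn.Module:
    # Hypothetical stand-in for the repo's model builder; any
    # scriptable nn.Module demonstrates the flow.
    return torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())

def convert(output_dir: str) -> None:
    model = build_model().eval()
    scripted = torch.jit.script(model)         # needs the TorchScript fixes from this PR
    optimized = optimize_for_mobile(scripted)  # CPU backend by default
    optimized._save_for_lite_interpreter(f"{output_dir}/model.ptl")

if __name__ == "__main__":
    convert(sys.argv[1])
```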
The result can be loaded as described in https://pytorch.org/tutorials/prototype/ios_gpu_workflow.html
BUT: the current version only runs on CPU on PyTorch Mobile. The Metal backend appears to be missing strided convolution.
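The CPU-only artifact can at least be sanity-checked on the desktop with the lite interpreter before deploying (a sketch; the filename follows the conversion sketch above, and the single-tensor input is an assumption):

```python
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Load the lite-interpreter file on the desktop and run a dummy input.
model = _load_for_lite_interpreter("output_dir/model.ptl")
out = model(torch.randn(1, 3, 32, 32))  # input shape is illustrative
```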
The caller still needs to provide input scaling and normalization, as is done in the predictor example.
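A minimal sketch of that preprocessing, assuming the upstream SAM predictor defaults (1024-pixel longest side, the standard pixel mean/std, bottom/right zero padding); if this fork changes any of these, adjust accordingly:

```python
import torch
import torch.nn.functional as F

# Normalization constants used by the upstream SAM predictor
# (assumption: this fork keeps the same defaults).
PIXEL_MEAN = torch.tensor([123.675, 116.28, 103.53]).view(3, 1, 1)
PIXEL_STD = torch.tensor([58.395, 57.12, 57.375]).view(3, 1, 1)
IMG_SIZE = 1024  # longest side expected by the image encoder

def preprocess(image: torch.Tensor) -> torch.Tensor:
    # image: uint8 RGB tensor of shape (3, H, W)
    h, w = image.shape[1:]
    scale = IMG_SIZE / max(h, w)
    new_h, new_w = int(h * scale + 0.5), int(w * scale + 0.5)
    x = F.interpolate(image[None].float(), size=(new_h, new_w),
                      mode="bilinear", align_corners=False)[0]
    x = (x - PIXEL_MEAN) / PIXEL_STD
    # Pad bottom/right to a square IMG_SIZE x IMG_SIZE input.
    x = F.pad(x, (0, IMG_SIZE - new_w, 0, IMG_SIZE - new_h))
    return x[None]  # add batch dimension

batch = preprocess(torch.randint(0, 256, (3, 1500, 2250), dtype=torch.uint8))
print(batch.shape)  # torch.Size([1, 3, 1024, 1024])
```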