onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded #146
"GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported." Let's start by asking which GPU you have, to get the ball rolling and hopefully resolve this issue.
Install onnxruntime-gpu like this:
Which CUDA and cuDNN versions? https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
CUDA 12.1, and they said that onnxruntime 1.18 should work with any 12.x version.
Not clear. If you have the right versions of torch, onnxruntime, CUDA, and cuDNN, it should work.
I have the one for 12.1.
Your torch is right, and your onnxruntime finds CUDA too. No idea what the reason is. I'm using CUDA 12.4 and it works. Maybe try reinstalling, or install a different onnxruntime-gpu version to test.
`pip list` should show:
I don't want to install CUDA 12.4, so that I don't accidentally break the other Python projects I use.
pyenv every time. Don't contaminate other projects; keep them independent.
Yes, but I have a project that uses CUDA 12.1. If I installed CUDA 12.4 in its place, I don't want that project to stop working.
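For what it's worth, the isolation point above doesn't require touching the system CUDA at all: each project can get its own virtual environment with its own torch/onnxruntime wheels. A minimal sketch (the env name and the cu121 index URL are illustrative):

```shell
# One venv per project; wheels installed here can't break other projects.
python3 -m venv lp-env
. lp-env/bin/activate            # on Windows: lp-env\Scripts\activate

# Inside the venv you would pick the wheels matching the CUDA runtime you
# want, e.g. (not run here):
#   pip install torch --index-url https://download.pytorch.org/whl/cu121
#   pip install onnxruntime-gpu

python -c "import sys; print(sys.prefix)"   # confirms the venv is active
```

Deleting the `lp-env` folder removes everything the project installed, leaving other projects untouched.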
@shalevc1098 Have you found a solution yet? I'm experiencing a similar issue. The application still runs, but there is always a notification like the one in the picture.
A proper Python/CUDA/C++ tools installation is explained in this tutorial: Essential AI Tools and Libraries: A Guide to Python, Git, C++ Compile Tools, FFmpeg, CUDA, PyTorch
#321
I have the same problem. Here are the version details: onnxruntime 1.18.0, OS: Windows 10 (checked with `(liveportrait) D:\>pip list | grep torch`).
Also suffering through this error. PATHs have been set, CUDA is available, and cuDNN is installed (I tried multiple versions to rule out a cuDNN mismatch); even with the one-click Windows installation I get the same error posted in this issue. EDIT (SOLUTION 1): I solved it! It turns out that your torch CUDA build does NOT need to MATCH your actual CUDA (see here: https://discuss.pytorch.org/t/would-pytorch-for-cuda-11-6-work-when-cuda-is-actually-12-0/169569). So what I did is: I created a new environment (using Python 3.10):
Then I activated the environment, and pip installed torch for CUDA 11.8:
I then installed the requirements file:
And then I ran the inference.py script and it worked! No cuDNN issues! Other information that I'm not sure helped, but I'll add it here in case it's useful: I modified my environment variables to include the following:
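Pieced together, SOLUTION 1 is roughly the following sequence. This is a sketch: the env name "liveportrait", Python 3.10, and the cu118 index URL are my reading of the post, not copied commands.

```shell
# SOLUTION 1 sketch (run these interactively, not as a script):
#
#   conda create -n liveportrait python=3.10 -y
#   conda activate liveportrait
#   pip install torch torchvision --index-url "$TORCH_INDEX_URL"
#   pip install -r requirements.txt
#   python inference.py
#
# Key point: the torch wheel targets CUDA 11.8 even though the system
# toolkit is 12.1 -- torch wheels bundle their own CUDA runtime libraries,
# so the wheel does not have to match the installed toolkit.
TORCH_INDEX_URL="https://download.pytorch.org/whl/cu118"
echo "would install torch from $TORCH_INDEX_URL"
```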
EDIT 2 (SOLUTION 2): Another way to get things to work is to update onnxruntime. Even though the GitHub page suggests onnxruntime-gpu 1.18.0, I found that a newer one (1.19.0) did work.
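As a concrete sketch of that fix, the update is a single pip command (`--force-reinstall` is a standard pip flag; the 1.19.0 pin comes from the post):

```shell
# SOLUTION 2: swap whatever onnxruntime build is present for 1.19.0.
# Not executed here -- run it inside your LivePortrait environment.
ORT_FIX='pip install --force-reinstall onnxruntime-gpu==1.19.0'
echo "fix: $ORT_FIX"
```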
EDIT 3 (SOLUTION 3 -- wasn't reproducible by me, see EDIT 4): OK, I got everything to work, both the human and animal models! This is how you can do it if you have CUDA 12.1 (and hopefully other versions too). 1) Create a new Python environment. [NOTE: Here I used 3.9 because I was experimenting, and it seems this only works with Visual Studio Python installs?]
2) Activate your environment
3) Install PyTorch [this is copied straight from this GitHub]. 4) Install MultiScaleDeformableAttention (but not as instructed in this repo!). First:
Second: navigate to the folder containing the dependency (from your LivePortrait clone directory). Third: BUILD only -- do not install -- with the setup.py script (learned this from fundamentalvision/Deformable-DETR#223):
The full terminal command should look something like this: This should start running a bunch of stuff, including warnings, etc. The last few lines of the output should look something like:
Fourth: While still inside the \ops directory:
This will install the dependency via pip. This is what the full command and output looked like for me:
Fifth: Check that the dependency is now listed in the environment with:
Under there, you should see: MultiScaleDeformableAttention 1.0. Sixth: you can test that it's working by running test.py. Seventh: now you can cd ../../../../../../../ all the way back to \LivePortrait. 5) Install the requirements.txt
6) Force-reinstall onnxruntime to a newer version 1.19.0
7) Downgrade to a working numpy version: 8) Test inference.py
9) Test the animal version:
10) Yay!

EDIT 4: Everyone reading this probably hates me at this point (because this is a huge post), but I'm writing this to save other people time. I thought I had figured it out (see EDIT 3), but I couldn't reproduce my solution! Because EDIT 3 was a magical thing I could not reproduce, I needed to figure out how to actually get this working without downgrading to a different CUDA or installing yet more things like the CUDA 11.8 Toolkit. I've been able to identify two issues, and solutions to fix them:

1. "CUDA_PATH is set but CUDA wasn't able to be loaded". This can be resolved in many ways. One of them is as described in EDIT 1 of this post: just use an older PyTorch build for CUDA 11.8 (if you are using CUDA 12.1, for example, like me) and you should be able to run the inference script. Another solution (if you want to use PyTorch for CUDA 12.1) is to force-reinstall onnxruntime-gpu to 1.19.0; this will still throw some warnings when you run it.

2. Unable to install XPose with CUDA 12.1. There are many reasons this could fail, and various errors you might see. The easiest fix for this, the absolute easiest fix I have found, is simply to install the dependency from the original repo: git clone https://github.com/fundamentalvision/Deformable-DETR; then, while your Python environment is active (the one you are using for LivePortrait, with a torch version that matches your active CUDA; for me that was CUDA 12.1), install the MultiScaleDeformableAttention dependency. You can do so with something like this:
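Pieced together from the build/install steps earlier in this post (build with setup.py, then pip install, then run test.py), the original-repo route would look roughly like this. The `models/ops` path is my assumption from the Deformable-DETR README, so double-check it against your checkout:

```shell
# Install MultiScaleDeformableAttention from the original Deformable-DETR
# repo instead of the copy vendored with LivePortrait/XPose.
# Run interactively inside your active LivePortrait environment:
#
#   git clone https://github.com/fundamentalvision/Deformable-DETR
#   cd Deformable-DETR/models/ops
#   python setup.py build        # build only, as in step "Third" above
#   pip install .
#   python test.py               # the gradient checks should all pass
#   pip list | grep MultiScaleDeformableAttention
MSDA_REPO="https://github.com/fundamentalvision/Deformable-DETR"
echo "clone from $MSDA_REPO"
```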
If test.py doesn't run, then it didn't work and LivePortrait animal inference won't work either. I've tested this method (installing from the original repo) on Python 3.9 and Python 3.10, with torch 2.3.1+cu121 and torch 2.3.0+cu121, so I've gathered some confidence in reproducing it. By the way, I didn't have to download an older version of the CUDA Toolkit. I hope this helps others who have been struggling with this.
Hello, I ran into the following problem while using LivePortrait:
D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
File "F:\workstation\ComfyUI-aki-v1.3.7\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\workstation\ComfyUI-aki-v1.3.7\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\workstation\ComfyUI-aki-v1.3.7\execution.py", line 66, in map_node_over_list
results.append(getattr(obj, func)(**input_data_all))
File "F:\workstation\ComfyUI-aki-v1.3.7\custom_nodes\comfyui-liveportrait\nodes\live_portrait.py", line 364, in run
live_portrait_pipeline = LivePortraitPipeline(
File "F:\workstation\ComfyUI-aki-v1.3.7\custom_nodes\comfyui-liveportrait\nodes\LivePortrait\src\live_portrait_pipeline.py", line 68, in init
self.cropper = Cropper(crop_cfg=crop_cfg,landmark_runner_ckpt=landmark_runner_ckpt,insightface_pretrained_weights=insightface_pretrained_weights)
File "F:\workstation\ComfyUI-aki-v1.3.7\custom_nodes\comfyui-liveportrait\nodes\LivePortrait\src\utils\cropper.py", line 45, in init
self.landmark_runner = LandmarkRunner(
File "F:\workstation\ComfyUI-aki-v1.3.7\custom_nodes\comfyui-liveportrait\nodes\LivePortrait\src\utils\landmark_runner.py", line 36, in init
self.session = onnxruntime.InferenceSession(
File "F:\workstation\ComfyUI-aki-v1.3.7\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 432, in init
raise fallback_error from e
File "F:\workstation\ComfyUI-aki-v1.3.7\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 427, in init
self._create_inference_session(self._fallback_providers, None)
File "F:\workstation\ComfyUI-aki-v1.3.7\python\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)