The cuda problem is not being solved #321
I think it is related to some Google update. I use Automatic1111 from The Last Ben, and it also stopped working for the same reason.
I ran TheLastBen's sd auto1111 colab, and since Friday I also get the following error message: WARNING[XFORMERS]: Is this a Colab/notebook version issue, or is it somehow connected to my local versions? `which python --version` says I'm running 3.12.
Yeah, as I said above, almost every colab related to SD is broken due to Google's latest update. There is a temporary fix; I don't know whether it will work for Kohya. Check out my topic in The Last Ben's discussion group.
The CUDA fix is pushed. It may still show that CUDA is not initialized or something, but I finished a LoRA training without errors; the problem is actually in the bitsandbytes version.
It works now, thanks for your efforts.
Because of this, I'm able to train using DAdaption but not Adam8bit.
```
CUDA backend failed to initialize: Found cuDNN version 8700, but JAX was built against version 8904, which is newer. The copy of cuDNN that is installed must be at least as new as the version against which JAX was built. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
```
Nothing is working with this error. How can I solve this problem?
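The error message above states the rule JAX enforces: the installed cuDNN must be at least as new as the version JAX was built against, with versions encoded as MAJOR*1000 + MINOR*100 + PATCH (so 8700 is 8.7.0 and 8904 is 8.9.4). A minimal sketch of that check, to make clear why 8700 fails against 8904:

```python
# Sketch of JAX's cuDNN compatibility rule from the error message above:
# the installed cuDNN version must be >= the version JAX was built against.
# Version integers use cuDNN's MAJOR*1000 + MINOR*100 + PATCH encoding.
def cudnn_compatible(installed: int, built_against: int) -> bool:
    """True if the installed cuDNN is at least as new as the build-time cuDNN."""
    return installed >= built_against


if __name__ == "__main__":
    # The failing case from the log: cuDNN 8.7.0 installed, JAX built against 8.9.4.
    print(cudnn_compatible(8700, 8904))  # False -> "CUDA backend failed to initialize"
```

So the fix is either to upgrade cuDNN in the environment to at least 8.9.4, or to install a jaxlib build that matches the cuDNN version the environment actually provides.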