Can't load model #34
Comments
I'm having a similar problem. Win10, ggml Alpaca 7B downloaded from Hugging Face.
Can you try the alpaca-native-enhanced model?
I've had the same problem with "ggml-model-q4_0.bin" but had no issue with "ggml-model-q4_0_unfiltered.bin".
Same issue here with gpt4all-lora-quantized.bin and ggml-alpaca-7b-q4.bin.
Using 2 files from Pi3141: gpt4-x-alpaca-native-13B-ggml worked, but alpaca-native-7B-ggml didn't load.
Similar here: it won't load any of my 3 quantized 7B and 13B alpaca variants that worked in dalai or alpaca.cpp. ...
Same here, ggml-alpaca-7b-q4.bin is not loading.
Same here with llama7B, llama13B, alpaca, ... - all of them work locally with llama.cpp on the command line, but all of them hang on load here. The parameters used to invoke the llama.cpp command line look right, and the command-line status shows apparent completion, yet the web-UI dialogue stays stuck on loading. EDIT: running on macOS/Apple Silicon via a current git clone, plus copying the templates folder from the ZIP.
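For anyone who wants to reproduce that check, the sketch below drives the llama.cpp binary directly from Python and reports whether the model load completes outside the web UI. The binary name (./main), model path, and flags are assumptions based on a typical llama.cpp build, not this project's exact invocation; adjust them to your local setup.

```python
# Minimal sketch: run the llama.cpp binary directly to confirm that the model
# itself loads, independent of the web UI. Paths and flags are assumptions;
# adjust LLAMA_BIN and MODEL to match your local build.
import subprocess
import time

LLAMA_BIN = "./main"                      # hypothetical path to the llama.cpp binary
MODEL = "./models/ggml-alpaca-7b-q4.bin"  # hypothetical model path

start = time.time()
proc = subprocess.run(
    [LLAMA_BIN, "-m", MODEL, "-p", "Hello", "-n", "16"],
    capture_output=True,
    text=True,
    timeout=600,  # give load + generation up to 10 minutes before giving up
)
print(f"exit code: {proc.returncode}, elapsed: {time.time() - start:.1f}s")
print(proc.stdout)
# If this finishes but the web UI still shows "loading", the problem is in how
# the UI talks to the subprocess, not in the model file itself.
```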
Same here, endless loading of model "ggml-model-q4_0.bin".
Updated / totally edited for better clarification.
Cause of the model hang for me: Hope this clarifies/helps.
Update - I got it to work (most of the time) on my Mac by changing alpaca_turbo.py quite a bit, but I don't think it is mergeable as a pull request, because my solution seems to be a one-off for my particular setup.
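That comment doesn't share the actual patch, so the following is only an illustration of the general approach one might take when a UI blocks while waiting on subprocess output: read the llama.cpp process on a background thread and poll a queue with a timeout instead of blocking on readline(). All names and paths here are illustrative, not taken from alpaca_turbo.py.

```python
# Sketch of the general idea only, not the actual change to alpaca_turbo.py:
# pump the subprocess output into a queue on a background thread so the UI
# never blocks indefinitely while the model is still loading.
import subprocess
import threading
import queue

def start_llama(cmd):
    proc = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1,  # line-buffered so output arrives as it is produced
    )
    out_q = queue.Queue()

    def pump():
        # Forward each output line to a queue the UI can poll without blocking.
        for line in proc.stdout:
            out_q.put(line)
        out_q.put(None)  # signal that the process has exited

    threading.Thread(target=pump, daemon=True).start()
    return proc, out_q

# Usage (paths are assumptions):
# proc, out_q = start_llama(["./main", "-m", "./models/ggml-alpaca-7b-q4.bin", "-ins"])
# line = out_q.get(timeout=120)  # raises queue.Empty instead of hanging forever
```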
I waited 1 hour, but the model did not load. This is on Windows 10.