
Compile bug: Converting the Model to Llama.cpp GGUF #10969

Open
ErdemYavuz55 opened this issue Dec 24, 2024 · 0 comments
Git commit

https://github.com/ggerganov/llama.cpp/releases/tag/b4390

Operating systems

Windows

GGML backends

CPU

Problem description & steps to reproduce

https://www.datacamp.com/tutorial/llama3-fine-tuning-locally

I am following this tutorial in a Kaggle notebook. At step 3, "Converting the Model to Llama.cpp GGUF", I ran into the issues below. Could you help me with them?

%cd /kaggle/working
!git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
%cd /kaggle/working/llama.cpp
!sed -i 's|MK_LDFLAGS += -lcuda|MK_LDFLAGS += -L/usr/local/nvidia/lib64 -lcuda|' Makefile
!LLAMA_CUDA=1 conda run -n base make -j > /dev/null

/kaggle/working
Cloning into 'llama.cpp'...
remote: Enumerating objects: 1217, done.
remote: Counting objects: 100% (1217/1217), done.
remote: Compressing objects: 100% (944/944), done.
remote: Total 1217 (delta 260), reused 765 (delta 221), pack-reused 0 (from 0)
Receiving objects: 100% (1217/1217), 19.22 MiB | 19.39 MiB/s, done.
Resolving deltas: 100% (260/260), done.
/kaggle/working/llama.cpp
Error unknown MAMBA_EXE: "/opt/conda/bin/conda", filename must be mamba or micromamba

CondaError: Run 'conda init' before 'conda activate'

Makefile:2: *** The Makefile build is deprecated. Use the CMake build instead. For more details, see https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md. Stop.
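The `Makefile:2` error above says the Makefile build has been deprecated in favor of CMake. Going by the linked build docs, a CMake-based build along these lines should work in the same notebook instead; this is only a sketch: the `-DGGML_CUDA=ON` flag and the directory layout are assumptions carried over from the commands above, not something verified in this environment.

```shell
# Sketch of the CMake build the error message points to
# (paths taken from the notebook above; GGML_CUDA=ON is an assumed
# replacement for the old LLAMA_CUDA=1 Makefile variable).
cd /kaggle/working/llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```

The converted binaries and conversion scripts should then live under `build/bin` and the repository root respectively, so the rest of the tutorial's paths may need adjusting accordingly.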

First Bad Commit

No response

Relevant log output

(Identical to the output shown in the problem description above.)