feat(tabby): Add experimental vulkan support #1588
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #1588      +/-   ##
==========================================
+ Coverage   49.98%   50.33%   +0.34%
==========================================
  Files         112      112
  Lines        8994     9068      +74
==========================================
+ Hits         4496     4564      +68
- Misses       4498     4504       +6

☔ View full report in Codecov by Sentry.
* feat(tabby): Add experimental vulkan support
* Add rocm device
* Fix feature flags
* Vulkan support working
In src/engine.cc:365 the llama_backend_init(...) call is missing the mandatory bool argument. Running cargo build --features cuda fails with:
cargo:warning=src/engine.cc: In constructor ‘llama::{anonymous}::BackendInitializer::BackendInitializer()’:
cargo:warning=src/engine.cc:365:23: error: too few arguments to function ‘void llama_backend_init(bool)’
cargo:warning= 365 | llama_backend_init();
cargo:warning= | ~~~~~~~~~~~~~~~~~~^~
cargo:warning=In file included from src/engine.cc:11:
cargo:warning=llama.cpp/llama.h:267:20: note: declared here
cargo:warning= 267 | LLAMA_API void llama_backend_init(bool numa);
cargo:warning= | ^~~~~~~~~~~~~~~~~~
exit status: 0
exit status: 1
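For context, the compiler output points at an RAII-style backend initializer in src/engine.cc. Below is a minimal sketch of that pattern; the type and function names are taken from the error log, while the struct body and destructor are assumptions on my part, not the actual Tabby source:

#include "llama.h"

namespace llama {
namespace {

// RAII helper: bring the llama.cpp backend up on construction and tear it
// down on destruction.
struct BackendInitializer {
  BackendInitializer() {
    // Recent llama.cpp revisions declare void llama_backend_init(void);
    // older revisions declared void llama_backend_init(bool numa), which is
    // why building against an out-of-date submodule fails with
    // "too few arguments".
    llama_backend_init();
  }
  ~BackendInitializer() {
    llama_backend_free();
  }
};

}  // namespace
}  // namespace llama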
@amne I'm wondering if your llama.cpp submodule is up to date. If you cloned Tabby a while ago, you need to run a submodule update.
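Assuming llama.cpp is vendored as a git submodule (the error log references llama.cpp/llama.h inside the repo), a typical refresh would be something along these lines:

git pull
git submodule update --init --recursive
cargo build --features cuda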
No description provided.