Commit b5d809a: rebase

zhouwg committed May 23, 2024
2 parents: b2ec4bc + cdb1689

Showing 1 changed file with 0 additions and 1 deletion.

llama.cpp (1 change: 0 additions & 1 deletion)
The commit deletes a stray merge-conflict marker (`=======`) that had been left inside llama_supports_gpu_offload():

@@ -15394,7 +15394,6 @@ bool llama_supports_mlock(void) {
 bool llama_supports_gpu_offload(void) {
 #if defined(GGML_USE_CUDA) || defined(GGML_USE_CLBLAST) || defined(GGML_USE_METAL) || defined(GGML_USE_VULKAN) || \
     defined(GGML_USE_SYCL) || defined(GGML_USE_KOMPUTE) || defined(GGML_USE_RPC) || defined(GGML_USE_QNN)
-=======
     // Defined when llama.cpp is compiled with support for offloading model layers to GPU.
     return true;
 #else
