Refactoring: add helper class to bind qnn tensor -> ggml tensor #2

Open

chraac wants to merge 17 commits into qualcomm_qnn_backend_for_ggml (base) from dev-function-to-map-tensor

Conversation


@chraac chraac commented Jun 17, 2024

  • Self Reported Review Complexity:
    • Review Complexity : Low
    • Review Complexity : Medium
    • Review Complexity : High
  • I have read the contributing guidelines

As I said in your upstream PR, it would be better to have a function for wrapping a ggml_tensor into a Qnn_Tensor_t, so I have created this PR for it.
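For readers skimming the thread, a minimal sketch of the binding idea (simplified and hypothetical, not the PR's actual class; qnn_datatype_from_ggml is a made-up helper, and QNN_TENSOR_INIT plus the QNN_TENSOR_SET_* accessors are assumed to be the ones already defined in ggml-qnn.cpp):

#include <array>
#include <cstdint>

#include "ggml.h"
#include "QnnTypes.h"

// Simplified sketch: keep the ggml_tensor and the Qnn_Tensor_t that wraps it
// together, so the conversion logic lives in one place instead of being
// repeated for every operator.
class ggml_qnn_tensor_binding {
public:
    explicit ggml_qnn_tensor_binding(ggml_tensor *tensor) : _ggml_tensor(tensor) {
        QNN_TENSOR_SET_NAME(_qnn_tensor, tensor->name);
        // qnn_datatype_from_ggml() is a hypothetical mapping from ggml_type to Qnn_DataType_t.
        QNN_TENSOR_SET_DATA_TYPE(_qnn_tensor, qnn_datatype_from_ggml(tensor->type));

        // ggml stores dims as int64_t ne[GGML_MAX_DIMS]; QNN expects uint32_t dims.
        const auto rank = static_cast<uint32_t>(ggml_n_dims(tensor));
        for (uint32_t i = 0; i < rank; ++i) {
            _dimensions[i] = static_cast<uint32_t>(tensor->ne[i]);
        }
        QNN_TENSOR_SET_RANK(_qnn_tensor, rank);
        QNN_TENSOR_SET_DIMENSIONS(_qnn_tensor, _dimensions.data());
    }

    Qnn_Tensor_t &qnn_tensor() { return _qnn_tensor; }

private:
    ggml_tensor *_ggml_tensor = nullptr;
    Qnn_Tensor_t _qnn_tensor = QNN_TENSOR_INIT;
    std::array<uint32_t, GGML_MAX_DIMS> _dimensions = {};
};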

Ran the test on the CPU backend; it works well (screenshot omitted).

Ran on the NPU backend; it also works well (screenshot omitted).

ggml-qnn.cpp Outdated
QNN_LOG_WARN("alloc rpcmem failure, %s\n", strerror(errno));
QNN_LOG_DEBUG("tensor%p name %s", _qnn_tensor, QNN_TENSOR_GET_NAME(*_qnn_tensor));
_context = nullptr;
// TODO: should we free the tensor here?
Author (chraac):

Should we free the _qnn_tensor created here by tensorCreateGraphTensor (line 1979)?

Owner (@zhouwg), Jun 17, 2024:

No need. The Qualcomm QNN SDK doesn't seem to provide a function like that, and from Qualcomm's documentation it looks like the SDK manages these internal resources itself.

Owner (zhouwg):

I had already noticed the issue you mention in this PR, but I don't yet understand why it happens. There is very little technical material on the Qualcomm QNN SDK; at the moment there is only that SDK reference manual.

Author (chraac):

My personal guess is that some synchronization operations are missing, but there really isn't much information to go on.

Owner (@zhouwg), Jun 17, 2024:

I'm not sure; I've tried all kinds of experiments and haven't found any valuable public reference material.

From the public information available, a few companies in China have already implemented Qualcomm NPU acceleration; ModelBest (面壁智能), which released Open MiniCPM-V, is one of them. If you are an employee of a commercial company, you could contact them. For an independent developer like me, without an NDA with QTI and without Qualcomm's technical support, getting this fully working may be quite difficult: for example, for some of the error codes there is simply no way to know what they actually mean.

Author (chraac):

I've used Qualcomm's GPU profiler before and it was also full of bugs. For problems like this I feel we have to wait for them to fix it themselves; trying to work around it ourselves would waste a lot of time.

Owner (@zhouwg), Jun 17, 2024:

I agree with you. It would be best to get technical support from Qualcomm.

@zhouwg

zhouwg commented Jun 17, 2024

Thanks for your PR. We can discuss this problem in my personal learning & study project: https://github.com/zhouwg/kantv/tree/ggml-qnn-refine/core/ggml/llamacpp/tests/ggml-qnn.

I don't know your background. Are you an independent developer with spare time like me, or an employee of an AI-related company? If you are a company employee, you could contact ModelBest (面壁智能); they should already have Qualcomm NPU acceleration working.

I didn't expect you to still be interested in this problem. I'm no longer interested in the upstream project. If you like, you can work on ggml-qnn related issues in my learning & research project; we can use Chinese there, which is more convenient, and contribute the results to the community in the end. There's no need to go to the trouble of submitting to upstream (in fact, ever since those several ggml-rpc.cpp related PRs were merged into the master branch, I've been disappointed).

@chraac

chraac commented Jun 17, 2024

(Quoting @zhouwg's comment above; the quoted version ends: "...in fact, ever since those several ggml-rpc.cpp related PRs were merged into the master branch, I've been disappointed; the upstream project has entered garbage time, with no real core improvements.")

Personally I'm still fairly optimistic about this. I've read through that series of RPC PRs and I use them day to day, and I don't think they conflict with the QNN work.
I'm also optimistic about adding a QNN backend to llama.cpp. This PR really is large, though, so review is slow; please don't be discouraged, keep it up! If there's a chance, it's still worth trying to get it merged upstream, since that saves a lot of trouble.

I'm doing this purely as a hobby, mainly to learn and exchange ideas about new things; it's driven by interest. So I don't plan to contact any vendor; I'll just keep working from public materials.

@chraac

chraac commented Jun 17, 2024

  • Let me share a bit of my own experience. When I first started submitting PRs I felt the same way: some review comments seemed a bit personal. After more back and forth I got over it. Put yourself in the reviewer's position: everyone here is doing open source out of interest, it's not a job, and the reviewers are probably hobbyists too. Starting from that point, cooperation tends to go better.
  • Long review times are something I run into often as well (I once had a large PR drag on for months). There may be no real fix, since everyone does this in their spare time and schedules are irregular. What I usually do is write down my reasoning and add comments to each part of the code, which makes review easier.
  • PR prioritization also has no real solution. Some PRs are important, but the community just doesn't have enough reviewers with the relevant background. That does happen, and your PR looks like it's in that position, so it may be hard for other reviewers; please don't lose heart.
  • A community will always have different voices. Some people will object, and that is also a voice, but the community is still fairly democratic. Over the years I've come to think that having a dissenting voice is actually quite valuable.
  • Also, don't sell yourself short: making open-source contributions in your spare time already puts you ahead of many people. Staring at code and comments every day does make it easy to get emotional; that happens, and all we can do is try to keep it under control, not let it affect our judgment, and not let it spill over elsewhere (which is genuinely hard; I can't always manage it either), otherwise uninvolved people get dragged in.
  • Finally, on language: we all work with LLMs ourselves, and LLMs are now quite good at translating between languages. Used well, they can remove a lot of the language barrier.

Thanks for your reply. What you've achieved in your spare time is already impressive; keep it up!

@chraac chraac force-pushed the dev-function-to-map-tensor branch from 4d70039 to 65a14d9 on June 18, 2024 15:09
@chraac

chraac commented Jun 19, 2024

@zhouwg When you have time, could you please take a look at the PR, and if there are no problems, help merge it into your branch?
People are still paying attention to this backend, so please don't give up.
I'll also set aside some time to keep improving this branch.
Once again, thank you for all your effort!

@chraac chraac requested a review from zhouwg June 19, 2024 02:48
@chraac chraac force-pushed the dev-function-to-map-tensor branch from 7a77028 to dfe159f on June 19, 2024 03:16
@myan-o

myan-o commented Aug 18, 2024

@chraac

Thank you for the development.
I used the dev-refactoring branch, but creating a tensor for a matmul node fails with an error and doesn't work.

Device: Snapdragon 8 Gen 3, 16 GB

llama-server -m models/Kitsunebi-v1-Gemma2-8k-9B.Q4_K_M.gguf -ngl 40

...
[ggml_qnn_graph, 27]: graph name MUL_MAT_3584x2048x1x1_3584x2x1x1_2048x2x1x1
[ggml_qnn_graph, 75]: can't create qnn graph handle with graph name MUL_MAT_3584x2048x1x1_3584x2x1x1_2048x2x1x1, error = 6003

diff --git a/ggml/src/ggml-backend.c b/ggml/src/ggml-backend.c
index a8eafac4..e2b421e2 100644
--- a/ggml/src/ggml-backend.c
+++ b/ggml/src/ggml-backend.c
@@ -287,6 +287,7 @@ bool ggml_backend_supports_op(ggml_backend_t backend, const struct ggml_tensor *
 }

 bool ggml_backend_supports_buft(ggml_backend_t backend, ggml_backend_buffer_type_t buft) {
+    if (NULL == backend->iface.supports_buft) return true;
     return backend->iface.supports_buft(backend, buft);
 }

@chraac

chraac commented Aug 18, 2024

(Quoting @myan-o's report above.)

Hi @myan-o, thanks for the feedback. As I said before, in ggml the input tensor of the matmul operator needs to be transposed; to achieve that I have a lot more refactoring to do, so the mulmat operator is still under construction. For more information, have a look here: chraac@63dc587
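For context, here is a minimal sketch of the shape convention involved (my reading of ggml's mul_mat layout, not code from this PR): a plain matrix multiply consumes its operands as given, which is why one ggml input effectively arrives pre-transposed and needs extra handling on the QNN side.

#include <cassert>
#include <cstdint>

// ggml_mul_mat(ctx, a, b): ne[0] is the row length, so
//   a = [K, N] (N rows of length K), b = [K, M]  ->  dst = [N, M],
// i.e. every dst row is a row of b dotted with each row of a (dst = b * a^T).
struct dims2d { int64_t ne0, ne1; };

inline dims2d mul_mat_result_dims(dims2d a, dims2d b) {
    assert(a.ne0 == b.ne0);       // the shared K dimension must match
    return { a.ne1, b.ne1 };      // dst: ne0 = a's row count, ne1 = b's row count
}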

@myan-o

myan-o commented Aug 18, 2024

@chraac

Thank you for your answer. So does that mean that matmul operations are not implemented yet?

I have also made a pull request for some minor fixes, so please take a look.

@myan-o

myan-o commented Aug 18, 2024

@chraac

The Termux development environment lacks parts of the C++ standard library, and the build fails.

  • std::aligned_alloc
  • atomic_is_lock_free
[  5%] Building CXX object ggml/src/CMakeFiles/ggml.dir/ggml-qnn/utils.cpp.o
/data/data/com.termux/files/home/git/llama.cpp/ggml/src/ggml-qnn/utils.cpp:124:23: error: reference to unresolved using declaration
  124 |     void *data = std::aligned_alloc(alignment, size_aligned);
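Not part of this branch, but for anyone hitting the same Termux/libc++ gap, a minimal sketch of a fallback that avoids std::aligned_alloc (the helper name qnn_aligned_alloc is made up for illustration):

#include <cstddef>
#include <cstdlib>

// Hypothetical helper: use posix_memalign where std::aligned_alloc is missing
// from the C++ runtime (e.g. some Android/Termux toolchains).
static void *qnn_aligned_alloc(size_t alignment, size_t size) {
#if defined(__ANDROID__) || defined(__BIONIC__)
    void *ptr = nullptr;
    if (posix_memalign(&ptr, alignment, size) != 0) {
        return nullptr;
    }
    return ptr;
#else
    // size must be a multiple of alignment for std::aligned_alloc
    return std::aligned_alloc(alignment, size);
#endif
}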

@FranzKafkaYu

@chraac Sorry to bother you. I'm also deploying an LLM with llama.cpp on a Qualcomm Snapdragon Gen 2 device, using the Qwen2 0.5B model. I've noticed there are multiple branches in your repository; which branch should I use for testing?

@chraac

chraac commented Aug 28, 2024

(Quoting @FranzKafkaYu's question above.)

Hi @FranzKafkaYu, you can use the dev-refactoring branch. But as I said earlier, mulmat still has problems at the moment and is still being worked on.

@FranzKafkaYu

(Quoting the exchange above.)

I had time to test that branch today; it fails to compile. Build command:

 mkdir build-android && cd build-android && cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=latest -DCMAKE_C_FLAGS=-march=armv8.4a+dotprod -DGGML_QNN=ON -DGGML_QNN_SDK_PATH=/home/franzkafka/Desktop/qnn/qairt/2.22.6.240515  .. && make -j4  

Relevant error log:

-- Using latest available ANDROID_PLATFORM: 33.
-- The C compiler identification is Clang 14.0.7
-- The CXX compiler identification is Clang 14.0.7
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /home/franzkafka/Desktop/ndk/android-ndk-r25c/toolchains/llvm/prebuilt/linux-x86_64/bin/clang - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /home/franzkafka/Desktop/ndk/android-ndk-r25c/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.34.1") 
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE  
-- Found OpenMP_C: -fopenmp=libomp (found version "5.0") 
-- Found OpenMP_CXX: -fopenmp=libomp (found version "5.0") 
-- Found OpenMP: TRUE (found version "5.0")  
-- OpenMP found
-- Using llamafile
QNN_SDK_PATH: /home/franzkafka/Desktop/qnn/qairt/2.22.6.240515
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- ARM detected
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
-- Configuring done
-- Generating done
-- Build files have been written to: /home/franzkafka/Desktop/llama/llama.cpp/build-android
[  0%] Generating build details from Git
[  1%] Building C object examples/gguf-hash/CMakeFiles/xxhash.dir/deps/xxhash/xxhash.c.o
[  2%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml.c.o
[  3%] Building C object examples/gguf-hash/CMakeFiles/sha256.dir/deps/sha256/sha256.c.o
-- Found Git: /usr/bin/git (found version "2.34.1") 
[  4%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:2065:5: warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]
    GGML_F16_VEC_REDUCE(sumf, sum);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1166:41: note: expanded from macro 'GGML_F16_VEC_REDUCE'
    #define GGML_F16_VEC_REDUCE         GGML_F32Cx4_REDUCE
                                        ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1156:38: note: expanded from macro 'GGML_F32Cx4_REDUCE'
    #define GGML_F32Cx4_REDUCE       GGML_F32x4_REDUCE
                                     ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1086:11: note: expanded from macro 'GGML_F32x4_REDUCE'
    res = GGML_F32x4_REDUCE_ONE(x[0]);         \
        ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1071:34: note: expanded from macro 'GGML_F32x4_REDUCE_ONE'
#define GGML_F32x4_REDUCE_ONE(x) vaddvq_f32(x)
                                 ^~~~~~~~~~~~~
[  4%] Built target build_info
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:2113:9: warning: implicit conversion increases floating-point precision: 'float32_t' (aka 'float') to 'ggml_float' (aka 'double') [-Wdouble-promotion]
        GGML_F16_VEC_REDUCE(sumf[k], sum[k]);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1166:41: note: expanded from macro 'GGML_F16_VEC_REDUCE'
    #define GGML_F16_VEC_REDUCE         GGML_F32Cx4_REDUCE
                                        ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1156:38: note: expanded from macro 'GGML_F32Cx4_REDUCE'
    #define GGML_F32Cx4_REDUCE       GGML_F32x4_REDUCE
                                     ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1086:11: note: expanded from macro 'GGML_F32x4_REDUCE'
    res = GGML_F32x4_REDUCE_ONE(x[0]);         \
        ~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml.c:1071:34: note: expanded from macro 'GGML_F32x4_REDUCE_ONE'
#define GGML_F32x4_REDUCE_ONE(x) vaddvq_f32(x)
                                 ^~~~~~~~~~~~~
[  4%] Building C object examples/gguf-hash/CMakeFiles/sha1.dir/deps/sha1/sha1.c.o
[  4%] Built target sha256
[  4%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-alloc.c.o
[  4%] Built target sha1
[  5%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-backend.c.o
[  6%] Building C object ggml/src/CMakeFiles/ggml.dir/ggml-quants.c.o
[  6%] Building CXX object ggml/src/CMakeFiles/ggml.dir/llamafile/sgemm.cpp.o
[  6%] Built target xxhash
[  7%] Building CXX object ggml/src/CMakeFiles/ggml.dir/ggml-qnn/backend-ops.cpp.o
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:2:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.hpp:5:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend.hpp:11:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/graph.hpp:13:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/op-config.hpp:9:
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/qnn-lib.hpp:254:47: error: no member named 'variant_npos' in namespace 'std'
        if (_backend_name.find("Htp") != std::variant_npos) {
                                         ~~~~~^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/qnn-lib.hpp:361:47: error: no member named 'variant_npos' in namespace 'std'
        if (_backend_name.find("Htp") != std::variant_npos) {
                                         ~~~~~^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/qnn-lib.hpp:412:47: error: no member named 'variant_npos' in namespace 'std'
        if (_backend_name.find("Htp") != std::variant_npos) {
                                         ~~~~~^
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:2:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.hpp:5:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend.hpp:11:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/graph.hpp:13:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/op-config.hpp:11:
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/tensor.hpp:193:41: error: implicit instantiation of undefined template 'std::array<unsigned int, 4>'
    std::array<uint32_t, GGML_MAX_DIMS> _dimensions = {};
                                        ^
/home/franzkafka/Desktop/ndk/android-ndk-r25c/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/__tuple:219:64: note: template is declared here
template <class _Tp, size_t _Size> struct _LIBCPP_TEMPLATE_VIS array;
                                                               ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:246:1: error: static_assert failed due to requirement 'sizeof (kGgmlOpToQnnOp) / sizeof (kGgmlOpToQnnOp[0]) == (GGML_OP_COUNT + GGML_UNARY_OP_COUNT)' "GGML_OP_COUNT does not match the size of the kGgmlOpToQnnOp table"
static_assert(sizeof(kGgmlOpToQnnOp) / sizeof(kGgmlOpToQnnOp[0]) == (GGML_OP_COUNT + GGML_UNARY_OP_COUNT),
^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:248:1: error: static_assert failed due to requirement 'kGgmlOpToQnnOp[GGML_UNARY_OP_GELU + kGgmlUnaryOpStart] != nullptr' "GGML_UNARY_OP_GELU does not correspond to QNN_OP_GELU"
static_assert(kGgmlOpToQnnOp[GGML_UNARY_OP_GELU + kGgmlUnaryOpStart] != nullptr,
^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:294:23: error: no matching function for call to 'get_qnn_graph_from_cache'
    auto *graph_ptr = get_qnn_graph_from_cache<2, 1>(ctx, _GgmlOp, { src0, src1 }, { dst });
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:252:22: note: candidate function template not viable: cannot convert initializer list argument to 'const std::array<ggml_tensor *, 2UL>'
qnn::ggml_qnn_graph *get_qnn_graph_from_cache(ggml_backend_qnn_context *ctx, size_t op,
                     ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:317:23: error: no matching function for call to 'get_qnn_graph_from_cache'
    auto *graph_ptr = get_qnn_graph_from_cache<1, 1>(ctx, _GgmlOp, { src }, { dst });
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:252:22: note: candidate function template not viable: cannot convert initializer list argument to 'const std::array<ggml_tensor *, 1UL>'
qnn::ggml_qnn_graph *get_qnn_graph_from_cache(ggml_backend_qnn_context *ctx, size_t op,
                     ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:431:1: error: static_assert failed due to requirement 'sizeof (kQnnUnaryOpsTable) / sizeof (kQnnUnaryOpsTable[0]) == (GGML_OP_COUNT + GGML_UNARY_OP_COUNT)' "GGML_OP_COUNT does not match the size of the kQnnUnaryOpsTable table"
static_assert(sizeof(kQnnUnaryOpsTable) / sizeof(kQnnUnaryOpsTable[0]) == (GGML_OP_COUNT + GGML_UNARY_OP_COUNT),
^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:519:1: error: static_assert failed due to requirement 'sizeof (kQnnBinaryOpsTable) / sizeof (kQnnBinaryOpsTable[0]) == GGML_OP_COUNT' "GGML_OP_COUNT does not match the size of the kQnnBinaryOpsTable table"
static_assert(sizeof(kQnnBinaryOpsTable) / sizeof(kQnnBinaryOpsTable[0]) == GGML_OP_COUNT,
^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:2:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.hpp:5:
In file included from /home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend.hpp:4:
/home/franzkafka/Desktop/ndk/android-ndk-r25c/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:3927:24: error: incompatible pointer types assigning to 'std::__shared_weak_count *' from 'std::__shared_ptr_emplace<qnn::ggml_qnn_tensor, std::allocator<qnn::ggml_qnn_tensor>> *'
        __r.__cntrl_ = __cntrl;
                       ^~~~~~~
/home/franzkafka/Desktop/ndk/android-ndk-r25c/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/c++/v1/memory:4444:29: note: in instantiation of function template specialization 'std::shared_ptr<qnn::ggml_qnn_tensor>::__create_with_control_block<qnn::ggml_qnn_tensor, std::__shared_ptr_emplace<qnn::ggml_qnn_tensor, std::allocator<qnn::ggml_qnn_tensor>>>' requested here
    return shared_ptr<_Tp>::__create_with_control_block(__ptr, __hold2.release());
                            ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/graph.hpp:108:22: note: in instantiation of function template specialization 'std::make_shared<qnn::ggml_qnn_tensor, std::basic_string<char>, const QNNBackend &, void *&, std::shared_ptr<qnn::qnn_instance> &>' requested here
                std::make_shared<ggml_qnn_tensor>(std::string(buffer), _device, _graph_handle, _qnn_instance);
                     ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:312:5: error: static_assert failed due to requirement 'kGgmlOpToQnnOp[86UL] != nullptr' "GGML_OP does not have a corresponding QNN_OP"
    static_assert(kGgmlOpToQnnOp[_GgmlOp] != nullptr, "GGML_OP does not have a corresponding QNN_OP");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:424:5: note: in instantiation of function template specialization '(anonymous namespace)::qnn_unary_op_impl<86UL>' requested here
    qnn_unary_op_impl<GGML_UNARY_OP_GELU + kGgmlUnaryOpStart>, // GGML_UNARY_OP_GELU
    ^
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:289:5: error: static_assert failed due to requirement 'kGgmlOpToQnnOp[(ggml_op)25U] != nullptr' "GGML_OP does not have a corresponding QNN_OP"
    static_assert(kGgmlOpToQnnOp[_GgmlOp] != nullptr, "GGML_OP does not have a corresponding QNN_OP");
    ^             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/franzkafka/Desktop/llama/llama.cpp/ggml/src/ggml-qnn/backend-ops.cpp:459:5: note: in instantiation of function template specialization '(anonymous namespace)::qnn_binary_op_impl<GGML_OP_MUL_MAT>' requested here
    qnn_binary_op_impl<GGML_OP_MUL_MAT>, // GGML_OP_MUL_MAT
    ^
13 errors generated.
make[2]: *** [ggml/src/CMakeFiles/ggml.dir/build.make:146: ggml/src/CMakeFiles/ggml.dir/ggml-qnn/backend-ops.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
2 warnings generated.
make[1]: *** [CMakeFiles/Makefile2:1603: ggml/src/CMakeFiles/ggml.dir/all] Error 2
make: *** [Makefile:146: all] Error 2  

I'm not sure where exactly the problem is; maybe the QNN SDK version is wrong?

PS: Could you open the Issues section of your repository? Then we could discuss these questions there instead of disturbing other developers.

@chraac

chraac commented Sep 18, 2024

(Quoting @FranzKafkaYu's build command and error log above.)

Hi @FranzKafkaYu, sorry for the late reply. The Issues section is now open. Also, your problem has already been fixed; it was a static assert that is there by design, to prevent index errors in the internal op tables when ops are added or removed. We can discuss the design details in my fork.
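For readers following along, a simplified sketch of that kind of compile-time table check (the table contents below are illustrative, not the exact entries in the branch):

#include <cstddef>

enum { GGML_OP_COUNT = 3, GGML_UNARY_OP_COUNT = 1 };   // illustrative sizes only

// Maps each ggml op index to a QNN op name; unary ops are appended after the
// regular ops, mirroring the layout the error messages above refer to.
static constexpr const char *kGgmlOpToQnnOp[] = {
    nullptr,            // GGML_OP_NONE: no QNN counterpart
    "ElementWiseAdd",   // GGML_OP_ADD
    "MatMul",           // GGML_OP_MUL_MAT
    "Gelu",             // GGML_UNARY_OP_GELU
};

// If an op is added or removed without updating the table, the build fails
// here instead of silently indexing the wrong entry at runtime.
static_assert(sizeof(kGgmlOpToQnnOp) / sizeof(kGgmlOpToQnnOp[0]) ==
                  (GGML_OP_COUNT + GGML_UNARY_OP_COUNT),
              "GGML_OP_COUNT does not match the size of the kGgmlOpToQnnOp table");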

@jorge-abarca

Hello, @chraac and @zhouwg. I wanted to thank you both for your work on this feature, just know that there are others like me that are following this closely. @AndreasKunar has mentioned this effort on his Performance of llama.cpp on Snapdragon X Elite/Plus discussion and the Support for Snapdragon X Elite NPU & GPU issue open in the ollama repo.

There is a bit of interest for those of us who want to use llama.cpp and ollama with Snapdragon X Elite, we are rooting for you!

As I was trying to see if there was anything I could do to give you a hand, I noticed that you seemed to be struggling a bit with a few things that might not be documented, such as whether the tensor should be freed or whether the SDK manages those resources internally, along with questions about synchronization operations that might have made you consider waiting for technical support from Qualcomm.

How about engaging @yeonseok-zeticai? As he mentioned in the previously closed PR, he worked at Qualcomm until early this year, he has quite a bit of experience with the Qualcomm AI SDK, and he is interested in getting these things done. (Thank you @yeonseok-zeticai!)

It also appears that Andreas might have more time on October to also take a look at this. Would you like to coordinate any efforts? I know how to program in C++ but I am not as familiar with llama.cpp nor ollama as I would like to be; however, I can do my best to learn and aid, too, in any way possible.

@chraac

chraac commented Sep 26, 2024

Hi @jorge-abarca ,

(Quoting the paragraphs above about community interest in Snapdragon X Elite support.)

I'd like to start by thanking everyone for their attention to this project!
While this PR is currently inactive, I'm continuing to work on the refactoring in my own fork: chraac:dev-refactoring. If anyone is interested, please feel free to take a look and provide feedback!

(Quoting the paragraph above about undocumented resource management and synchronization questions.)

We've made significant progress since my last comment. Here's our current status:

  1. The ADD operator is now functional, and the test-backend-ops runs without errors for this operation.
  2. We're currently working on implementing the MUL_MAT operation to pass the test-backend-ops. I've created a PR in my fork addressing this. Your feedback would be greatly appreciated!

(Quoting the suggestion above to engage @yeonseok-zeticai.)

Any assistance would be greatly appreciated! Please direct your comments and contributions to my fork: chraac:dev-refactoring

(Quoting the paragraph above about coordinating efforts in October.)

Reviewed the issue, and I'm delighted to hear that someone is interested in contributing to my fork. I'd be happy to discuss this further. Please feel free to raise issues and submit pull requests (PRs) on my fork. Your input is welcome and appreciated. Thank you!

@Pateo-sunnyhuang

Pateo-sunnyhuang commented Sep 29, 2024

@chraac Hello, I verified the dev-refactoring branch and built it with the following command:
mkdir build-android && cd build-android && cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=latest -DCMAKE_C_FLAGS=-march=armv8.4a+dotprod -DGGML_QNN=ON -DGGML_QNN_SDK_PATH=$QNN_SDK_ROOT .. && make -j4
After building I pushed it to the device and ran it with llama-cli, but it still uses the CPU. Is that because MUL_MAT is not implemented yet?
Also, if the QNN .so files are not copied to /data/local/tmp, it still runs without reporting an error.

@chraac

chraac commented Sep 29, 2024

(Quoting @Pateo-sunnyhuang's question above.)

Hi @Pateo-sunnyhuang, thanks for your interest. mat_mul support is not finished yet and is still in progress, so what you describe, inference still running on the CPU, is expected. As for the QNN .so libraries: they are loaded dynamically via dlopen, so if they are not copied over, the QNN backend returns a failure and it should fall back to the CPU backend.
PS: Android's dynamic loader is maintained by Google itself; as I recall it used to not search LD_LIBRARY_PATH for shared libraries, and I'm not sure whether that has changed.
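A minimal sketch of that dlopen-based loading and fallback (illustrative only, not the branch's actual loader code):

#include <dlfcn.h>
#include <cstdio>

// Try to load a QNN backend library from the default search path mentioned
// above; if this fails the backend init returns an error and the caller
// falls back to the CPU backend.
static void *load_qnn_backend_lib(const char *lib_path) {
    void *handle = dlopen(lib_path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "failed to dlopen %s: %s\n", lib_path, dlerror());
        return nullptr;
    }
    return handle;
}

// e.g. load_qnn_backend_lib("/data/local/tmp/libQnnCpu.so");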

@Pateo-sunnyhuang

Pateo-sunnyhuang commented Oct 9, 2024

(Quoting @chraac's reply above.)

Thanks for the reply. I added some logging to the inference path and found two issues.

1. Inference runs on the CPU because ggml_backend_qnn_init is never called in llama_new_context_with_model.
When I run the qwen2.5-1.5b model, model->n_gpu_layers=0 and model->main_gpu=0, so the condition if (model->n_gpu_layers > 0) is never satisfied. When I remove that condition, the program crashes; part of the console log is below:
`
.........................................................................
llama_new_context_with_model: n_ctx = 32768
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: qnn model->n_gpu_layers=0, model->main_gpu=0
ggml-qnn:[ggml_backend_qnn_init, 390]: ggml_backend_qnn_init(0, (null))
ggml-qnn:[ggml_backend_qnn_init, 393]: extend_lib_search_path is nullptr, will use /data/local/tmp/ as default
ggml-qnn:[ggml_backend_qnn_init, 429]: QNN-CPU backend setenv successfully

ggml-qnn:[load_system, 751]: find a valid qnn system interface

ggml-qnn:[qnn_system_interface, 10]: initialize qnn system successfully

ggml-qnn:[load_backend, 766]: lib_path:/data/local/tmp/libQnnCpu.so

ggml-qnn:[load_backend, 788]: num_providers=1

ggml-qnn:[load_backend, 801]: QNN_API_VERSION_MAJOR=2, major=2, QNN_API_VERSION_MINOR=14, minor=14,

ggml-qnn:[load_backend, 814]: find a valid qnn interface

ggml-qnn:[qnn_init, 248]: device property is not supported

ggml-qnn:[qnn_init, 299]: create QNN device successfully

ggml-qnn:[ggml_backend_qnn_init, 449]: qnn device name QNN-CPU
llama_kv_cache_init: CPU KV buffer size = 896.00 MiB
llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.58 MiB
Segmentation fault
`
2. I would like inference to run on the NPU rather than the CPU/GPU.
But as the log above shows, the device parameter passed to ggml_backend_qnn_init is 0, not 2. How can I get it to use the NPU?

Could you give some direction or guidance for investigating these two issues?
Also, the code in ggml_backend_registry_init doesn't seem to be reached; what is the QNN code there for?

@scguang301

(Quoting @chraac's reply and @Pateo-sunnyhuang's two questions above.)

llama-cli has a parameter to select which device id to use: the -mg flag; it can be set to 2.
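For example (the model path and the other values are illustrative; -ngl offloads layers and -mg picks the main device):

llama-cli -m models/qwen2-0_5b-instruct-q4_k_m.gguf -ngl 99 -mg 2 -p "hello"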

@chraac

chraac commented Oct 12, 2024

(Quoting @Pateo-sunnyhuang's two questions above.)

Hi! Sorry for the slow reply, I've been quite busy lately.

On the first point: upstream's backend registry has been under heavy refactoring recently, so this area keeps changing, and I'm adapting to the new interfaces as they land. For details see this upstream project: https://github.com/users/ggerganov/projects/12

On the second point: my main focus right now is getting mat_mul to pass the test-backend-ops test. That test, like llama-cli, creates the backend through the registry, so the usage should be the same as with test-backend-ops; for device selection see @scguang301's comment above.

Also, if you want to follow the progress of mat_mul support, see this PR in my fork: chraac#2
