According to the Qualcomm AI Engine Direct SDK 2.29.0.241129 release notes, the Llama-v2-7B-Chat model is supported by the QNN GPU backend as well as the QNN HTP backend.
As far as I know, when I build Llama-v2-7B-Chat via AI Hub, it targets QNN HTP.
Could you explain how to build a Llama-v2-7B-Chat model that supports the QNN GPU backend?
mestrona-3 added the `question` label ("Please ask any questions on Slack. This issue will be closed once responded to.") on Dec 13, 2024.