Qualcomm® AI Hub Models

Baichuan-7B is a family of LLMs. It achieves the state-of-the-art performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU). The model is quantized to 4-bit weights and 16-bit activations, making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency.
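
A minimal sketch of what this latency split implies for end-to-end generation time, assuming the prompt processor governs the time to first token and the token generator governs every subsequent token; the numbers used in the example are placeholders, not measured results:

```python
def estimated_generation_time_s(
    prompt_processor_latency_s: float,  # time to first token
    token_generator_latency_s: float,   # average time per additional token
    num_output_tokens: int,
) -> float:
    """Rough end-to-end generation time: first token from the prompt
    processor, each remaining token from the KV-cache token generator."""
    if num_output_tokens < 1:
        return 0.0
    return prompt_processor_latency_s + (num_output_tokens - 1) * token_generator_latency_s


# Placeholder example: 1.2 s to first token, 50 ms per additional token.
print(f"{estimated_generation_time_s(1.2, 0.05, 128):.2f} s for 128 output tokens")
```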

This is based on the implementation of Baichuan-7B found here. This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices; a sketch of how such an export is typically invoked follows below. More details on model performance across various devices can be found here.
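
A minimal sketch of invoking the export script through its module entry point, assuming the standard qai_hub_models layout; the model id `baichuan_7b_quantized` is an assumption here, so substitute the actual module name from this repository:

```python
# Run the repository's export script as a module. The model id below is a
# placeholder; check this repository for the real module path.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "qai_hub_models.models.baichuan_7b_quantized.export"],
    check=True,
)
```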

Sign up for early access to run these models on a hosted Qualcomm® device.

License

  • The license for the original implementation of Baichuan-7B can be found here.
  • The license for the compiled assets for on-device deployment can be found here.

References

Community