Baichuan-7B: Large language model achieving state-of-the-art performance on Chinese and English language benchmarks
Baichuan-7B is a family of LLMs. It achieves state-of-the-art performance for its size on standard, authoritative Chinese and English benchmarks (C-EVAL/MMLU). The model is quantized to 4-bit weights and 16-bit activations, making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency.
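The latency decomposition above can be sketched as a simple model: total generation time is the prompt processor's time to first token plus the token generator's per-token latency for each additional token. The function and the latency figures below are illustrative placeholders, not measured values for this model.

```python
# Sketch of the two-stage latency model described above: one cost for the
# first token (prompt processing) and a fixed per-token cost for each
# additional token (KV-cache token generation).

def total_generation_time_ms(prompt_processor_ms: float,
                             token_generator_ms: float,
                             output_tokens: int) -> float:
    """Time to first token plus per-token cost for the remaining tokens."""
    return prompt_processor_ms + token_generator_ms * (output_tokens - 1)

# Placeholder latencies: 500 ms to first token, 50 ms per additional
# token, 10 output tokens -> 500 + 50 * 9 = 950 ms.
print(total_generation_time_ms(500.0, 50.0, 10))
```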
This is based on the implementation of Baichuan-7B found here. This repository contains scripts for optimized on-device export suitable to run on Qualcomm® devices. More details on model performance across various devices can be found here.
Sign up for early access to run these models on a hosted Qualcomm® device.
- The license for the original implementation of Baichuan-7B can be found here.
- The license for the compiled assets for on-device deployment can be found here.
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.