llama.cpp is a high-performance, open-source inference framework for large language models. Created by Georgi Gerganov and originally built around Meta's LLaMA models, it is implemented in plain C/C++ and is designed to run LLMs efficiently across a wide range of hardware, particularly consumer-grade and edge devices.

Key features:

- Plain C/C++ implementation with no external dependencies
- Optimizations for many architectures, including Arm NEON and x86 AVX instruction sets
- Integer quantization from 1.5-bit to 8-bit to reduce memory use and speed up inference
- Multiple backends (CPU, CUDA, Metal, Vulkan, SYCL) with CPU+GPU hybrid inference
Arm KleidiAI is an open-source library that provides AI framework developers with performance-critical routines, known as micro-kernels (uKernels), optimized for Arm CPUs. Each micro-kernel is close to the minimum amount of software needed to accelerate a given operator at high performance.

Key features of KleidiAI include:

- No dependencies on external libraries, no dynamic memory allocation, and no internal threading or scheduling
- Specialized micro-kernels that exploit Arm architecture features such as the dot-product (FEAT_DotProd) and int8 matrix-multiply (FEAT_I8MM) instructions
- A stateless, stable, and consistent API designed to be easy to integrate into AI frameworks
```bash
git clone https://github.com/ggml-org/llama.cpp.git
sudo apt install cmake gcc g++
```
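Before configuring the build, it may be worth confirming that the CPU actually advertises the vector and matrix-multiply extensions targeted below; this is an optional sanity check, and the exact flag names depend on your kernel.

```bash
# Optional: check for the dot-product (asimddp), int8 matmul (i8mm),
# and SVE/SVE2 feature flags used by the build options below.
lscpu | grep -i -E "asimddp|i8mm|sve"
```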
The Radxa Orion O6 ships with an Armv9 CPU, so you can pass the `armv9-a` compile option for hardware-level optimization. The KleidiAI library provides matrix-multiplication kernels tuned for hardware features such as SVE, i8mm, and dot-product acceleration, and is enabled with the build option `GGML_CPU_KLEIDIAI`.
```bash
cmake -B build -DGGML_NATIVE=OFF -DGGML_CPU_ARM_ARCH=armv9-a+i8mm+dotprod+sve+sve2+fp16 -DGGML_CPU_KLEIDIAI=ON
cmake --build build --config Release -j8
```
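Once the build completes, a quick way to confirm the binaries exist and see what they were compiled with is to print the version; the exact output format varies between llama.cpp revisions.

```bash
# Print build number and compiler info for the freshly built binary.
./build/bin/llama-cli --version
```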
This guide uses DeepSeek-R1-Distill-Qwen-1.5B as an example.

Clone the model repository with Git LFS:
```bash
git lfs install
git clone https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
```
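If Git LFS is inconvenient, the model can also be downloaded with the Hugging Face CLI; this is an optional alternative, and the `--local-dir` value here is just an example.

```bash
# Optional alternative: fetch the model via huggingface_hub.
pip3 install -U huggingface_hub
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
  --local-dir DeepSeek-R1-Distill-Qwen-1.5B
```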
:::tip
Python 3.11 or later is recommended.
:::
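To keep the converter's Python dependencies isolated from the system installation, you can optionally work inside a virtual environment, for example:

```bash
# Optional: create and activate a virtual environment first.
python3 -m venv .venv
source .venv/bin/activate
```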
```bash
cd llama.cpp
pip3 install -r ./requirements.txt
python3 convert_hf_to_gguf.py DeepSeek-R1-Distill-Qwen-1.5B/
```
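By default the script writes the GGUF file into the model directory. If you prefer to control the output precision and path explicitly, `convert_hf_to_gguf.py` accepts `--outtype` and `--outfile`:

```bash
# Explicitly produce an F16 GGUF at a chosen path (optional).
python3 convert_hf_to_gguf.py DeepSeek-R1-Distill-Qwen-1.5B/ \
  --outtype f16 \
  --outfile DeepSeek-R1-Distill-Qwen-1.5B-F16.gguf
```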
```bash
cd build/bin
./llama-quantize DeepSeek-R1-Distill-Qwen-1.5B-F16.gguf DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf Q4_0
```
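A quick sanity check is to compare file sizes; the Q4_0 file should come out at roughly a quarter of the F16 original.

```bash
# Compare the original and quantized model sizes.
ls -lh DeepSeek-R1-Distill-Qwen-1.5B-F16.gguf DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf
```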
Available quantization types (as printed by `llama-quantize --help`); this guide uses Q4_0, which the KleidiAI kernels accelerate:
```
 2 or Q4_0    : 4.34G, +0.4685 ppl @ Llama-3-8B
 3 or Q4_1    : 4.78G, +0.4511 ppl @ Llama-3-8B
 8 or Q5_0    : 5.21G, +0.1316 ppl @ Llama-3-8B
 9 or Q5_1    : 5.65G, +0.1062 ppl @ Llama-3-8B
19 or IQ2_XXS : 2.06 bpw quantization
20 or IQ2_XS  : 2.31 bpw quantization
28 or IQ2_S   : 2.5 bpw quantization
29 or IQ2_M   : 2.7 bpw quantization
24 or IQ1_S   : 1.56 bpw quantization
31 or IQ1_M   : 1.75 bpw quantization
36 or TQ1_0   : 1.69 bpw ternarization
37 or TQ2_0   : 2.06 bpw ternarization
10 or Q2_K    : 2.96G, +3.5199 ppl @ Llama-3-8B
21 or Q2_K_S  : 2.96G, +3.1836 ppl @ Llama-3-8B
23 or IQ3_XXS : 3.06 bpw quantization
26 or IQ3_S   : 3.44 bpw quantization
27 or IQ3_M   : 3.66 bpw quantization mix
12 or Q3_K    : alias for Q3_K_M
22 or IQ3_XS  : 3.3 bpw quantization
11 or Q3_K_S  : 3.41G, +1.6321 ppl @ Llama-3-8B
12 or Q3_K_M  : 3.74G, +0.6569 ppl @ Llama-3-8B
13 or Q3_K_L  : 4.03G, +0.5562 ppl @ Llama-3-8B
25 or IQ4_NL  : 4.50 bpw non-linear quantization
30 or IQ4_XS  : 4.25 bpw non-linear quantization
15 or Q4_K    : alias for Q4_K_M
14 or Q4_K_S  : 4.37G, +0.2689 ppl @ Llama-3-8B
15 or Q4_K_M  : 4.58G, +0.1754 ppl @ Llama-3-8B
17 or Q5_K    : alias for Q5_K_M
16 or Q5_K_S  : 5.21G, +0.1049 ppl @ Llama-3-8B
17 or Q5_K_M  : 5.33G, +0.0569 ppl @ Llama-3-8B
18 or Q6_K    : 6.14G, +0.0217 ppl @ Llama-3-8B
 7 or Q8_0    : 7.96G, +0.0026 ppl @ Llama-3-8B
 1 or F16     : 14.00G, +0.0020 ppl @ Mistral-7B
32 or BF16    : 14.00G, -0.0050 ppl @ Mistral-7B
 0 or F32     : 26.00G @ 7B
   COPY       : only copy tensors, no quantizing
```
```bash
cd build/bin
./llama-cli -m DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf
```
```
> hi, who are you
<think>
</think>
Hi! I'm DeepSeek-R1, an artificial intelligence assistant created by DeepSeek. I'm at your service and would be delighted to assist you with any inquiries or tasks you may have.
```
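Besides the interactive CLI, the same GGUF file can be served over HTTP with `llama-server`, which exposes an OpenAI-compatible API; the host, port, and request below are only examples.

```bash
# Serve the model over HTTP (OpenAI-compatible endpoints).
./llama-server -m DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf --host 0.0.0.0 --port 8080

# In another terminal, send a test chat request.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hi, who are you"}]}'
```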
```
radxa@orion-o6:~/llama.cpp/build_kai/bin$ ./llama-bench -m ~/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf -t 8
| model                          |        size |     params | backend    | threads |          test |                  t/s |
| ------------------------------ | ----------: | ---------: | ---------- | ------: | ------------: | -------------------: |
| qwen2 1.5B Q4_0                | 1013.62 MiB |     1.78 B | CPU        |       8 |         pp512 |        186.14 ± 0.62 |
| qwen2 1.5B Q4_0                | 1013.62 MiB |     1.78 B | CPU        |       8 |         tg128 |         32.96 ± 0.20 |
```
For comparison, remove the `-DGGML_CPU_KLEIDIAI=ON` build option and rebuild llama.cpp, as shown in the sketch below.
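A minimal rebuild sketch follows, identical to the earlier configuration except for the dropped KleidiAI flag. Using a separate build directory keeps both binaries around for comparison (the directory names are arbitrary; the logs here use `build_kai` for the KleidiAI build and `build` for the plain one).

```bash
# Same configuration as before, minus -DGGML_CPU_KLEIDIAI=ON.
cmake -B build -DGGML_NATIVE=OFF -DGGML_CPU_ARM_ARCH=armv9-a+i8mm+dotprod+sve+sve2+fp16
cmake --build build --config Release -j8
```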
```
radxa@orion-o6:~/llama.cpp/build/bin$ ./llama-bench -m ~/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf -t 8
| model                          |        size |     params | backend    | threads |          test |                  t/s |
| ------------------------------ | ----------: | ---------: | ---------- | ------: | ------------: | -------------------: |
| qwen2 1.5B Q4_0                | 1013.62 MiB |     1.78 B | CPU        |       8 |         pp512 |         36.09 ± 0.01 |
| qwen2 1.5B Q4_0                | 1013.62 MiB |     1.78 B | CPU        |       8 |         tg128 |         26.44 ± 0.08 |
```
The prefill stage is compute-bound, and the comparison above shows that KleidiAI accelerates it substantially: prompt processing (pp512) improves from 36.09 to 186.14 t/s, roughly a 5x speedup, while the memory-bound token-generation stage (tg128) sees a smaller gain, from 26.44 to 32.96 t/s.
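To explore the compute-bound versus memory-bound behavior further, `llama-bench` accepts comma-separated parameter lists, so one invocation can sweep several configurations; the values below are only examples.

```bash
# Sweep thread counts in one run; -p sets the prompt (prefill) length,
# -n the number of generated tokens.
./llama-bench -m ~/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf -t 4,8,12 -p 512 -n 128
```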