
vulkan: scale caching for k quants + misc fixes #11081

Merged: 25 commits, Jan 15, 2025

Conversation

@netrunnereve (Collaborator)

We can make inference run a bit faster by extracting the scales in parallel and saving them to shared memory, where they'll be used by all the threads working on the superblock. This came out of the experiments in #10999.

This was not done for Q4_K and Q5_K, as their scales are packed in a complicated way that makes this method slower than the existing code for those types.
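
For illustration, here's a minimal GLSL sketch of the idea. All names (`sccache`, `decode_scale`, `dot_q_b`, the loop bounds) are hypothetical stand-ins, not the actual shader code:

```glsl
// Sketch only: cooperative scale caching for one k-quant superblock.
// Each thread unpacks a single scale into shared memory, so the workgroup
// decodes the scales once instead of every thread redoing the bit-unpacking.
shared float sccache[NUM_SCALES];

float calc_superblock(const uint ib, const uint tid) {
    if (tid < NUM_SCALES) {
        // decode_scale() stands in for the type-specific scale unpacking
        sccache[tid] = decode_scale(data_a[ib], tid);
    }
    barrier(); // the cached scales are now visible to the whole workgroup

    float temp = 0.0;
    for (uint i = 0; i < ITERS_PER_THREAD; ++i) {
        // every thread reuses the shared scales in its dot-product loop
        temp = fma(sccache[scale_idx(i, tid)], dot_q_b(ib, i, tid), temp);
    }
    return temp;
}
```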

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   232.89 us/run - 117.44 MFLOP/run - 504.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   359.69 us/run - 117.44 MFLOP/run - 326.50 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   234.78 us/run - 117.44 MFLOP/run - 500.22 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   313.31 us/run - 117.44 MFLOP/run - 374.84 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   333.78 us/run - 117.44 MFLOP/run - 351.85 GFLOPS

| model | size | params | backend | ngl | threads | main_gpu | sm | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 24.78 ± 0.03 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 21.98 ± 0.02 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 100 | 8 | 1 | none | tg128 | 22.27 ± 0.01 |

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   241.10 us/run - 117.44 MFLOP/run - 487.09 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   449.01 us/run - 117.44 MFLOP/run - 261.56 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   235.58 us/run - 117.44 MFLOP/run - 498.51 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   315.21 us/run - 117.44 MFLOP/run - 372.58 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   365.79 us/run - 117.44 MFLOP/run - 321.06 GFLOPS

| model | size | params | backend | ngl | threads | main_gpu | sm | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 22.15 ± 0.01 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 18.97 ± 0.00 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 100 | 8 | 1 | none | tg128 | 20.38 ± 0.00 |

@github-actions bot added the Vulkan and ggml labels on Jan 5, 2025
@netrunnereve requested a review from 0cc4m on Jan 5, 2025
@github-actions bot added the script, python, and Apple Metal labels on Jan 5, 2025
@netrunnereve removed the script, python, and Apple Metal labels on Jan 5, 2025
@jeffbolznv self-requested a review on Jan 5, 2025
@jeffbolznv (Collaborator)

RTX 4070 results. Keep in mind there's a lot of variability in the results, but at first glance it seems like an improvement for Q3_K but worse for the others:

after:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    60.51 us/run - 117.44 MFLOP/run -   1.94 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10650 runs -    95.48 us/run - 234.88 MFLOP/run -   2.46 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7952 runs -   126.38 us/run - 352.32 MFLOP/run -   2.79 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6603 runs -   153.03 us/run - 469.76 MFLOP/run -   3.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6498 runs -   157.50 us/run - 587.20 MFLOP/run -   3.73 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2568 runs -   402.20 us/run - 939.52 MFLOP/run -   2.34 TFLOPS

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    79.41 us/run - 117.44 MFLOP/run -   1.48 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9798 runs -   105.30 us/run - 234.88 MFLOP/run -   2.23 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.98 us/run - 352.32 MFLOP/run -   2.36 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   172.48 us/run - 469.76 MFLOP/run -   2.72 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4617 runs -   220.06 us/run - 587.20 MFLOP/run -   2.67 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2675 runs -   384.43 us/run - 939.52 MFLOP/run -   2.44 TFLOPS

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   131.17 us/run - 117.44 MFLOP/run - 895.32 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   133.59 us/run - 234.88 MFLOP/run -   1.76 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6248 runs -   163.28 us/run - 352.32 MFLOP/run -   2.16 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6603 runs -   153.27 us/run - 469.76 MFLOP/run -   3.06 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   164.57 us/run - 587.20 MFLOP/run -   3.57 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3959 runs -   253.39 us/run - 939.52 MFLOP/run -   3.71 TFLOPS

before:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  18744 runs -    54.18 us/run - 117.44 MFLOP/run -   2.17 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    73.36 us/run - 234.88 MFLOP/run -   3.20 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10508 runs -    95.71 us/run - 352.32 MFLOP/run -   3.68 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8307 runs -   122.29 us/run - 469.76 MFLOP/run -   3.84 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7011 runs -   145.50 us/run - 587.20 MFLOP/run -   4.04 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3531 runs -   284.81 us/run - 939.52 MFLOP/run -   3.30 TFLOPS

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.74 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   123.00 us/run - 234.88 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.62 us/run - 352.32 MFLOP/run -   2.37 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6177 runs -   162.05 us/run - 469.76 MFLOP/run -   2.90 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   203.90 us/run - 587.20 MFLOP/run -   2.88 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   309.30 us/run - 939.52 MFLOP/run -   3.04 TFLOPS

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   105.69 us/run - 117.44 MFLOP/run -   1.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   110.35 us/run - 234.88 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8236 runs -   122.10 us/run - 352.32 MFLOP/run -   2.89 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   142.71 us/run - 469.76 MFLOP/run -   3.29 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   166.92 us/run - 587.20 MFLOP/run -   3.52 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   217.50 us/run - 939.52 MFLOP/run -   4.32 TFLOPS

@netrunnereve (Collaborator, Author)

For multiple n values I'm seeing clear improvements with Q3_K and Q6_K, but Q2_K is much less consistent and in some cases runs slower than master.

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   337.74 us/run - 234.88 MFLOP/run - 695.45 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   441.88 us/run - 352.32 MFLOP/run - 797.33 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   566.21 us/run - 469.76 MFLOP/run - 829.66 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   740.15 us/run - 587.20 MFLOP/run - 793.36 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1064.79 us/run - 939.52 MFLOP/run - 882.36 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   454.94 us/run - 234.88 MFLOP/run - 516.29 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   539.48 us/run - 352.32 MFLOP/run - 653.08 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1491 runs -   754.86 us/run - 469.76 MFLOP/run - 622.32 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1197 runs -   862.33 us/run - 587.20 MFLOP/run - 680.95 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1182.12 us/run - 939.52 MFLOP/run - 794.78 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   388.88 us/run - 234.88 MFLOP/run - 603.99 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   464.96 us/run - 352.32 MFLOP/run - 757.74 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   550.29 us/run - 469.76 MFLOP/run - 853.67 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   675.49 us/run - 587.20 MFLOP/run - 869.30 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1070 runs -   966.01 us/run - 939.52 MFLOP/run - 972.59 GFLOPS

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   336.28 us/run - 234.88 MFLOP/run - 698.47 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   458.81 us/run - 352.32 MFLOP/run - 767.90 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   573.96 us/run - 469.76 MFLOP/run - 818.45 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   727.08 us/run - 587.20 MFLOP/run - 807.62 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1067.80 us/run - 939.52 MFLOP/run - 879.87 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   543.67 us/run - 234.88 MFLOP/run - 432.03 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   642.54 us/run - 352.32 MFLOP/run - 548.33 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1278 runs -   885.94 us/run - 469.76 MFLOP/run - 530.24 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1026 runs -  1004.95 us/run - 587.20 MFLOP/run - 584.31 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1270.78 us/run - 939.52 MFLOP/run - 739.33 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   425.50 us/run - 234.88 MFLOP/run - 552.01 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   537.97 us/run - 352.32 MFLOP/run - 654.91 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   625.29 us/run - 469.76 MFLOP/run - 751.28 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   771.12 us/run - 587.20 MFLOP/run - 761.49 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1076.21 us/run - 939.52 MFLOP/run - 872.99 GFLOPS

I tried computing the A * scale products ahead of time for Q2_K, but it didn't do much. That should also reduce the number of shared memory reads, as the products are stored in registers.
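
Roughly, that variant looks like this (a sketch with hypothetical names; Q2_K has 16 sub-block scales per superblock):

```glsl
// Sketch: fold the superblock scale d into each sub-block scale once and
// keep the products in registers, so the hot loop does a single fma per
// element and reads no shared memory for the scales.
float dscale[16];
[[unroll]] for (uint s = 0; s < 16; ++s) {
    dscale[s] = d * sccache[s]; // one multiply per scale, then reused
}
for (uint i = 0; i < ITERS_PER_THREAD; ++i) {
    temp = fma(dscale[scale_idx(i, tid)], dot_q_b(i, tid), temp);
}
```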

A * scale multiplication cached in registers:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   332.67 us/run - 234.88 MFLOP/run - 706.06 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   443.91 us/run - 352.32 MFLOP/run - 793.69 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   565.84 us/run - 469.76 MFLOP/run - 830.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   741.69 us/run - 587.20 MFLOP/run - 791.71 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1071.39 us/run - 939.52 MFLOP/run - 876.92 GFLOPS

@0cc4m (Collaborator) commented Jan 7, 2025

I'll post benchmarks at a later point, but this reduces performance on RTX 3090 for q2_k and q6_k. I see small improvements on Radeon Pro VII. Intel still crashes, but only in `test-backend-ops -o MUL_MAT`. I don't know what's going on there, since `test-backend-ops -o MUL_MAT perf` passes just fine. Looking at the perf results, it's a small improvement on A770, too.

@jeffbolznv (Collaborator)

IMO the crash is still very likely related to the barriers in nonuniform control flow. It really needs to be fixed if we're going to use shared memory here. If the additional branches are causing too many problems then maybe we could change how the work is spread across a workgroup so that the number of iterations is uniform, but that could also affect perf (likely making it worse, I'd guess).
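
For context, the problematic pattern looks roughly like this (illustrative names, not the actual shader):

```glsl
// Vulkan/GLSL requires barrier() to be reached in uniform control flow:
// every invocation in the workgroup must execute it. If some threads skip
// the branch (e.g. for the last rows when the matrix isn't a multiple of
// the workgroup size), behavior is undefined; some drivers tolerate the
// divergence, others (apparently Intel) crash.
void main() {
    const uint tid = gl_LocalInvocationID.x;
    if (first_row + tid < num_rows) { // nonuniform: trailing threads skip this
        sccache[tid] = load_scale(first_row + tid);
        barrier();                    // undefined when the branch diverges
        accumulate(tid);
    }
}
```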

@netrunnereve (Collaborator, Author)

> If the additional branches are causing too many problems then maybe we could change how the work is spread across a workgroup so that the number of iterations is uniform, but that could also affect perf

To get rid of the branches we could just have the main `i` loop run with no checks as long as we have enough blocks remaining to use all threads, and then switch to a separate code path for the final multiplications. There's no need to redo the algorithm.
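
Something like this shape, sketched with hypothetical names:

```glsl
// Sketch: keep the barriers in the uniform part of the loop, and handle the
// remainder (fewer blocks left than threads) in a separate, barrier-free path.
uint i = 0;
for (; i + BLOCKS_PER_ITER <= num_blocks; i += BLOCKS_PER_ITER) {
    cache_scales_to_shared(i, tid); // cooperative load, as before
    barrier();                      // uniform: every thread runs every iteration
    accumulate_from_shared(i, tid);
    barrier();                      // before the shared scales are overwritten
}
// tail: bounds-checked, no barrier, scales decoded per-thread
for (uint j = i + tid; j < num_blocks; j += NUM_THREADS) {
    accumulate_direct(j);
}
```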

@netrunnereve (Collaborator, Author) commented Jan 8, 2025

Okay, I've fixed up Q6_K to handle the early return case, and it's now running at 23.3 t/s with a few extra tweaks. @0cc4m can you try this on Intel to see if it prevents the crash?

@jeffbolznv (Collaborator)

I tested the latest Q6_K changes on RTX 4070. For llama-bench with llama-2-7b.Q6_K, the perf is basically unchanged, which is not surprising since it's just memory bandwidth-limited. The directed perf results are more interesting:

before:
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  46860 runs -   107.44 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  45582 runs -   110.08 us/run - 234.88 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  39760 runs -   126.70 us/run - 352.32 MFLOP/run -   2.78 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  33654 runs -   149.37 us/run - 469.76 MFLOP/run -   3.15 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  30438 runs -   164.95 us/run - 587.20 MFLOP/run -   3.56 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22684 runs -   221.28 us/run - 939.52 MFLOP/run -   4.25 TFLOPS

after:
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  45156 runs -   112.21 us/run - 117.44 MFLOP/run -   1.05 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  46860 runs -   106.90 us/run - 234.88 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  43168 runs -   116.55 us/run - 352.32 MFLOP/run -   3.02 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  44304 runs -   113.42 us/run - 469.76 MFLOP/run -   4.14 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  37962 runs -   132.16 us/run - 587.20 MFLOP/run -   4.44 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9202 runs -   544.83 us/run - 939.52 MFLOP/run -   1.72 TFLOPS

So there's a nice boost for larger n, but it just falls off a cliff for n=8. I looked into this, and what's happening is the barriers are causing all the loads of the B matrix to be bunched together, and it's using too many registers. I tried moving all the B loads to the start of the function and saving them in local arrays, and that seems to resolve the issue:

with loads at the top:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  48564 runs -   104.69 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  47286 runs -   106.60 us/run - 234.88 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  40328 runs -   124.44 us/run - 352.32 MFLOP/run -   2.83 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  44091 runs -   113.45 us/run - 469.76 MFLOP/run -   4.14 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  39159 runs -   127.93 us/run - 587.20 MFLOP/run -   4.59 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22791 runs -   220.12 us/run - 939.52 MFLOP/run -   4.27 TFLOPS
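
The "loads at the top" restructuring described above would look roughly like this (sketch, hypothetical names):

```glsl
// Sketch: fetch every B value this thread needs into local arrays before
// any barrier, so the loads are scheduled up front under a predictable
// register budget instead of being bunched together between barriers.
vec2 bcache[NUM_COLS][LOADS_PER_THREAD];
[[unroll]] for (uint j = 0; j < NUM_COLS; ++j) {
    [[unroll]] for (uint l = 0; l < LOADS_PER_THREAD; ++l) {
        bcache[j][l] = data_b[b_offset(j, l, tid)];
    }
}
// ... scale caching, barriers, and the fma loops then read only bcache ...
```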

@netrunnereve (Collaborator, Author) commented Jan 8, 2025

> So there's a nice boost for larger n, but it just falls off a cliff for n=8.

Hmm, this looks like an Nvidia-only issue; I didn't see this on my end when I was testing my changes. AMD's tools report that 82/256 vector registers are used in the case with a block size of 64, 4 rows, and 8 columns.

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   312.90 us/run - 117.44 MFLOP/run - 375.33 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   372.20 us/run - 234.88 MFLOP/run - 631.06 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   454.42 us/run - 352.32 MFLOP/run - 775.32 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   550.29 us/run - 469.76 MFLOP/run - 853.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   669.48 us/run - 587.20 MFLOP/run - 877.11 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1070 runs -   979.36 us/run - 939.52 MFLOP/run - 959.33 GFLOPS

I checked the assembly and at least for me the compiler is interleaving the B loads and the sum FMAs rather than doing them all at once. Also if I do a quick estimation:

temp: 8*4 = 32 registers
B: 4*4 = 16 registers
sum: 4 registers
scales: 4 registers
qs: 4*4 = 16 registers

That's 72 vector registers, and I guess we can go up to 100-ish when we include the indexes and so forth. If we assume that the compiler is loading all the B columns together, then that's 16*8 = 128 registers, which would bring the total over 200. However, the compiler in this case doesn't need to load all the B values into registers in one go, and it should be smart enough not to spill.

BTW I can definitely make this change to fix the n=8 performance, and I'll do these tweaks in one go once I get confirmation that Intel is working. It's just weird that the compiler is running out of registers in this case, which hinders performance more than smaller loads would.

@netrunnereve removed the server, SYCL, and Apple Metal labels on Jan 9, 2025
@0cc4m (Collaborator) commented Jan 9, 2025

> Okay, I've fixed up Q6_K to handle the early return case, and it's now running at 23.3 t/s with a few extra tweaks. @0cc4m can you try this on Intel to see if it prevents the crash?

I tested it on my A770: now Q6_K passes and it crashes on a later Q2_K test, so the Q6_K fix was correct.

@netrunnereve (Collaborator, Author)

New numbers after fixing the early returns and making some more changes:

| model | size | params | backend | ngl | threads | main_gpu | sm | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 28.47 ± 0.05 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 25.06 ± 0.05 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 100 | 8 | 1 | none | tg128 | 23.42 ± 0.03 |

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   210.87 us/run - 117.44 MFLOP/run - 556.94 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   275.80 us/run - 117.44 MFLOP/run - 425.81 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   224.08 us/run - 117.44 MFLOP/run - 524.09 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   300.91 us/run - 117.44 MFLOP/run - 390.29 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   312.18 us/run - 117.44 MFLOP/run - 376.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   321.61 us/run - 234.88 MFLOP/run - 730.32 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   367.27 us/run - 234.88 MFLOP/run - 639.53 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   314.11 us/run - 234.88 MFLOP/run - 747.77 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   393.84 us/run - 234.88 MFLOP/run - 596.39 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   364.45 us/run - 234.88 MFLOP/run - 644.48 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   420.31 us/run - 352.32 MFLOP/run - 838.23 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   463.35 us/run - 352.32 MFLOP/run - 760.38 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   423.65 us/run - 352.32 MFLOP/run - 831.63 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   482.50 us/run - 352.32 MFLOP/run - 730.20 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   444.87 us/run - 352.32 MFLOP/run - 791.97 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   609.99 us/run - 469.76 MFLOP/run - 770.11 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   662.96 us/run - 469.76 MFLOP/run - 708.59 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   538.73 us/run - 469.76 MFLOP/run - 871.98 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   627.62 us/run - 469.76 MFLOP/run - 748.48 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   538.51 us/run - 469.76 MFLOP/run - 872.34 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   717.78 us/run - 587.20 MFLOP/run - 818.08 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   765.61 us/run - 587.20 MFLOP/run - 766.97 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   683.26 us/run - 587.20 MFLOP/run - 859.41 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   737.19 us/run - 587.20 MFLOP/run - 796.54 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   664.04 us/run - 587.20 MFLOP/run - 884.28 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1049.30 us/run - 939.52 MFLOP/run - 895.38 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1077.66 us/run - 939.52 MFLOP/run - 871.82 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1060.65 us/run - 939.52 MFLOP/run - 885.80 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1086.90 us/run - 939.52 MFLOP/run - 864.40 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1070 runs -   974.52 us/run - 939.52 MFLOP/run - 964.09 GFLOPS

@netrunnereve (Collaborator, Author)

Okay, this should be everything, I think.

@0cc4m (Collaborator) commented Jan 12, 2025

Looks good overall now. The biggest positive effect I saw was on q2_k and q3_k on the RX 6800 XT and A770. The RTX 3090 has some regressions on q2_k, but the model benchmark didn't show a big difference. I think it's fine.

RTX 3090

llama-bench

| model | size | params | backend | ngl | test | t/s (Master) | t/s (PR) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 240.59 ± 1.69 | 238.62 ± 9.33 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 241.33 ± 0.78 | 241.57 ± 1.29 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 259.55 ± 1.39 | 255.27 ± 1.66 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 253.01 ± 0.91 | 256.02 ± 11.17 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 247.65 ± 1.30 | 244.02 ± 0.84 |

test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17892 runs -    56.22 us/run - 117.44 MFLOP/run -   2.09 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14058 runs -    71.63 us/run - 234.88 MFLOP/run -   3.28 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11644 runs -    87.65 us/run - 352.32 MFLOP/run -   4.02 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9159 runs -   110.74 us/run - 469.76 MFLOP/run -   4.24 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7353 runs -   136.55 us/run - 587.20 MFLOP/run -   4.30 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5136 runs -   198.87 us/run - 939.52 MFLOP/run -   4.72 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  736 runs -  1361.04 us/run -  60.13 GFLOP/run -  44.18 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    76.75 us/run - 117.44 MFLOP/run -   1.53 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    93.39 us/run - 234.88 MFLOP/run -   2.52 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   120.35 us/run - 352.32 MFLOP/run -   2.93 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6603 runs -   153.05 us/run - 469.76 MFLOP/run -   3.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5643 runs -   182.61 us/run - 587.20 MFLOP/run -   3.22 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2889 runs -   354.30 us/run - 939.52 MFLOP/run -   2.65 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  722 runs -  1386.19 us/run -  60.13 GFLOP/run -  43.38 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -    98.18 us/run - 117.44 MFLOP/run -   1.20 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   113.49 us/run - 234.88 MFLOP/run -   2.07 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   131.34 us/run - 352.32 MFLOP/run -   2.68 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   145.76 us/run - 469.76 MFLOP/run -   3.22 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5472 runs -   187.01 us/run - 587.20 MFLOP/run -   3.14 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2996 runs -   337.69 us/run - 939.52 MFLOP/run -   2.78 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  288 runs -  3493.50 us/run -  60.13 GFLOP/run -  17.21 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    80.31 us/run - 117.44 MFLOP/run -   1.46 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9798 runs -   102.13 us/run - 234.88 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   152.26 us/run - 352.32 MFLOP/run -   2.31 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   203.74 us/run - 469.76 MFLOP/run -   2.31 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4104 runs -   252.48 us/run - 587.20 MFLOP/run -   2.33 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2889 runs -   359.58 us/run - 939.52 MFLOP/run -   2.61 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  288 runs -  3476.28 us/run -  60.13 GFLOP/run -  17.30 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    72.46 us/run - 117.44 MFLOP/run -   1.62 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    86.31 us/run - 234.88 MFLOP/run -   2.72 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9656 runs -   103.78 us/run - 352.32 MFLOP/run -   3.39 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   118.89 us/run - 469.76 MFLOP/run -   3.95 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7695 runs -   132.78 us/run - 587.20 MFLOP/run -   4.42 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5243 runs -   193.19 us/run - 939.52 MFLOP/run -   4.86 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  582 runs -  1720.06 us/run -  60.13 GFLOP/run -  34.96 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    72.02 us/run - 117.44 MFLOP/run -   1.63 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    84.35 us/run - 234.88 MFLOP/run -   2.78 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10508 runs -    96.18 us/run - 352.32 MFLOP/run -   3.66 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   114.39 us/run - 469.76 MFLOP/run -   4.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7524 runs -   133.18 us/run - 587.20 MFLOP/run -   4.41 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4708 runs -   217.04 us/run - 939.52 MFLOP/run -   4.33 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  574 runs -  1747.00 us/run -  60.13 GFLOP/run -  34.42 TFLOPS

AMD Radeon RX 6800 XT

llama-bench

| model | size | params | backend | ngl | test | t/s (Master) | t/s (PR) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 360.12 ± 0.60 | 412.16 ± 0.40 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 326.16 ± 0.20 | 360.38 ± 1.04 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 335.30 ± 1.09 | 338.49 ± 1.86 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 296.91 ± 0.99 | 296.82 ± 1.70 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 281.05 ± 0.59 | 280.95 ± 1.12 |

test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  19596 runs -    52.78 us/run - 117.44 MFLOP/run -   2.22 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    69.55 us/run - 234.88 MFLOP/run -   3.38 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11644 runs -    87.84 us/run - 352.32 MFLOP/run -   4.01 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.01 us/run - 469.76 MFLOP/run -   4.39 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7866 runs -   128.39 us/run - 587.20 MFLOP/run -   4.57 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4922 runs -   203.35 us/run - 939.52 MFLOP/run -   4.62 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  170 runs -  5904.59 us/run -  60.13 GFLOP/run -  10.18 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    45.51 us/run - 117.44 MFLOP/run -   2.58 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  15336 runs -    65.29 us/run - 234.88 MFLOP/run -   3.60 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    84.27 us/run - 352.32 MFLOP/run -   4.18 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9585 runs -   105.70 us/run - 469.76 MFLOP/run -   4.44 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7695 runs -   131.98 us/run - 587.20 MFLOP/run -   4.45 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4494 runs -   222.98 us/run - 939.52 MFLOP/run -   4.21 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  170 runs -  5896.34 us/run -  60.13 GFLOP/run -  10.20 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    93.01 us/run - 117.44 MFLOP/run -   1.26 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   110.52 us/run - 234.88 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7952 runs -   127.70 us/run - 352.32 MFLOP/run -   2.76 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   145.58 us/run - 469.76 MFLOP/run -   3.23 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   165.48 us/run - 587.20 MFLOP/run -   3.55 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4387 runs -   230.45 us/run - 939.52 MFLOP/run -   4.08 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  140 runs -  7176.04 us/run -  60.13 GFLOP/run -   8.38 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    61.66 us/run - 117.44 MFLOP/run -   1.90 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    79.17 us/run - 234.88 MFLOP/run -   2.97 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10508 runs -    96.90 us/run - 352.32 MFLOP/run -   3.64 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   119.89 us/run - 469.76 MFLOP/run -   3.92 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6840 runs -   148.39 us/run - 587.20 MFLOP/run -   3.96 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4387 runs -   230.15 us/run - 939.52 MFLOP/run -   4.08 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  140 runs -  7168.31 us/run -  60.13 GFLOP/run -   8.39 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  19596 runs -    53.32 us/run - 117.44 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    69.96 us/run - 234.88 MFLOP/run -   3.36 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    85.78 us/run - 352.32 MFLOP/run -   4.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9798 runs -   103.90 us/run - 469.76 MFLOP/run -   4.52 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8208 runs -   123.18 us/run - 587.20 MFLOP/run -   4.77 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5457 runs -   184.79 us/run - 939.52 MFLOP/run -   5.08 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  158 runs -  6363.64 us/run -  60.13 GFLOP/run -   9.45 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    46.43 us/run - 117.44 MFLOP/run -   2.53 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    59.73 us/run - 234.88 MFLOP/run -   3.93 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14200 runs -    71.68 us/run - 352.32 MFLOP/run -   4.92 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10650 runs -    95.09 us/run - 469.76 MFLOP/run -   4.94 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8379 runs -   120.81 us/run - 587.20 MFLOP/run -   4.86 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4922 runs -   204.12 us/run - 939.52 MFLOP/run -   4.60 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  158 runs -  6353.53 us/run -  60.13 GFLOP/run -   9.46 TFLOPS

AMD Radeon Pro VII

llama-bench

| model | size | params | backend | ngl | test | t/s (Master) | t/s (PR) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 173.38 ± 0.35 | 188.96 ± 0.67 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 187.53 ± 0.80 | 185.90 ± 0.48 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 189.13 ± 0.65 | 194.97 ± 0.86 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 180.68 ± 0.54 | 185.56 ± 0.55 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 177.98 ± 2.14 | 177.90 ± 0.55 |

test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   101.66 us/run - 117.44 MFLOP/run -   1.16 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   136.25 us/run - 234.88 MFLOP/run -   1.72 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   174.24 us/run - 352.32 MFLOP/run -   2.02 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4686 runs -   215.95 us/run - 469.76 MFLOP/run -   2.18 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3591 runs -   281.98 us/run - 587.20 MFLOP/run -   2.08 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2354 runs -   432.78 us/run - 939.52 MFLOP/run -   2.17 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   66 runs - 15289.68 us/run -  60.13 GFLOP/run -   3.93 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    88.16 us/run - 117.44 MFLOP/run -   1.33 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8094 runs -   124.83 us/run - 234.88 MFLOP/run -   1.88 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6248 runs -   161.34 us/run - 352.32 MFLOP/run -   2.18 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4686 runs -   220.51 us/run - 469.76 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3249 runs -   321.18 us/run - 587.20 MFLOP/run -   1.83 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   484.22 us/run - 939.52 MFLOP/run -   1.94 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   66 runs - 15289.14 us/run -  60.13 GFLOP/run -   3.93 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   177.18 us/run - 117.44 MFLOP/run - 662.83 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   211.21 us/run - 234.88 MFLOP/run -   1.11 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   246.23 us/run - 352.32 MFLOP/run -   1.43 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3195 runs -   329.12 us/run - 469.76 MFLOP/run -   1.43 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2736 runs -   385.56 us/run - 587.20 MFLOP/run -   1.52 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   516.76 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   56 runs - 18338.54 us/run -  60.13 GFLOP/run -   3.28 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   119.03 us/run - 117.44 MFLOP/run - 986.63 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   155.41 us/run - 234.88 MFLOP/run -   1.51 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   202.12 us/run - 352.32 MFLOP/run -   1.74 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3621 runs -   276.86 us/run - 469.76 MFLOP/run -   1.70 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2736 runs -   367.02 us/run - 587.20 MFLOP/run -   1.60 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   467.44 us/run - 939.52 MFLOP/run -   2.01 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   56 runs - 18452.18 us/run -  60.13 GFLOP/run -   3.26 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   153.24 us/run - 117.44 MFLOP/run - 766.41 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   177.88 us/run - 234.88 MFLOP/run -   1.32 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   206.07 us/run - 352.32 MFLOP/run -   1.71 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3834 runs -   261.47 us/run - 469.76 MFLOP/run -   1.80 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3249 runs -   310.98 us/run - 587.20 MFLOP/run -   1.89 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2247 runs -   449.79 us/run - 939.52 MFLOP/run -   2.09 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   62 runs - 16564.45 us/run -  60.13 GFLOP/run -   3.63 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   142.09 us/run - 117.44 MFLOP/run - 826.51 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   165.11 us/run - 234.88 MFLOP/run -   1.42 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5396 runs -   188.69 us/run - 352.32 MFLOP/run -   1.87 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   243.86 us/run - 469.76 MFLOP/run -   1.93 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3591 runs -   282.84 us/run - 587.20 MFLOP/run -   2.08 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2461 runs -   409.23 us/run - 939.52 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   62 runs - 16584.63 us/run -  60.13 GFLOP/run -   3.63 TFLOPS

Intel A770

llama-bench

| model | size | params | backend | ngl | test | t/s (Master) | t/s (PR) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 89.81 ± 0.06 | 102.47 ± 0.13 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 82.08 ± 0.05 | 97.08 ± 0.11 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 88.36 ± 0.04 | 87.97 ± 0.03 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 77.66 ± 0.25 | 91.44 ± 0.26 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 62.35 ± 0.08 | 63.25 ± 0.08 |

test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   286.83 us/run - 117.44 MFLOP/run - 409.45 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   165.47 us/run - 234.88 MFLOP/run -   1.42 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6532 runs -   157.65 us/run - 352.32 MFLOP/run -   2.23 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5325 runs -   190.01 us/run - 469.76 MFLOP/run -   2.47 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3591 runs -   279.26 us/run - 587.20 MFLOP/run -   2.10 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    535 runs -  2034.61 us/run - 939.52 MFLOP/run - 461.77 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 47439.59 us/run -  60.13 GFLOP/run -   1.27 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   245.55 us/run - 117.44 MFLOP/run - 478.27 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   158.57 us/run - 234.88 MFLOP/run -   1.48 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5396 runs -   188.83 us/run - 352.32 MFLOP/run -   1.87 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4473 runs -   226.73 us/run - 469.76 MFLOP/run -   2.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2907 runs -   347.94 us/run - 587.20 MFLOP/run -   1.69 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3652.90 us/run - 939.52 MFLOP/run - 257.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46193.82 us/run -  60.13 GFLOP/run -   1.30 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   347.65 us/run - 117.44 MFLOP/run - 337.82 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   367.95 us/run - 234.88 MFLOP/run - 638.35 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2840 runs -   365.56 us/run - 352.32 MFLOP/run - 963.79 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   439.27 us/run - 469.76 MFLOP/run -   1.07 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   782.66 us/run - 587.20 MFLOP/run - 750.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    428 runs -  2421.19 us/run - 939.52 MFLOP/run - 388.04 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 47517.68 us/run -  60.13 GFLOP/run -   1.27 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   238.87 us/run - 117.44 MFLOP/run - 491.66 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   178.86 us/run - 234.88 MFLOP/run -   1.31 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   241.10 us/run - 352.32 MFLOP/run -   1.46 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3195 runs -   325.11 us/run - 469.76 MFLOP/run -   1.44 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1710 runs -   605.50 us/run - 587.20 MFLOP/run - 969.77 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3766.98 us/run - 939.52 MFLOP/run - 249.41 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46954.27 us/run -  60.13 GFLOP/run -   1.28 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   670.19 us/run - 117.44 MFLOP/run - 175.23 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   510.59 us/run - 234.88 MFLOP/run - 460.02 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   640.81 us/run - 352.32 MFLOP/run - 549.81 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   661.87 us/run - 469.76 MFLOP/run - 709.75 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   691.92 us/run - 587.20 MFLOP/run - 848.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    642 runs -  1702.87 us/run - 939.52 MFLOP/run - 551.73 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46237.27 us/run -  60.13 GFLOP/run -   1.30 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   745.08 us/run - 117.44 MFLOP/run - 157.62 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   630.85 us/run - 234.88 MFLOP/run - 372.33 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   665.55 us/run - 352.32 MFLOP/run - 529.37 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1491 runs -   750.69 us/run - 469.76 MFLOP/run - 625.77 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   805.75 us/run - 587.20 MFLOP/run - 728.77 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    642 runs -  1771.64 us/run - 939.52 MFLOP/run - 530.31 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46139.23 us/run -  60.13 GFLOP/run -   1.30 TFLOPS

@netrunnereve
Collaborator Author

I generally expect this to benefit compute-limited hardware the most, but the drop in performance on the 3090 is really strange. My guess is that the GPU runs so fast that the bit operations and four unpacks end up taking less time than the barrier overhead. At least on AMD I can see that the sccache values are kept in registers for the FMA loop, so it's not like the GPU is reading from shared memory every time it does a calculation.
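For reference, here is a minimal sketch of the pattern being discussed. This is not the actual mul_mat_vec shader: the buffer layout, workgroup size, and unpack math are simplified stand-ins, and only the sccache name comes from the real code. Each thread extracts one scale into shared memory, a barrier() makes the values visible workgroup-wide, and only then does the FMA loop run.

  #version 450

  layout(local_size_x = 16, local_size_y = 1, local_size_z = 1) in;

  layout(std430, binding = 0) readonly buffer A { uint packed_scales[]; };
  layout(std430, binding = 1) readonly buffer B { float v[]; };
  layout(std430, binding = 2) writeonly buffer D { float dst[]; };

  // One float per scale in the superblock, shared by the whole workgroup.
  shared float sccache[16];

  void main() {
      const uint tid = gl_LocalInvocationID.x;

      // Each thread unpacks one byte-wide scale in parallel, instead of
      // every thread re-extracting all 16 scales on its own.
      sccache[tid] = float((packed_scales[tid >> 2u] >> (8u * (tid & 3u))) & 0xFFu);
      barrier(); // make the cached scales visible to the whole workgroup

      // The FMA loop then reads the cached scales; after the first load a
      // compiler will typically keep them in registers.
      float sum = 0.0;
      for (uint i = 0u; i < 16u; ++i) {
          sum = fma(sccache[i], v[16u * tid + i], sum);
      }
      dst[tid] = sum;
  }

The barrier is the cost being weighed here: on a GPU that unpacks fast enough, the synchronization can outweigh the saved bit operations.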

@jeffbolznv
Collaborator

Results with the latest change on RTX 4070. I tried the phi3 Q4_K model (includes Q5_K and Q6_K) and perf was basically unchanged, which is expected. For the directed tests:

before
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  16188 runs -    62.13 us/run - 117.44 MFLOP/run -   1.89 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10650 runs -    96.44 us/run - 234.88 MFLOP/run -   2.44 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10792 runs -    95.13 us/run - 352.32 MFLOP/run -   3.70 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7242 runs -   140.11 us/run - 469.76 MFLOP/run -   3.35 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5301 runs -   191.16 us/run - 587.20 MFLOP/run -   3.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   302.70 us/run - 939.52 MFLOP/run -   3.10 TFLOPS
  
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.41 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8094 runs -   123.99 us/run - 234.88 MFLOP/run -   1.89 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.84 us/run - 352.32 MFLOP/run -   2.37 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   161.33 us/run - 469.76 MFLOP/run -   2.91 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   203.25 us/run - 587.20 MFLOP/run -   2.89 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3210 runs -   313.03 us/run - 939.52 MFLOP/run -   3.00 TFLOPS
  
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  20448 runs -    50.53 us/run - 117.44 MFLOP/run -   2.32 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14910 runs -    68.59 us/run - 234.88 MFLOP/run -   3.42 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    92.24 us/run - 352.32 MFLOP/run -   3.82 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7455 runs -   134.65 us/run - 469.76 MFLOP/run -   3.49 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7524 runs -   133.64 us/run - 587.20 MFLOP/run -   4.39 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5029 runs -   202.53 us/run - 939.52 MFLOP/run -   4.64 TFLOPS
  
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   118.81 us/run - 117.44 MFLOP/run - 988.48 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    87.61 us/run - 234.88 MFLOP/run -   2.68 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   109.53 us/run - 352.32 MFLOP/run -   3.22 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   132.13 us/run - 469.76 MFLOP/run -   3.56 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5301 runs -   190.75 us/run - 587.20 MFLOP/run -   3.08 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3531 runs -   284.79 us/run - 939.52 MFLOP/run -   3.30 TFLOPS
  
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.66 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   109.77 us/run - 234.88 MFLOP/run -   2.14 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   121.36 us/run - 352.32 MFLOP/run -   2.90 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   142.58 us/run - 469.76 MFLOP/run -   3.29 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6327 runs -   161.89 us/run - 587.20 MFLOP/run -   3.63 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   221.37 us/run - 939.52 MFLOP/run -   4.24 TFLOPS
  
after
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    69.36 us/run - 117.44 MFLOP/run -   1.69 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12354 runs -    81.76 us/run - 234.88 MFLOP/run -   2.87 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7952 runs -   129.14 us/run - 352.32 MFLOP/run -   2.73 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4899 runs -   204.16 us/run - 469.76 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4617 runs -   220.59 us/run - 587.20 MFLOP/run -   2.66 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3103 runs -   331.47 us/run - 939.52 MFLOP/run -   2.83 TFLOPS
  
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    78.82 us/run - 117.44 MFLOP/run -   1.49 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   102.02 us/run - 234.88 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   134.63 us/run - 352.32 MFLOP/run -   2.62 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   158.03 us/run - 469.76 MFLOP/run -   2.97 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   205.47 us/run - 587.20 MFLOP/run -   2.86 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3103 runs -   324.29 us/run - 939.52 MFLOP/run -   2.90 TFLOPS
  
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17892 runs -    58.00 us/run - 117.44 MFLOP/run -   2.02 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12354 runs -    82.57 us/run - 234.88 MFLOP/run -   2.84 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11360 runs -    89.97 us/run - 352.32 MFLOP/run -   3.92 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   112.37 us/run - 469.76 MFLOP/run -   4.18 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   163.33 us/run - 587.20 MFLOP/run -   3.60 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4173 runs -   239.70 us/run - 939.52 MFLOP/run -   3.92 TFLOPS
  
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.72 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    87.66 us/run - 234.88 MFLOP/run -   2.68 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6532 runs -   156.18 us/run - 352.32 MFLOP/run -   2.26 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7881 runs -   126.93 us/run - 469.76 MFLOP/run -   3.70 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5130 runs -   195.14 us/run - 587.20 MFLOP/run -   3.01 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3531 runs -   289.89 us/run - 939.52 MFLOP/run -   3.24 TFLOPS
  
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   105.40 us/run - 117.44 MFLOP/run -   1.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   106.70 us/run - 234.88 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   117.63 us/run - 352.32 MFLOP/run -   3.00 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   113.02 us/run - 469.76 MFLOP/run -   4.16 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8037 runs -   126.20 us/run - 587.20 MFLOP/run -   4.65 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1819 runs -   560.48 us/run - 939.52 MFLOP/run -   1.68 TFLOPS

Still a regression for NUM_COLS==8 with Q6_K, maybe also to a lesser extent for Q4_K? I guess it doesn't need to block this change.

@netrunnereve
Collaborator Author

Still a regression for NUM_COLS==8 with Q6_K, maybe also to a lesser extent for Q4_K?

The Q6_K case is getting stranger, especially since the 3090 handles the 8-column case fine. I still think it's a compiler issue, though.

For Q4_K I have no idea why this is happening, since we do the same types of loads and literally reduce the instruction count for the scale calculation.

@netrunnereve
Collaborator Author

I don't know why the full CI isn't starting up after the approval, so I've run it in my fork and it's passing there.

@netrunnereve netrunnereve merged commit adc5dd9 into ggerganov:master Jan 15, 2025
2 checks passed
@netrunnereve netrunnereve deleted the vulkan branch January 15, 2025 19:52
@slaren
Collaborator

slaren commented Jan 15, 2025

The CI only runs when a source file changes; it should be extended to include the Vulkan shaders:

pull_request:
  types: [opened, synchronize, reopened]
  paths: ['.github/workflows/build.yml', '**/CMakeLists.txt', '**/Makefile', '**/*.h', '**/*.hpp', '**/*.c', '**/*.cpp', '**/*.cu', '**/*.cuh', '**/*.swift', '**/*.m', '**/*.metal']
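A sketch of the suggested fix, assuming the Vulkan shader sources keep their .comp extension under the vulkan-shaders directory, would be to append that glob to the paths filter:

pull_request:
  types: [opened, synchronize, reopened]
  paths: ['.github/workflows/build.yml', '**/CMakeLists.txt', '**/Makefile', '**/*.h', '**/*.hpp', '**/*.c', '**/*.cpp', '**/*.cu', '**/*.cuh', '**/*.swift', '**/*.m', '**/*.metal', '**/*.comp']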
