Release b5377 · ggml-org/llama.cpp · GitHub
@github-actions github-actions released this 14 May 10:27
24e86ca
vulkan: KHR_coopmat flash attention (#13506)

This shader uses coopmat1 to do the Q*K^T multiply. The P*V multiply is more
difficult for various reasons, so it is not done here. Performance for this
shader is around 2.5x better than for the scalar shader when doing prompt
processing. Some of the benefit may come from other optimizations, such as
staging through shared memory or splitting by rows.