metal: implement flash attention kernel for quantized KV cache by FanShupei · Pull Request #9735 · ggml-org/llama.cpp

Closed · wants to merge 2 commits

Commit d436f5b: [metal] (HACK!!!) force use kernel_flash_attn_ext_scalar_f16 in FA
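The commit title describes forcing the scalar f16 flash-attention kernel rather than choosing a kernel based on the KV cache type. A minimal sketch of that idea, assuming hypothetical names (`select_flash_attn_kernel`, `kv_cache_type_t`, `fa_kernel_t`) rather than the actual ggml-metal dispatch code:

```c
// Hypothetical sketch only: illustrates "force use" of a scalar f16
// flash-attention kernel. None of these identifiers are the real
// ggml-metal symbols from this PR.

typedef enum {
    KV_TYPE_F16,
    KV_TYPE_Q8_0,
    KV_TYPE_Q4_0,
} kv_cache_type_t;

typedef enum {
    KERNEL_FLASH_ATTN_EXT_VEC_F16,     // vectorized path (f16 KV cache)
    KERNEL_FLASH_ATTN_EXT_SCALAR_F16,  // scalar path, forced below
} fa_kernel_t;

// The "HACK": ignore the KV cache type and always return the scalar
// f16 kernel, so quantized KV caches can be exercised through FA at all.
static fa_kernel_t select_flash_attn_kernel(kv_cache_type_t kv_type) {
    (void) kv_type; // deliberately unused while the hack is in place
    return KERNEL_FLASH_ATTN_EXT_SCALAR_F16;
}
```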