context : allow cache-less context for embeddings by ggerganov · Pull Request #13108 · ggml-org/llama.cpp

Merged · 7 commits into master from gg/embeddings-no-kv on May 8, 2025

Conversation

@ggerganov (Member) commented Apr 25, 2025

target #12799

There is no need to create a KV cache when using embedding models such as BERT. This saves memory compared to master.
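
For a rough sense of the saving (an illustrative back-of-the-envelope estimate, not a measurement from this PR): a BERT-base-sized encoder with 12 layers, a hidden size of 768 and a 512-token context would otherwise allocate an F16 KV cache of roughly 12 × 512 × 768 × 2 (K and V) × 2 bytes ≈ 18 MiB, which a cache-less context no longer needs.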

API Changes

  • The llama_encode() method is now the recommended way to compute embeddings and rerank scores.
  • llama_decode() can still be used to compute embeddings as before.
  • For embedding models such as BERT, llama_decode() falls back to llama_encode() and prints a warning.

In short, whenever the KV cache is not needed - use llama_encode(). Otherwise - use llama_decode(). The changes are backwards compatible.
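
For illustration, here is a minimal sketch of the encode-based path using the llama.h C API (the model file name is a placeholder, and some function names may differ slightly between llama.cpp versions):

```cpp
// Minimal sketch of computing embeddings with llama_encode() on a cache-less
// context. The model file name is a placeholder.
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

int main() {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file("bert-embedding.gguf", mparams);

    llama_context_params cparams = llama_context_default_params();
    cparams.embeddings   = true;                    // embeddings-only context
    cparams.pooling_type = LLAMA_POOLING_TYPE_MEAN; // one pooled vector per sequence

    // for a BERT-like model this context does not allocate a KV cache
    llama_context * ctx = llama_init_from_model(model, cparams);

    // tokenize the input text
    const std::string text = "hello world";
    const llama_vocab * vocab = llama_model_get_vocab(model);

    std::vector<llama_token> tokens(text.size() + 8);
    const int32_t n_tokens = llama_tokenize(vocab, text.c_str(), (int32_t) text.size(),
            tokens.data(), (int32_t) tokens.size(),
            /*add_special =*/ true, /*parse_special =*/ false);
    tokens.resize(n_tokens);

    // no KV cache and no micro-batching - the whole batch is processed in one pass
    llama_batch batch = llama_batch_get_one(tokens.data(), (int32_t) tokens.size());
    if (llama_encode(ctx, batch) != 0) {
        fprintf(stderr, "llama_encode() failed\n");
        return 1;
    }

    // pooled embedding for sequence 0
    const int32_t n_embd = llama_model_n_embd(model);
    const float * embd   = llama_get_embeddings_seq(ctx, 0);
    printf("n_embd = %d, first component = %f\n", n_embd, embd[0]);

    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();

    return 0;
}
```

The context only needs embeddings enabled and a pooling type; for a BERT-like model no KV cache is created, which is where the memory saving above comes from.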

@ggerganov ggerganov force-pushed the gg/llama-kv-cache-v6 branch 5 times, most recently from 58115a2 to 7e79a42 Compare May 2, 2025 13:02
Base automatically changed from gg/llama-kv-cache-v6 to master May 2, 2025 14:48
@ggerganov (Member, Author) commented:

I'll work on rebasing and merging this next - it should be a good improvement for embedding models by reducing the allocated memory during inference.

@ggerganov ggerganov force-pushed the gg/embeddings-no-kv branch from 4f0ea9b to c14ee72 Compare May 3, 2025 08:23
@ggerganov ggerganov marked this pull request as ready for review May 3, 2025 15:23
@ggerganov ggerganov requested a review from ngxson as a code owner May 3, 2025 15:23
@aviallon (Contributor) commented May 4, 2025

Thanks for your awesome work, Georgi.
Is there a donation / sponsoring page btw?

@ggerganov ggerganov merged commit 6562e5a into master May 8, 2025
1 check passed
@ggerganov ggerganov deleted the gg/embeddings-no-kv branch May 8, 2025 11:28
Comment on lines +798 to +809
// extract the rerank score - a single float per sequence
auto & embd_seq_out = embd_seq;

for (uint32_t s = 0; s < ubatch.n_seqs; ++s) {
    const llama_seq_id seq_id = ubatch.seq_id[s][0];
    if (embd_seq_out.find(seq_id) != embd_seq_out.end()) {
        continue;
    }
    embd_seq_out[seq_id].resize(1);
    ggml_backend_tensor_get_async(backend_embd, t_embd, embd_seq_out[seq_id].data(), (seq_id)*sizeof(float), sizeof(float));
}
} break;
A contributor commented:

@ggerganov shouldn't you include a documentation change stating that rank pooling is now supported?

@ggerganov (Member, Author) replied:

It was already supported before this change when using llama_decode(). Now llama_encode() also supports it. Not sure we need to document it - do you have something in mind for where to add docs about this?
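
For illustration, a minimal sketch (under the same llama.h assumptions as the example in the description) of how a caller reads back what the quoted code stores: with a context created using pooling_type = LLAMA_POOLING_TYPE_RANK, the per-sequence output after llama_encode() is a single float, the relevance score.

```cpp
#include "llama.h"

// sketch only: assumes the batch has already been processed by llama_encode()
// on a context created with pooling_type = LLAMA_POOLING_TYPE_RANK
static float rerank_score(llama_context * ctx, llama_seq_id seq_id) {
    // with rank pooling the per-sequence output is a single float
    const float * out = llama_get_embeddings_seq(ctx, seq_id);
    return out ? out[0] : 0.0f;
}
```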

gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request May 9, 2025
* origin/master: (39 commits)
server : vision support via libmtmd (ggml-org#12898)
sycl : implementation of reordered Q4_0 MMVQ for Intel GPUs (ggml-org#12858)
metal : optimize MoE for large batches (ggml-org#13388)
CUDA: FA support for Deepseek (Ampere or newer) (ggml-org#13306)
llama : do not crash if there is no CPU backend (ggml-org#13395)
CUDA: fix crash on large batch size for MoE models (ggml-org#13384)
imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (ggml-org#13389)
llama-run: add support for downloading models from ModelScope (ggml-org#13370)
mtmd : fix batch_view for m-rope (ggml-org#13397)
llama : one-off chat template fix for Mistral-Small-2503 (ggml-org#13398)
rpc : add rpc_msg_set_tensor_hash_req (ggml-org#13353)
vulkan: Allow up to 4096 elements for mul_mat_id row_ids (ggml-org#13326)
server : (webui) rename has_multimodal --> modalities (ggml-org#13393)
ci : limit write permission to only the release step + fixes (ggml-org#13392)
mtmd : Expose helper_decode_image_chunk (ggml-org#13366)
server : (webui) fix a very small misalignment (ggml-org#13387)
server : (webui) revamp the input area, plus many small UI improvements (ggml-org#13365)
convert : support rope_scaling type and rope_type (ggml-org#13349)
mtmd : fix the calculation of n_tokens for smolvlm (ggml-org#13381)
context : allow cache-less context for embeddings (ggml-org#13108)
...
@cebtenzzre (Collaborator) commented:

This PR breaks the ability to request embeddings for typical, decoder-only models with causal attention from the llama.cpp server (using --embedding --pooling last and /v1/embeddings). I get only NaNs with e5-mistral now. See #7477 which originally made this possible.

@ggerganov (Member, Author) commented May 18, 2025

Does it work correctly if you remove the --embedding CLI arg of the llama-server?

@cebtenzzre (Collaborator) commented May 18, 2025

> Does it work correctly if you remove the --embedding CLI arg of the llama-server?

Yes, it does. I wasn't aware that --embedding became optional in #10135.

NaNs aside, I don't understand why an e5-mistral (arch=llama) or BiQwen2 (arch=qwen2vl) embedding model with {arch}.attention.causal not set (so, defaulted to True) gets non-causal attention as of this PR just because completions are disabled. That used to be something you could override explicitly with --attention non-causal, but this was removed from the server in #9308 for some reason.

See also #13247.
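
As a side note on the C API (a sketch with assumed llama.h field and enum names, not a description of what llama-server does): the attention type can still be pinned explicitly when the context is created, roughly the programmatic counterpart of the removed --attention flag.

```cpp
#include "llama.h"

// sketch: create an embedding context for a decoder-only model (e.g. an
// e5-mistral-style GGUF) with causal attention requested explicitly instead
// of relying on the default
static llama_context * make_causal_embedding_ctx(llama_model * model) {
    llama_context_params cparams = llama_context_default_params();

    cparams.embeddings     = true;
    cparams.pooling_type   = LLAMA_POOLING_TYPE_LAST;
    cparams.attention_type = LLAMA_ATTENTION_TYPE_CAUSAL;

    return llama_init_from_model(model, cparams);
}
```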

@ggerganov (Member, Author) commented May 19, 2025

The main reason is performance and lower memory usage. Non-causal models do not require a KV cache to be evaluated, and the concept of batching into micro-batches is not relevant for them. So it is better to avoid these extra steps, which llama_decode() performs.

Such models should be updated to construct a context that does not use a memory (i.e. a KV cache). See the BERT models as an example. (nvm, see comment below)

@ggerganov (Member, Author) replied:

> NaNs aside, I don't understand why an e5-mistral (arch=llama) or BiQwen2 (arch=qwen2vl) embedding model with {arch}.attention.causal not set (so, defaulted to True) gets non-causal attention as of this PR just because completions are disabled.

@cebtenzzre Should be fixed with #13797
