llama : initial Mamba-2 support by compilade · Pull Request #9126 · ggml-org/llama.cpp
llama : initial Mamba-2 support #9126

Open · compilade wants to merge 33 commits into master

Conversation

@compilade (Collaborator) commented Aug 21, 2024

Follow-up from #8519 (comment). This should fix #7727 and fix #8519.

I've implemented the fully recurrent mode of Mamba-2, because it's very similar to Mamba-1, and also because it seems like the most appropriate mode for text generation.
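For reference, the per-head recurrence behind this mode can be written roughly as follows (my paraphrase of the SSD formulation from the Mamba-2 paper, not text from this PR); here $h_t$ is the per-head state, $x_t$ the per-head input, $B_t, C_t$ the input-dependent projections, $\Delta_t$ the time step, $a$ the per-head decay, and $D$ the skip connection:

$$
h_t = \exp(\Delta_t\, a)\, h_{t-1} + (\Delta_t\, x_t)\, B_t^\top, \qquad y_t = h_t\, C_t + D\, x_t
$$

Because $a$ is a scalar per head, the decay factor $\exp(\Delta_t a)$ is computed once per head rather than element-wise, which is what makes the SIMD implementation described below straightforward.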

This does not implement the sequentially semistructured matrix mode, because I'm not yet sure how the block decomposition would fit within the batch and ubatch framework of llama.cpp, nor how the chunk size should be chosen. And since the recurrent mode is likely the faster one for single-user auto-regressive text generation, I'm also not sure how to keep the graph node structure constant if the most appropriate technique were picked based on the batch size.

If the sequentially semistructured matrix mode is eventually implemented, it should help with prompt processing speed for large prompts.

What to expect

(mostly taken from #8519 (comment))

The state in Mamba-2 is bigger than I thought; Mamba-Codestral-7B-v0.1 takes 263.5 MiB (in F32) per sequence (e.g. with -np 1), compared to 38 MiB (also in F32) for Falcon-Mamba-7B (which is based on Mamba-1). But that size remains constant regardless of the context length. Mamba-2 is easier to implement efficiently, so the bigger state does not really impede inference speed.
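As a rough sanity check (assuming the published Mamba-Codestral-7B-v0.1 hyperparameters: 64 layers, 128 heads of size 64, d_state = 128, d_conv = 4, 8 groups, d_inner = 8192; these numbers are my own, not from the PR), the per-sequence state size works out to:

$$
\underbrace{64 \cdot (128 \cdot 64 \cdot 128) \cdot 4\,\text{B}}_{\text{SSM states}\;=\;256\ \text{MiB}} \;+\; \underbrace{64 \cdot 3 \cdot (8192 + 2 \cdot 8 \cdot 128) \cdot 4\,\text{B}}_{\text{conv states}\;=\;7.5\ \text{MiB}} \;=\; 263.5\ \text{MiB}
$$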

However, a big downside right now with recurrent models in llama.cpp is the lack of state rollback (which is implemented through state checkpoints in #7531, but needs to be re-adapted to #8526), so the prompt will be reprocessed a lot when using llama-server. I think using llama-cli in conversation mode does not have this problem (or maybe that only holds for the bare interactive mode with --in-prefix and --in-suffix, I'm not sure).

This initial implementation is CPU-only, but it uses SIMD for the SSM scan. So even though the state is bigger than for Mamba-1 models, in my tests the speed of Mamba2-130M is similar to or better than that of Mamba-130M (though still not that fast compared to transformer-based models with an empty context), when both are run on CPU.

The speed of Mamba-2 models seems comparable to Transformer-based models when the latter have 2k to 4k tokens in their context.

Summary of changes

  • Add support for Mamba2ForCausalLM (including the official Mamba-2 models, and Mamba-Codestral-7B-v0.1)
    • Note that config.json needs to contain "architectures": ["Mamba2ForCausalLM"], for the convert script to properly detect the architecture.
  • View Mamba-1 as having d_inner (aka 2 * n_embd) heads of size 1.
    • This simplifies the handling of shapes in ggml_ssm_scan.
  • ggml
    • Implement Mamba-2's selective state update in ggml_ssm_scan (a simplified sketch of the per-head update follows this list).
      • Re-using the same operator as Mamba-1, because it's pretty much the same operation (except for how ssm_a is broadcast).
    • Fuse the multiplication with ssm_d into ggml_ssm_scan.
      • Otherwise it would need to be transposed, because the dot-products are done head-wise.
    • Implement Mamba-2's SSM scan with GGML_SIMD.
      • This is possible because, unlike with Mamba-1, there is no element-wise expf in the state update.
    • Avoid state copies for the SSM state (both for Mamba-1 and Mamba-2) by passing state ids to ggml_ssm_scan.
      • Mamba-2 states are huge. Otherwise, masking and copying took close to 10% of the CPU time according to perf.
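To make the fused operation more concrete, here is a minimal, illustrative C++ sketch of the per-head update for a single token (scalar code, no SIMD; the function name and signature are hypothetical and do not match the actual ggml_ssm_scan implementation):

```cpp
#include <cmath>

// One Mamba-2 head, one token:
//   state: [head_dim * d_state]  previous SSM state for this head (updated in place)
//   x:     [head_dim]            input for this head
//   B, C:  [d_state]             input-dependent projections
//   dt:    time step (already softplus'ed), a: per-head decay, D: skip scale
//   y:     [head_dim]            output for this head
static void ssm_scan_head_sketch(float * state, const float * x,
                                 const float * B, const float * C,
                                 float dt, float a, float D,
                                 int head_dim, int d_state, float * y) {
    const float dA = std::exp(dt * a);        // one exp per head, not per element
    for (int p = 0; p < head_dim; ++p) {
        const float dtx = dt * x[p];
        float * s = state + p * d_state;
        float sumC = 0.0f;
        for (int n = 0; n < d_state; ++n) {
            s[n] = s[n] * dA + dtx * B[n];    // h = exp(dt*a)*h + (dt*x) outer B
            sumC += s[n] * C[n];              // y = h . C (head-wise dot-product)
        }
        y[p] = sumC + D * x[p];               // fused multiplication with ssm_d
    }
}
```

The inner loop is a plain multiply-add over the state dimension with no per-element expf, which is why it maps well onto GGML_SIMD.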

Other

Here's my favorite quote from Section 3.3 of https://arxiv.org/abs/2405.21060:

Furthermore—by a twist of fate—structured state space models and sequentially semiseparable matrices have the same acronyms, underscoring their equivalence! Conveniently we can use any of these acronyms SSM (state space model or semiseparable matrix), SSS (structured state space or sequentially semiseparable), or SS (state space or semiseparable) interchangeably to unambiguously refer to either concept.

TODO

  • Rebase onto master after merging llama : simplify Mamba with advanced batch splits #8526.
  • Avoid unnecessary moves of the state
  • Adapt the Metal kernels and the tests from ggml : add SSM Metal kernels #8546 to the updated ggml_ssm_scan
  • Remove the new GGML_MUL fast broadcast path because it's not used anymore to mask the states.
  • Maybe use a new metadata key instead of {arch}.ssm.time_step_rank for the number of heads of Mamba-2, because it's not really the rank of the time step (well, maybe kind of).
    • The meaning of the number of heads and the time-step rank is overlapping enough in Mamba-2 that I think this is fine.
  • Maybe not fuse the multiplication with ssm_d in ggml_ssm_scan?
  • Maybe split ggml_ssm_scan to separate the implementations for Mamba-1 and Mamba-2, although they do have a lot in common.
    • Seems like they can be distinguished easily enough at the time of kernel dispatch.

@compilade compilade marked this pull request as draft August 21, 2024 21:51
@github-actions github-actions bot added the python (python script changes) and ggml (changes relating to the ggml tensor library for machine learning) labels Aug 21, 2024
* ggml : improve ggml_mul speed when masking recurrent states
* ggml : make the ggml_mul fast broadcast path more consistently formatted
@compilade compilade changed the base branch from compilade/batch-splits to master August 21, 2024 22:02
@compilade compilade marked this pull request as ready for review August 21, 2024 22:02
@compilade compilade added the Review Complexity : Medium (generally requires more time to grok but manageable by beginner to medium expertise level) label Aug 21, 2024
@ngxson (Collaborator) commented Aug 22, 2024

Hey @compilade, thanks for implementing this!

I tried converting https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1 using convert_hf_to_gguf.py, but it gives this error:

    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'config.json'

Nevertheless, I successfully converted a Mamba-Codestral transformers-compatible model: https://huggingface.co/Molbap/code2 (I needed to comment out the line raise NotImplementedError("BPE pre-tokenizer was not recognized - update get_vocab_base_pre()") in convert_hf_to_gguf.py)

To run the output model (remember to select the correct chat template, since the model does not come with one):

make llama-cli -j && ./llama-cli -m ../models/mcode-7.3B-Q8_0.gguf -cnv -p "You are a helpful assistant" --chat-template mistral -ngl 0

The result looks promising, but I have no idea why there are [UNK_BYTE_0x29681...] tokens. It seems like there is a problem with the space character:

<<SYS>>Youareahelpfulassistant<</SYS>>
> hi
[UNK_BYTE_0xe29681▁Hello]Hello![UNK_BYTE_0xe29681▁How]How[UNK_BYTE_0xe29681▁can]can[UNK_BYTE_0xe29681▁I]I[UNK_BYTE_0xe29681▁assist]assist[UNK_BYTE_0xe29681▁you]you[UNK_BYTE_0xe29681▁today]today?

Link to download GGUF: https://huggingface.co/ngxson/codestral-mamba-llamacpp-test/tree/main

@compilade (Collaborator, Author) commented Aug 22, 2024

Hey @compilade, thanks for implementing this!

I tried converting https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1 using convert_hf_to_gguf.py, but it gives this error:

    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'config.json'

@ngxson

The steps I took to convert Mamba-Codestral-7B-v0.1 are the following:

  1. Rename consolidated.safetensors to model.safetensors
  2. Rename params.json to config.json
  3. Add the line "architectures": ["Mamba2ForCausalLM"], in config.json
  4. Rename tokenizer.model.v3 to tokenizer.model
  5. Use convert_hf_to_gguf.py as usual.

I did not have tokenization problems in my tests, maybe because I was using the original SentencePiece tokenizer instead of a BPE tokenizer.

That tokenizer.json in the transformers-compatible version seems to have problematic spaces. It uses the SentencePiece space escaping instead of the BPE one. Its normalizer seems to revert the escaping, but that's not handled in llama.cpp.

There are probably still problems with the SentencePiece tokenizer too, like the lack of special tokens. (Control tokens seem to be identified correctly; the only difference seems to be with the 20 [REFERENCE_DOC_{n}] tokens (where n is 0 to 19), which tokenizer.json identifies as non-special added tokens (mapped to USER_DEFINED in llama.cpp), while tokenizer.model identifies them as NORMAL tokens.)

I think the SentencePiece tokenizer should be preferred for this model; it should be easier to handle without workarounds. I should change that in convert_hf_to_gguf.py. Meanwhile, either don't include tokenizer.json or rename it to something else.

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires workarounds to work correctly.
@ngxson (Collaborator) commented Aug 23, 2024

Thanks for the guide! I've successfully converted the original repository to GGUF by following your steps.

For the transformers-compatible version, I will try to contact the person who made it. Hopefully it will be fixed soon.

I'm wondering if convert_hf_to_gguf.py can automatically handle the renaming of params.json, consolidated.safetensors, and tokenizer.model.v3? For now, my fear is that someone who uses automated tools like gguf-my-repo will be stuck due to this issue.

(Also cc @Vaibhavs10 since he's the maintainer of gguf-my-repo.)

@Vaibhavs10 (Collaborator) left a comment

Hey @compilade / @ngxson - JFYI, the transformers weights are now merged into the main repo: https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1

If you face any issues with the conversion, could you open an issue on the repo for us to track? 🤗

@1ns0mni4c commented

Any updates on when Codestral Mamba should be supported?

@learning-chip commented

Nice work! Just a note on the ssm_scan kernel performance: a better fused implementation by the flash-linear-attention project provides functionality equivalent to Mamba-2's original kernel (fla-org/flash-linear-attention#49) and runs 2x faster (fla-org/flash-linear-attention#50).

@molbap commented Sep 16, 2024

Hi @compilade! I worked on the repo conversion for the transformers-compatible mamba2 version; let us know if you need anything from us to move forward with this PR :)

@HanClinto (Collaborator) commented

I'm wondering if convert_hf_to_gguf.py can automatically handle the renaming of params.json, consolidated.safetensors, and tokenizer.model.v3? For now, my fear is that someone who uses automated tools like gguf-my-repo will be stuck due to this issue.

(Also cc @Vaibhavs10 since he's the maintainer of gguf-my-repo.)

It sounds like having a simple fallback of expected filenames would be a reasonable thing to include here? I don't know that we want to maintain a ton of different ones, but adding a second layer of fallbacks for alternate filenames doesn't feel arduous.

@compilade (Collaborator, Author) commented

It sounds like having a simple fallback of expected filenames would be a reasonable thing to include here? I don't know that we want to maintain a ton of different ones, but adding a second layer of fallbacks for alternate filenames doesn't feel arduous.

@HanClinto

That's not really a problem anymore (at least for Mamba-Codestral) since the official repo was updated in https://huggingface.co/mistralai/Mamba-Codestral-7B-v0.1/commit/88085f9cdfa832c3aca8a0315a4520cf7558c947 to use more standard names.

What is currently blocking this is that the Metal and CUDA kernels for ggml_ssm_scan need to be updated. But before that, I want to refactor the operator to completely avoid copying Mamba-2 states, because otherwise the unnecessary copies use a non-negligible fraction of the memory bandwidth (10% of total text-generation inference time on my laptop), since Mamba-2 states are big.

@hg0428 commented Oct 1, 2024

Any updates on this?

@github-actions github-actions bot added the testing (everything test related) label Oct 1, 2024
@Tangshengku commented Feb 28, 2025

@compilade So, it means we can either:

  1. modify the computation graph for the support, but obviously a lot of code would be reused, which messes up the codebase, or
  2. write a custom data type for such binary tensors, and use the original mamba2 codebase for support.

Seems like the 2nd one could be a better choice for us.

If you want, I can start making a prototype for a binary type in ggml, but I encourage you to give it a try.

Yeah, I would definitely like to give it a try. I am excited about it. I will post again if there are any issues.

Thanks a lot!


BTW, what do you mean by 'TQ1_0 and TQ2_0 are not good for this model'? Do you mean the ppl will be bad, or the speed & memory will be bad? I tried Bi-Mamba with both TQ1_0 and TQ2_0, and the ppl is fine, as expected. I guess you're referring mostly to the memory usage and speed.

@compilade (Collaborator, Author) commented Feb 28, 2025

BTW, what do you mean by 'TQ1_0 and TQ2_0 are not good for this model'? Do you mean the ppl will be bad, or the speed & memory will be bad? I tried Bi-Mamba with both TQ1_0 and TQ2_0, and the ppl is fine, as expected. I guess you're referring mostly to the memory usage and speed.

@Tangshengku
I mean the ppl is bad when using the unmodified Mamba-2 model graph for Bi-Mamba with TQ2_0 and TQ1_0, when the scale and bias are fused into the model weights. That is because these types don't have a bias term, and so they can't properly encode Bi-Mamba.
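Roughly speaking (my own sketch of the mismatch, using the per-column scale and bias mentioned later in this thread): Bi-Mamba weights are reconstructed as something like

$$
W \approx \alpha \odot \operatorname{sign}(W) + \beta,
$$

with per-column vectors $\alpha$ and $\beta$, whereas TQ1_0 and TQ2_0 reconstruct each block as a single scale times ternary values in $\{-1, 0, +1\}$, so there is nowhere to put the $+\beta$ term once it's fused into the quantized weights.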

But yes, speed and memory can also be better with a dedicated binary type.

@Tangshengku commented

@compilade Hello, I am back with the Bi-Mamba data type implementation. Some of the code was written with the help of ChatGPT. The implementation is here: https://github.com/Tangshengku/llama.cpp/tree/compilade/mamba2

Current status:

  1. Implemented the binary datatype with proper quant and dequant functions.
  2. During inference, the scaling and bias factors are stored independently and applied in the forward function. Check here: https://github.com/Tangshengku/llama.cpp/blob/compilade/mamba2/src/llama.cpp#L10522.

I have tested this implementation with the Bi-Mamba 2.7B model; the perplexity is fine and unchanged. The speed is better compared with directly using Q4_0 or TQ2_0:

| model | size | params | backend | threads | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| bimamba2 3B bi | 426.85 MiB | 2.74 B | Metal,BLAS | 8 | pp512 | 200.03 ± 5.33 |
| bimamba2 3B bi | 426.85 MiB | 2.74 B | Metal,BLAS | 8 | tg128 | 77.78 ± 1.28 |
| bimamba2 3B TQ2_0 - 2.06 bpw ternary | 752.57 MiB | 2.74 B | Metal,BLAS | 8 | pp512 | 197.47 ± 3.82 |
| bimamba2 3B TQ2_0 - 2.06 bpw ternary | 752.57 MiB | 2.74 B | Metal,BLAS | 8 | tg128 | 62.70 ± 1.36 |
| bimamba2 3B Q4_0 | 1.46 GiB | 2.74 B | Metal,BLAS | 8 | pp512 | 263.75 ± 5.73 |
| bimamba2 3B Q4_0 | 1.46 GiB | 2.74 B | Metal,BLAS | 8 | tg128 | 61.47 ± 0.90 |

The speed was tested on an M4 Pro CPU.

Further optimization:

  1. I tried to merge the scaling and bias factors into the data type, but I found that the scaling and bias factors are column-wise vectors rather than a single scalar as in Bitnet. I am not sure how to fuse them into the data type.
  2. I rewrote the forward function and named it bi-mamba, but it seems unnecessary to do so. Possible solutions: 1) implement the data type with the scaling and bias factors fused in and use your mamba2 forward function directly, or 2) modify your mamba2 forward function with a scaling-factor condition (if it exists, then compute it).
  3. Not sure if there's any further way to optimize the memory and speed.

How to reproduce the results:

  1. Convert the original bi-mamba weight (not the fused safetensors) with:
python convert_hf_to_gguf.py xxx/bimamba/2.7B --model-name mamba2-2.7B.gguf \
    --outfile ./ckpt/mamba2-2.7B/ --outtype f16
  2. Quantize the model like:
./build/bin/llama-quantize ./ckpt/mamba2-2.7B bi_0
  3. Run bench or ppl:
./build/bin/llama-bench -m ./ckpt/ggml-model-BI_0.gguf --n-gpu-layers 0

@gabe-l-hart (Contributor) commented

Cross-posting progress on sync'ing this branch with master (b1dd4d0): #7531 (comment)

@compilade (Collaborator, Author) commented

There is a problem with multi-user (and/or parallel sequence) inference for recurrent models (it also happens on master, so this branch might have inherited the problem by merging the latest changes).

I'll try to figure out what the problem is.

Like I said in #7531 (comment), there's a problematic early return true in the recurrent case in seq_rm (fixed here), but there's also something else which makes it seem like recurrent states of sequences are not properly isolated. This is also a problem on master. I'm not sure what introduced the problem exactly, but I'll report back if/when I find a fix.

And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.
@compilade (Collaborator, Author) commented May 2, 2025

but there's also something else which makes it seem like recurrent states of sequences are not properly isolated

I found the problem! It was introduced in #12181

const uint32_t cell_id = i + kv_self->head;
//////////////////////////////////////////////
// TODO: this should not mutate the KV cache !
llama_kv_cell & kv_cell = const_cast<class llama_kv_cache_unified *>(kv_self)->cells[i];

The problem is not actually the const_cast, but that i is used as the cell index when it should have been cell_id.
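For illustration, the indexing part of the fix amounts to something like the following (a sketch only; as noted below, the actual change in 94c3d53 also removes the const_cast):

```cpp
const uint32_t cell_id = i + kv_self->head;
// use the absolute cell index, not the batch-relative index i
llama_kv_cell & kv_cell = const_cast<class llama_kv_cache_unified *>(kv_self)->cells[cell_id];
```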

But in 94c3d53 I've also removed the const_cast by staging kv_cell.src into kv_cell.src0.

Now the only thing left is to adapt the CUDA kernel for the SSM scan (added in #10558) to Mamba2.

@ggerganov (Member) commented

Thanks for tracking this down. I'll see if I can merge #12799 today, so we can start building on top of it.

@gabe-l-hart (Contributor) commented

Hi @compilade @ggerganov, quick check on the plan for this branch. I'm continuing to push towards Granite 4 support and I think I'm close to an initially functional version of bamba with the hybrid cache, but it's dependent on this branch, so I'd love to understand if there's a plan for getting this branch over the line.

@compilade (Collaborator, Author) commented

@gabe-l-hart

I've been attempting to adapt the CUDA implementation of the SSM_SCAN operator to how it's modified for Mamba-2 (some shape changes, plus an extra input tensor for the state ids, to allow avoiding unnecessary copies when reordering the states). It might conflict with #13291, but it should not be hard to adapt those changes to the new structure of the operator if it gets merged first.

This is pretty much the last step I think, unless the proposed changes need more fundamental modifications (e.g. splitting the SSM_SCAN operator into multiple operators).

I'll try to let you know how this progresses. Right now, in my local changes, the Mamba-1 part of the CUDA operator works with the new structure, but not yet the Mamba-2 part.
I will push the work-in-progress changes to a branch (and notify you here) hopefully soon, when I have the time (busy week, but likely tomorrow or the day after).

@gabe-l-hart (Contributor) commented

Thanks for the update, and much appreciated on the hard work!

@gabe-l-hart gabe-l-hart mentioned this pull request May 14, 2025
@Tangshengku commented

@compilade Hi, I am wondering if I can merge my Bi-Mamba implementation into this branch? It works well on CPU on my side. Or should I open another merge request after your GPU implementation?

@gabe-l-hart (Contributor) commented May 27, 2025

Hi @compilade, I just wanted to check in and see how things are looking for mamba2 and whether there's any kind of planned timeline for getting it merged. Since this is upstream of all the Granite 4 work, I would love to see the structure of mamba2 merged soon in order to get CPU inference fully supported while the Metal/CUDA implementations continue to be refined. Would it be possible to move forward with these as separate pieces of work?

@ggerganov (Member) commented

We should first merge #13746 since it significantly reworks the KV cache logic and interface. There are some comments there about the recurrent cache implementation that I think would be nice to address first (they might have already been addressed in this PR, but they can be upstreamed separately from the Mamba implementation).

After that is done, we should be able to merge the rest of the code from this PR.

@gabe-l-hart (Contributor) commented

@ggerganov Thanks for the update! Is #13746 the end of the chain for KV-cache refactors before we want to address hybrid caching (#13276) directly? I'm trying to keep the Granite 4 pieces as up-to-date as possible with the inbound changes, so just trying to get a handle on what else to expect.

@compilade (Collaborator, Author) commented May 27, 2025

There are some comments there about the recurrent cache implementation that I think would be nice to be addressed first (I think they might have been already addressed in this PR, but they can be upstreamed separately from the Mamba implementation).

@ggerganov I assume you likely mean making the kv-cells fully read-only when setting the inputs, and maybe also the removal of inp_s_mask? (Both of which are already implemented here.)
Would you prefer a separate PR targeting master or targeting #13746?

@ggerganov (Member) commented

@gabe-l-hart I hope it is very near to the end. I have at least one more PR queued after that, related to the KV cache, with some more minor changes.

@compilade Let's target #13746

Labels: Apple Metal · ggml · python · Review Complexity : Medium · testing

Successfully merging this pull request may close these issues: Feature Request: Support Codestral Mamba · llama : support Mamba-2