sync: minja by ochafik · Pull Request #11352 · ggml-org/llama.cpp · GitHub

Merged
merged 1 commit into from Jan 22, 2025

Conversation

@ochafik ochafik (Collaborator) commented Jan 22, 2025

The template of MiniMaxAI/MiniMax-Text-01 is fixed (in --jinja mode) by google/minja#29 (ref, cc @fairydreaming)

@ochafik ochafik changed the title contrib: sync minja to support MiniMaxAI/MiniMax-Text-01 template w/ --jinja sync: minja Jan 22, 2025
@ochafik ochafik marked this pull request as ready for review January 22, 2025 15:33
@fairydreaming (Collaborator)

I tested the combination of #11016 and #11352 in my minimax-text-01 branch, and:

  • llama.cpp no longer crashes during model load
  • after passing -p "You are a helpful assistant" -cnv --jinja to llama-cli, I can chat with the model (without --jinja it throws the exception "this custom template is not supported", as expected)

Jinja support is pretty cool, thanks for implementing this!
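For reference, the invocation described above can be sketched as follows. The build directory and model filename are hypothetical; adjust them to your local build and GGUF file.

```shell
# Hypothetical paths: point -m at your MiniMax-Text-01 GGUF file.
# --jinja enables minja-based chat-template rendering; without it,
# llama.cpp rejects this model's custom chat template with
# "this custom template is not supported".
./build/bin/llama-cli \
  -m models/MiniMax-Text-01.gguf \
  -p "You are a helpful assistant" \
  -cnv --jinja
```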

@ochafik ochafik merged commit c64d2be into ggml-org:master Jan 22, 2025
45 checks passed
anagri pushed a commit to BodhiSearch/llama.cpp that referenced this pull request Jan 26, 2025
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Feb 26, 2025
mglambda pushed a commit to mglambda/llama.cpp that referenced this pull request Mar 8, 2025