8000 fix: llama_grammar_accept_token arg order (#1649) · CISC/llama-cpp-python@5575fed · GitHub
Commit 5575fed

fix: llama_grammar_accept_token arg order (abetlen#1649)

Old call order: llama_grammar_accept_token(ctx, grammar, token)
New call order: llama_grammar_accept_token(grammar, ctx, token)
1 parent f7b9e6d commit 5575fed

File tree: 1 file changed (+1, -1 lines changed)


llama_cpp/_internals.py (1 addition, 1 deletion)

@@ -511,7 +511,7 @@ def sample_token(self, candidates: "_LlamaTokenDataArray") -> int:
     def grammar_accept_token(self, grammar: LlamaGrammar, token: int):
         assert self.ctx is not None
         assert grammar.grammar is not None
-        llama_cpp.llama_grammar_accept_token(self.ctx, grammar.grammar, token)
+        llama_cpp.llama_grammar_accept_token(grammar.grammar, self.ctx, token)
 
     def reset_timings(self):
         assert self.ctx is not None
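The bug here is purely positional: the Python wrapper was passing the context pointer where the C binding expects the grammar pointer, and vice versa. A minimal sketch of why this matters, using stub classes in place of the real `llama_grammar_p`/`llama_context_p` ctypes pointers (the stub function and types below are hypothetical, not the real binding):

```python
class Grammar:
    """Stand-in for the real llama_grammar_p pointer type (hypothetical)."""
    pass

class Context:
    """Stand-in for the real llama_context_p pointer type (hypothetical)."""
    pass

def llama_grammar_accept_token(grammar, ctx, token):
    # Mimics the post-fix parameter order: grammar first, context second.
    # A type check stands in for the segfault/undefined behavior that
    # swapped ctypes pointers would cause in the real binding.
    if not isinstance(grammar, Grammar) or not isinstance(ctx, Context):
        raise TypeError("arguments passed in the wrong order")
    return token

# Correct (post-fix) call order: grammar, then context, then token.
result = llama_grammar_accept_token(Grammar(), Context(), 42)

# The pre-fix order (ctx, grammar, token) now fails fast in this sketch:
try:
    llama_grammar_accept_token(Context(), Grammar(), 42)
    swapped_ok = True
except TypeError:
    swapped_ok = False
```

In the real ctypes binding a swap like this is far more dangerous than in the sketch: both arguments are opaque pointers, so nothing is type-checked at call time and the C code simply dereferences the wrong struct.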

0 commit comments

Comments
 (0)
0