fix: Additional fixes for speculative decoding · lmbelo/llama-cpp-python@e975dab · GitHub
Commit e975dab

fix: Additional fixes for speculative decoding
1 parent 11d9562 commit e975dab

1 file changed (+1 −1)


llama_cpp/llama.py

Lines changed: 1 addition & 1 deletion
@@ -930,7 +930,7 @@ def generate(
 
                 sample_idx += 1
                 if stopping_criteria is not None and stopping_criteria(
-                    self._input_ids, self._scores[-1, :]
+                    self._input_ids[: sample_idx], self._scores[sample_idx - self.n_tokens, :]
                 ):
                     return
                 tokens_or_none = yield token
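The one-line change adjusts what the `stopping_criteria` callback sees inside `Llama.generate`. With speculative decoding, a batch can accept several draft tokens at once, so `sample_idx` can trail `self.n_tokens`; indexing with `self._input_ids[: sample_idx]` and `self._scores[sample_idx - self.n_tokens, :]` passes only the tokens yielded so far plus the score row for the current sample, instead of the full buffer and the last drafted token's logits.

Below is a minimal sketch, not part of this commit, of how a caller-supplied stopping criterion is wired in alongside a draft model. The model path is a placeholder, and `LlamaPromptLookupDecoding` is used with its defaults; the callable signature (token ids so far, logits for the latest token) is what the patched call site passes.

```python
# Sketch only: a stopping criterion used together with speculative decoding.
import numpy as np
from llama_cpp import Llama, StoppingCriteriaList
from llama_cpp.llama_speculative import LlamaPromptLookupDecoding

llm = Llama(
    model_path="./models/model.gguf",          # placeholder path
    draft_model=LlamaPromptLookupDecoding(),   # prompt-lookup speculative decoding
)

def stop_on_newline(input_ids: np.ndarray, logits: np.ndarray) -> bool:
    # input_ids: tokens yielded so far; after this fix, draft tokens that have
    # not been yielded yet are excluded, so input_ids[-1] is the current token.
    # logits: the score row corresponding to that same token.
    return llm.detokenize([int(input_ids[-1])]).endswith(b"\n")

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stopping_criteria=StoppingCriteriaList([stop_on_newline]),
)
print(out["choices"][0]["text"])
```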

0 commit comments
