8000 Fix issue of missing words due to buffer overflow · twobob/llama-cpp-python@4100bde · GitHub
Commit 4100bde

Fix issue of missing words due to buffer overflow
1 parent 9ab49bc commit 4100bde

File tree: 1 file changed (+1, −1)


llama_cpp/llama.py

Lines changed: 1 addition & 1 deletion
@@ -445,7 +445,7 @@ def detokenize(self, tokens: List[int]) -> bytes:
         """
         assert self.model is not None
         output = b""
-        size = 8
+        size = 16
         buffer = (ctypes.c_char * size)()
         for token in tokens:
             n = llama_cpp.llama_token_to_str_with_model(
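The one-line change doubles the fixed-size ctypes buffer that each token's text is written into. With size = 8, any token whose detokenized text was longer than 8 bytes was silently truncated, which is how words went missing from the output. A minimal sketch of the failure mode, using plain ctypes to stand in for the native llama_token_to_str_with_model call (the helper name copy_token_text is hypothetical, not part of llama-cpp-python):

```python
import ctypes

def copy_token_text(text: bytes, size: int) -> bytes:
    # Simulates the native call writing a token's text into a
    # fixed-size ctypes buffer: at most `size` bytes are copied,
    # anything beyond that is silently dropped.
    buffer = (ctypes.c_char * size)()
    n = min(len(text), size)
    ctypes.memmove(buffer, text, n)
    return buffer.raw[:n]

# A token whose text is longer than 8 bytes is clipped at the old size
# but survives intact at the new one.
token_text = b" consideration"  # 14 bytes
assert copy_token_text(token_text, 8) == b" conside"      # old size=8: truncated
assert copy_token_text(token_text, 16) == token_text      # new size=16: intact
```

Note that 16 bytes is still a fixed cap; a more robust fix would size the buffer per token (or retry with a larger buffer when the native call reports the text did not fit), but the commit keeps the simple fixed-size approach.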

0 commit comments
