Merge pull request #215 from zxybazh/main · ashrafulsbmcbd/llama-cpp-python@1a13d76

Commit 1a13d76

Merge pull request abetlen#215 from zxybazh/main
Update README.md
2 parents e0cca84 + 408dd14 commit 1a13d76

File tree

1 file changed: +1 −1 lines changed

README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -126,7 +126,7 @@ Below is a short example demonstrating how to use the low-level API to tokenize
 >>> ctx = llama_cpp.llama_init_from_file(b"./models/7b/ggml-model.bin", params)
 >>> max_tokens = params.n_ctx
 # use ctypes arrays for array params
->>> tokens = (llama_cppp.llama_token * int(max_tokens))()
+>>> tokens = (llama_cpp.llama_token * int(max_tokens))()
 >>> n_tokens = llama_cpp.llama_tokenize(ctx, b"Q: Name the planets in the solar system? A: ", tokens, max_tokens, add_bos=llama_cpp.c_bool(True))
 >>> llama_cpp.llama_free(ctx)
```
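The corrected line relies on the standard ctypes idiom for building fixed-size array parameters: multiplying a ctypes type by an integer produces an array type, and calling that type allocates a zero-initialized array. A minimal stdlib-only sketch of the pattern (using `ctypes.c_int` as a stand-in for `llama_token`, which is an integer typedef; no `llama_cpp` import required):

```python
import ctypes

# Stand-in for max_tokens = params.n_ctx in the README example.
max_tokens = 8

# Type * int yields an array type, mirroring
# (llama_cpp.llama_token * int(max_tokens)) in the patched line.
TokenArray = ctypes.c_int * max_tokens

# Calling the array type allocates a zero-initialized buffer that
# can be passed where a C function expects a pointer-to-token array.
tokens = TokenArray()

print(len(tokens), tokens[0])  # → 8 0
```

This is why the one-character typo mattered: `llama_cppp` is not a defined module, so the original line would raise a `NameError` before the array was ever constructed.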

0 commit comments