Fix example documentation · coderonion/llama-cpp-python@eef627c · GitHub

Commit eef627c
Fix example documentation
1 parent a836639 commit eef627c

1 file changed: 5 additions, 4 deletions

llama_cpp/llama.py

@@ -127,10 +127,11 @@ def generate(
     ]:
         """Generate tokens.
 
-        >>> llama = Llama("models/117M")
-        >>> tokens = llama.tokenize(b"Hello, world!")
-        >>> for token in llama.generate(tokens, top_k=40, top_p=0.95, temp=1.0, repeat_penalty=1.1):
-        ...     print(llama.detokenize([token]))
+        Examples:
+            >>> llama = Llama("models/ggml-7b.bin")
+            >>> tokens = llama.tokenize(b"Hello, world!")
+            >>> for token in llama.generate(tokens, top_k=40, top_p=0.95, temp=1.0, repeat_penalty=1.1):
+            ...     print(llama.detokenize([token]))
 
         Args:
             tokens: The prompt tokens.
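
For context, the corrected docstring describes llama-cpp-python's low-level generation loop. A minimal runnable sketch of that usage follows; it assumes the generate() signature shown in the docstring at this commit, and the model path is a placeholder for any local GGML model file. Note that generate() yields tokens indefinitely, so the sketch adds its own stopping condition.

from llama_cpp import Llama

# Load a local model; the path is a placeholder, not a file shipped with the library.
llama = Llama("models/ggml-7b.bin")

# Tokenize the prompt (bytes in, token ids out).
tokens = llama.tokenize(b"Hello, world!")

# Stream tokens with the same sampling parameters as the docstring example.
# generate() does not stop on its own here, so cap the number of tokens.
for i, token in enumerate(
    llama.generate(tokens, top_k=40, top_p=0.95, temp=1.0, repeat_penalty=1.1)
):
    print(llama.detokenize([token]))  # detokenize() returns bytes
    if i >= 16:
        break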
