1 parent 9f528f4 · commit d410f12
llama_cpp/llama.py
@@ -228,7 +228,7 @@ def __init__(
             model_path: Path to the model.
             n_ctx: Maximum context size.
             n_parts: Number of parts to split the model into. If -1, the number of parts is automatically determined.
-            seed: Random seed. 0 for random.
+            seed: Random seed. -1 for random.
             f16_kv: Use half-precision for key/value cache.
             logits_all: Return logits for all tokens, not just the last token.
             vocab_only: Only load the vocabulary no weights.
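
The docstring change above switches the sentinel for "pick a random seed" from 0 to -1, matching the convention used elsewhere in the parameters (e.g. `n_parts`). A minimal sketch of how such a sentinel is typically resolved — `resolve_seed` is a hypothetical helper for illustration, not part of llama-cpp-python's API:

```python
import random

def resolve_seed(seed: int) -> int:
    """Map the sentinel value -1 to a freshly drawn random seed.

    Mirrors the convention documented in the diff above: any
    non-negative seed is used as-is for reproducibility, while
    -1 requests a new random seed on each run.
    """
    if seed == -1:
        return random.randrange(2**31)
    return seed
```

Using -1 rather than 0 as the sentinel leaves 0 available as an ordinary, reproducible seed value.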