Formatting · coderonion/llama-cpp-python@0067c1a · GitHub

Commit 0067c1a

Formatting
1 parent 0a5c551 commit 0067c1a

File tree

1 file changed: +1 −1 lines changed

llama_cpp/server/__main__.py

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ class Settings(BaseSettings):
     n_batch: int = 8
     n_threads: int = ((os.cpu_count() or 2) // 2) or 1
     f16_kv: bool = True
-    use_mlock: bool = False # This causes a silent failure on platforms that don't support mlock (e.g. Windows) took forever to figure out...
+    use_mlock: bool = False # This causes a silent failure on platforms that don't support mlock (e.g. Windows) took forever to figure out...
     embedding: bool = True
     last_n_tokens_size: int = 64
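
For context, a minimal sketch of the Settings class around this hunk, reconstructed only from the lines visible in the diff; the pydantic BaseSettings import and the module-level placement are assumptions, not taken from this commit.

import os
from pydantic import BaseSettings  # assumed import; early llama-cpp-python server code used pydantic v1 settings

class Settings(BaseSettings):
    n_batch: int = 8
    # use roughly half the logical cores, but never fewer than one thread
    n_threads: int = ((os.cpu_count() or 2) // 2) or 1
    f16_kv: bool = True
    # left off by default: on platforms without mlock support (e.g. Windows)
    # enabling it fails silently rather than raising an error
    use_mlock: bool = False
    embedding: bool = True
    last_n_tokens_size: int = 64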

0 commit comments
