fix: tensor_split should be optional list · MobinX/llama-cpp-python@118b7f6 · GitHub

Commit 118b7f6

committed
fix: tensor_split should be optional list
1 parent 25b3494 commit 118b7f6

File tree

1 file changed: +1 addition, −1 deletion

llama_cpp/server/app.py

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ class Settings(BaseSettings):
         ge=0,
         description="The number of layers to put on the GPU. The rest will be on the CPU.",
     )
-    tensor_split: List[float] = Field(
+    tensor_split: Optional[List[float]] = Field(
         default=None,
         description="Split layers across multiple GPUs in proportion.",
     )
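
For context, here is a minimal sketch of how the corrected field sits in a pydantic settings class. The import locations assume pydantic v1 (newer pydantic moves BaseSettings into the separate pydantic-settings package), and the n_gpu_layers field name for the context lines above the fix is an assumption based on the typical llama-cpp-python server config; neither is shown in this diff. The point of the change is that annotating the field as Optional makes default=None a valid, type-consistent value.

# Sketch only, assuming pydantic v1's BaseSettings; n_gpu_layers is an assumed
# name for the field whose trailing lines appear as diff context above.
from typing import List, Optional

from pydantic import BaseSettings, Field


class Settings(BaseSettings):
    n_gpu_layers: int = Field(
        default=0,
        ge=0,
        description="The number of layers to put on the GPU. The rest will be on the CPU.",
    )
    # Optional[List[float]] is what lets default=None pass validation;
    # a bare List[float] with a None default is an inconsistent annotation.
    tensor_split: Optional[List[float]] = Field(
        default=None,
        description="Split layers across multiple GPUs in proportion.",
    )


if __name__ == "__main__":
    # With no value supplied (e.g. no TENSOR_SPLIT environment variable),
    # the field is simply None.
    print(Settings().tensor_split)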

0 commit comments