fix(server): Update `embeddings=False` by default · coderonion/llama-cpp-python@bf5e0bb
Commit bf5e0bb

fix(server): Update embeddings=False by default. Embeddings should be enabled by default for embedding models.
1 parent 117cbb2 commit bf5e0bb

File tree

1 file changed: +1 -1 lines changed

llama_cpp/server/settings.py

Lines changed: 1 addition & 1 deletion

@@ -96,7 +96,7 @@ class ModelSettings(BaseSettings):
         default=True, description="if true, use experimental mul_mat_q kernels"
     )
     logits_all: bool = Field(default=True, description="Whether to return logits.")
-    embedding: bool = Field(default=True, description="Whether to use embeddings.")
+    embedding: bool = Field(default=False, description="Whether to use embeddings.")
     offload_kqv: bool = Field(
         default=True, description="Whether to offload kqv to the GPU."
     )
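
For context on the behavioral change, here is a minimal, self-contained sketch (not the project's actual settings.py; it assumes pydantic v2 with the pydantic-settings package and reproduces only the two fields shown in the diff) of how the new default plays out: servers hosting an embedding model now have to enable embeddings explicitly instead of getting them switched on for every model.

# Minimal sketch, assuming pydantic v2 + pydantic-settings; only the fields
# from the diff above are reproduced, the rest of ModelSettings is omitted.
from pydantic import Field
from pydantic_settings import BaseSettings


class ModelSettings(BaseSettings):
    logits_all: bool = Field(default=True, description="Whether to return logits.")
    # After this commit the default is False, so embeddings are opt-in.
    embedding: bool = Field(default=False, description="Whether to use embeddings.")


if __name__ == "__main__":
    print(ModelSettings().embedding)                # False -> new default
    print(ModelSettings(embedding=True).embedding)  # True  -> opt in for embedding models

In this sketch the opt-in can also come from the environment (pydantic-settings maps the field name to an environment variable, case-insensitively), so deployments that relied on the old default=True now need to set the option explicitly.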

0 commit comments
