Fix server doc arguments (#892) · tk-master/llama-cpp-python@fb1f956 · GitHub

Commit fb1f956

Fix server doc arguments (abetlen#892)
1 parent 80f4162 commit fb1f956

File tree

1 file changed, +2 -2 lines changed


docs/server.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -45,7 +45,7 @@ You'll first need to download one of the available function calling models in GG
 Then when you run the server you'll need to also specify the `functionary-7b-v1` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --chat-format functionary
+python3 -m llama_cpp.server --model <model_path> --chat_format functionary
 ```
 
 ### Multimodal Models
@@ -61,7 +61,7 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
+python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
 ```
 
 Then you can just use the OpenAI API as normal
````
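The last context line of the diff notes that once the server is running you can use the OpenAI API as normal. As a minimal sketch of what that looks like (not part of this commit), here is the `openai` Python client pointed at a local server; the default `localhost:8000` address, the `local-model` name, and the dummy API key are all assumptions:

```python
# Minimal sketch: querying llama_cpp.server through its OpenAI-compatible API.
# Assumptions, not from this commit: the server was started with one of the
# commands above and listens on the default http://localhost:8000; the model
# name and api_key are placeholders (a local server does not validate the key).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="sk-no-key-required",
)

response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```

The `chat_format` chosen at startup (`functionary` or `llava-1-5` in the corrected commands) determines how the server templates these messages before inference.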

0 commit comments