You'll first need to download one of the available multi-modal models in GGUF format:
- [llava1.5 7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
- [llava1.5 13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)
Then when you run the server you'll also need to specify the path to the clip model used for image embedding and the `llava-1-5` chat_format:
```bash
python3 -m llama_cpp.server --model <model_path> --clip-model-path <clip_model_path> --chat-format llava-1-5
```
Then you can just use the OpenAI API as normal.
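As a sketch of what "as normal" looks like, the snippet below builds an OpenAI-style multimodal chat payload mixing an image and a text prompt. The image URL and question are placeholders for illustration, not values the server requires.

```python
import json

# OpenAI-compatible chat payload combining an image part and a text part.
# The image URL and the question are hypothetical placeholders.
payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/picture.png"},
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
}

# This body would be POSTed to the server's OpenAI-compatible endpoint,
# e.g. http://localhost:8000/v1/chat/completions.
print(json.dumps(payload, indent=2))
```

The same structure can be sent with any OpenAI-compatible client by pointing its base URL at the local server.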