Add link to bakllava gguf model · furyhawk/llama-cpp-python@8207280 · GitHub

Commit 8207280

Add link to bakllava gguf model

1 parent baeb7b3 · commit 8207280

File tree

1 file changed, +1 -0 lines changed


docs/server.md

Lines changed: 1 addition & 0 deletions
@@ -57,6 +57,7 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 
 - [llava-v1.5-7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
 - [llava-v1.5-13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)
+- [bakllava-1-7b](https://huggingface.co/mys/ggml_bakllava-1)
 
 Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
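For context, the touched line sits in the multi-modal section of docs/server.md: after downloading one of the GGUF models listed in the diff, the server needs the path to the clip model used for image embedding and the `llava-1-5` chat format. The snippet below is a minimal sketch of the equivalent programmatic usage with llama-cpp-python, assuming the `Llava15ChatHandler` chat-format API; the file names are placeholders for whichever of the listed GGUF downloads you pick.

```python
# Minimal sketch (not part of this commit): load one of the linked GGUF models
# together with its clip model and the llava-1-5 chat format via llama-cpp-python.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The clip model handles the image-embedding half of the multi-modal pipeline.
chat_handler = Llava15ChatHandler(clip_model_path="./models/mmproj-model-f16.gguf")

llm = Llama(
    model_path="./models/ggml-model-q4_k.gguf",  # e.g. a file from mys/ggml_bakllava-1
    chat_handler=chat_handler,
    n_ctx=2048,       # larger context window to leave room for the image embedding
    logits_all=True,  # the llava chat handler needs per-token logits
)
```

When launching the bundled server instead, the same two pieces of information (the clip model path and the `llava-1-5` chat format) are supplied as startup options, which is what the surrounding text in docs/server.md describes.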
0 commit comments