feat: Add Gemma3 chat handler by bot08 · Pull Request #2 · bot08/llama-cpp-python · GitHub

feat: Add Gemma3 chat handler #2


Merged · 4 commits · Apr 7, 2025
Changes from 1 commit
fix: modify the gemma3 chat template to be compatible with openai api
kossum committed Apr 4, 2025
commit 025e7fa44bfd071eb36b5641448c4e80a0b29917
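The template change below swaps the bespoke `image` content type for the `image_url` type used by the OpenAI Chat Completions API, so multimodal requests written for OpenAI-style clients can be passed straight to `create_chat_completion`. The sketch below shows the intended request shape; the model and projector file names are placeholders, and the `Gemma3ChatHandler` constructor is assumed to mirror its `Llava15ChatHandler` base (taking `clip_model_path`).

```python
# Sketch only: file names are placeholders, and the handler constructor is
# assumed to accept clip_model_path like Llava15ChatHandler does.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Gemma3ChatHandler

chat_handler = Gemma3ChatHandler(clip_model_path="mmproj-gemma3.gguf")
llm = Llama(
    model_path="gemma-3-4b-it.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                # OpenAI-style content parts; the template now matches "image_url"
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```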
17 changes: 1 addition & 16 deletions llama_cpp/llama_chat_format.py
@@ -3409,7 +3409,7 @@ class Gemma3ChatHandler(Llava15ChatHandler):
         "{{ message['content'] | trim }}"
         "{%- elif message['content'] is iterable -%}"
         "{%- for item in message['content'] -%}"
-        "{%- if item['type'] == 'image' -%}"
+        "{%- if item['type'] == 'image_url' -%}"
         "{{ '<start_of_image>' }}"
         "{%- elif item['type'] == 'text' -%}"
         "{{ item['text'] | trim }}"
@@ -3449,21 +3449,6 @@ def split_text_on_image_urls(text: str, image_urls: List[str]):
                 remaining = ""
         return split_text

-    @staticmethod
-    def get_image_urls(messages: List[llama_types.ChatCompletionRequestMessage]):
-        image_urls: List[str] = []
-        for message in messages:
-            if message["role"] == "user":
-                if message.get("content") is None:
-                    continue
-                for content in message["content"]:
-                    if isinstance(content, dict) and content.get("type") == "image":
-                        if isinstance(content.get("image"), dict) and isinstance(content["image"].get("url"), str):
-                            image_urls.append(content["image"]["url"])
-                        elif isinstance(content.get("url"), str):
-                            image_urls.append(content["url"])
-        return image_urls
-
     def eval_image(self, llama: llama.Llama, image_url: str):
         import llama_cpp

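With the Gemma3-specific `get_image_urls` override removed (it matched a non-standard `{"type": "image", "image": {"url": ...}}` shape), the handler presumably falls back to the helper it inherits from `Llava15ChatHandler`, which collects OpenAI-style `image_url` parts. A rough sketch of that inherited behaviour, not a verbatim copy of the upstream method:

```python
# Rough sketch of the inherited extraction logic (assumed, not copied verbatim):
# only OpenAI-style "image_url" parts are collected after this change.
from typing import List

from llama_cpp import llama_types


def get_image_urls(messages: List[llama_types.ChatCompletionRequestMessage]) -> List[str]:
    image_urls: List[str] = []
    for message in messages:
        if message["role"] != "user" or message.get("content") is None:
            continue
        for content in message["content"]:
            if isinstance(content, dict) and content.get("type") == "image_url":
                image_url = content.get("image_url")
                if isinstance(image_url, str):
                    # {"type": "image_url", "image_url": "https://..."}
                    image_urls.append(image_url)
                elif isinstance(image_url, dict) and isinstance(image_url.get("url"), str):
                    # {"type": "image_url", "image_url": {"url": "https://..."}}
                    image_urls.append(image_url["url"])
    return image_urls
```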
3 changes: 2 additions & 1 deletion llama_cpp/llava_cpp.py
@@ -245,7 +245,8 @@ def clip_image_batch_encode(
     ctx: clip_ctx_p,
     n_threads: c_int,
     imgs: "_Pointer[clip_image_f32_batch]",
-    vec: c_void_p
+    vec: c_void_p,
+    /,
 ) -> bool:
     ...

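The trailing `/,` added to the `clip_image_batch_encode` stub marks every parameter as positional-only (PEP 570), presumably so the Python signature matches the positional calling convention of the ctypes binding. A minimal, standalone illustration of the marker (not code from llava_cpp.py):

```python
# Minimal PEP 570 illustration; unrelated to the actual llava_cpp bindings.
def scale(value: float, factor: float, /) -> float:
    # Everything before the "/" must be passed positionally.
    return value * factor


scale(2.0, 3.0)                 # OK
# scale(value=2.0, factor=3.0)  # TypeError: positional-only parameters
```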