Better chat format for Qwen2.5-VL by alcoftTAO · Pull Request #2040 · abetlen/llama-cpp-python · GitHub

Conversation

@alcoftTAO
Contributor
# Jinja2 ChatML-style template: a hardcoded system prompt, then the text and
# image_url parts of each user message, then the assistant header.
CHAT_FORMAT = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}"
    "<|im_start|>user\n"
    "{% if message['content'] is string %}"
    "{{ message['content'] }}"
    "{% else %}"
    "{% for content in message['content'] %}"
    "{% if content['type'] == 'text' %}"
    "{{ content['text'] }}"
    "{% elif content['type'] == 'image_url' %}"
    "{% if content.image_url is string %}"
    "{{ content.image_url }}"
    "{% else %}"
    "{{ content.image_url.url }}"
    "{% endif %}"
    "{% endif %}"
    "{% endfor %}"
    "{% endif %}"
    "<|im_end|>\n"
    "{% endif %}"
    "{% endfor %}"
    "<|im_start|>assistant\n"
)

This is the current chat format for Qwen2.5-VL. It works, but the model sometimes prints a 'random' first token, for example: `: In this image, I see...` (the leading `:` is the spurious token).
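To see the raw prompt this template produces, it can be rendered with Jinja2 directly (the handler also does its own image handling on top of the rendered text; the messages below are made up for illustration):

from jinja2 import Template

# Made-up example conversation; CHAT_FORMAT is the template string quoted above.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
        ],
    }
]

print(Template(CHAT_FORMAT).render(messages=messages))
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# What is in this image?data:image/png;base64,...<|im_end|>
# <|im_start|>assistant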

The new chat format in this PR fixes that and also adds support for custom system messages; a sketch of the system-message handling is shown below.
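The merged template isn't quoted in this description, so the following is only a hypothetical sketch of what forwarding a caller-supplied system message could look like; the actual diff in this PR is the authority:

# Hypothetical sketch: use the caller-supplied system message when present,
# falling back to the old hardcoded default otherwise.
CHAT_FORMAT = (
    "{% if messages and messages[0]['role'] == 'system' %}"
    "<|im_start|>system\n"
    "{{ messages[0]['content'] }}<|im_end|>\n"
    "{% else %}"
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "{% endif %}"
    # ... per-message user handling and the assistant header as above ...
)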

@abetlen merged commit c8579d7 into abetlen:main on Jul 15, 2025