### Prerequisites

- [x] I am running the latest code. Mention the version if possible as well.
- [x] I carefully followed the [README.md](https://github.com/ggml-org/llama.cpp/blob/master/README.md).
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the [Discussions](https://github.com/ggml-org/llama.cpp/discussions), and have a new and useful enhancement to share.

### Feature Description

Converting Qwen2.5-VL-7B-Instruct fails because the architecture is not recognized:

```
$ ./convert_hf_to_gguf.py /data/models/Qwen2.5-VL-7B-Instruct --outtype auto --verbose --dry-run
INFO:hf-to-gguf:Loading model: Qwen2.5-VL-7B-Instruct
ERROR:hf-to-gguf:Model Qwen2_5_VLForConditionalGeneration is not supported
```

The model (git lfs): https://cnb.cool/ai-models/Qwen/Qwen2.5-VL-7B-Instruct.git

### Motivation

The more model types supported, the better.

### Possible Implementation

_No response_
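For context on the error above: `convert_hf_to_gguf.py` dispatches on the Hugging Face `architectures` string via a registry of converter classes, and the error fires when no class is registered for that name. The sketch below is a minimal, self-contained illustration of that registry pattern only; the names (`register`, `find_converter`, `Qwen25VLModel`) are illustrative assumptions, not the script's real API.

```python
# Illustrative sketch of a decorator-based architecture registry
# (hypothetical names; not the actual convert_hf_to_gguf.py API).
_registry: dict[str, type] = {}

def register(*arch_names):
    """Register a converter class under one or more HF architecture names."""
    def wrap(cls):
        for name in arch_names:
            _registry[name] = cls
        return cls
    return wrap

@register("Qwen2_5_VLForConditionalGeneration")
class Qwen25VLModel:
    """Placeholder converter for the Qwen2.5-VL architecture."""
    pass

def find_converter(arch: str) -> type:
    """Look up the converter class for an architecture name."""
    try:
        return _registry[arch]
    except KeyError:
        # Mirrors the reported failure mode for unregistered architectures.
        raise NotImplementedError(f"Model {arch} is not supported")
```

Under this reading, supporting the model would roughly mean adding a converter class registered under `Qwen2_5_VLForConditionalGeneration` that maps its tensors and hyperparameters to GGUF.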