Is your feature request related to a problem? Please describe.
LiquidAI provides LFM2-VL models (450M, 1.6B, 3B) in GGUF format (Q4_0, Q8_0, and F16 quantizations), and they are already supported in llama.cpp.
It would be great to make them work in Python via llama-cpp-python, i.e. to implement a multi-modal chat handler for them:
https://llama-cpp-python.readthedocs.io/en/latest/#multi-modal-models
Describe the solution you'd like
Support for LFM2-VL models: a chat handler that loads the vision encoder's mmproj file.
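The existing multi-modal handlers in llama-cpp-python (e.g. `Llava15ChatHandler`) pair a model-specific chat template with a `clip_model_path` pointing at the mmproj GGUF. A minimal sketch of how an LFM2-VL handler could follow that pattern — the base class, template, and file name below are illustrative assumptions, not the library's actual API:

```python
# Hypothetical sketch of the handler pattern this request asks for.
# llama-cpp-python's real multi-modal handlers take a clip_model_path
# pointing at the mmproj GGUF; the base class and LFM2-VL template
# here are stand-ins for illustration only.

class MultimodalChatHandler:
    """Stand-in for a llama-cpp-python style multi-modal chat handler."""

    CHAT_FORMAT: str = ""  # subclasses supply the model's chat template

    def __init__(self, clip_model_path: str):
        # Path to the mmproj GGUF containing the vision encoder/projector.
        self.clip_model_path = clip_model_path


class LFM2VLChatHandler(MultimodalChatHandler):
    # Placeholder template; the real one would mirror the chat-template
    # metadata shipped in the LFM2-VL GGUF files.
    CHAT_FORMAT = "{% for message in messages %}...{% endfor %}"


# Intended usage, mirroring how existing handlers are constructed
# (the mmproj file name is hypothetical):
handler = LFM2VLChatHandler(clip_model_path="mmproj-LFM2-VL-450M-F16.gguf")
print(handler.clip_model_path)
```

In the real library, the handler would then be passed to `Llama(model_path=..., chat_handler=handler, ...)`, as documented for the existing multi-modal models.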