pyright-based changes for tools/mtmd/legacy-models/convert_image_encoder_to_gguf.py
Fix torch 2.5.1 / numpy 2.x compatibility in convert_image_encoder_to_gguf.py
- Updated Tensor-to-array conversions to use `np.asarray(..., dtype=...)`, following the NumPy 2.x migration rules, to avoid the copy error on float16 data (see the first sketch after this list).
- Used explicit typing and `cast(...)` to guide Pyright/Pylance under torch 2.5.1 (see the second sketch after this list):
  - Annotated `model` as `PreTrainedModel`.
  - Re-cast `model.vision_model` to `CLIPVisionTransformer` to safely access `.encoder.layers`.
  - Replaced slice assignment with `__init__` to reset the `ModuleList` contents.
- Verified compatibility by converting `openai/clip-vit-base-patch32` using `--clip-model-is-openclip`.
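A minimal sketch of the conversion pattern described in the first bullet. The helper name and the surrounding variables are illustrative, not the script's actual code; the point is that `np.asarray` only copies when necessary, so it avoids the copy error that the stricter NumPy 2.x semantics raise for the old `np.array(..., copy=False)` style:

```python
import numpy as np
import torch

def to_f16_ndarray(t: torch.Tensor) -> np.ndarray:
    """Convert a torch tensor to a float16 NumPy array under NumPy 2.x rules."""
    # np.asarray copies only when a copy is actually required, avoiding the
    # copy-related error raised by np.array(..., copy=False) in NumPy 2.x.
    return np.asarray(t.detach().cpu().numpy(), dtype=np.float16)

data = to_f16_ndarray(torch.randn(3, 3))
print(data.dtype)  # float16
```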
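And a sketch of the typing/cast pattern from the second bullet. The class names come from `transformers` (CLIP), but the load call and the final-block truncation are assumptions used purely for illustration, not the script's exact logic:

```python
from typing import cast

from transformers import CLIPVisionModel, PreTrainedModel
from transformers.models.clip.modeling_clip import CLIPVisionTransformer

# Annotate the loaded model so Pyright/Pylance sees a concrete base type.
model: PreTrainedModel = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")

# Re-cast vision_model so `.encoder.layers` resolves under strict checking;
# at runtime the cast is a no-op.
vision_model = cast(CLIPVisionTransformer, model.vision_model)
layers = vision_model.encoder.layers

# Reset the ModuleList contents via __init__ instead of slice assignment,
# which the torch 2.5.1 stubs reject. Dropping the final block here is just
# an example of truncating the layer list.
layers.__init__(list(layers)[:-1])
```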