Eval bug: GGUF Conversion from LLaVA 1.6 (LLaVA NeXT) doesn't work · Issue #13593 · ggml-org/llama.cpp
Open
@eagle705

Description

Name and Version

This tutorial doesn't work

This line needs to be fixed (see the diagnostic sketch after the command):

  • python ./examples/convert_legacy_llama.py ../llava-v1.6-vicuna-7b/ --skip-unknown
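A hedged diagnostic sketch, not part of llama.cpp: it lists the checkpoint keys that the text-only `convert_legacy_llama.py` converter would not recognize and that `--skip-unknown` is meant to skip. The prefix names (`model.image_newline`, `model.mm_projector`, `model.vision_tower`) are assumptions based on the LLaVA-NeXT checkpoint layout, not something confirmed by this issue.

```python
# Hypothetical helper: report multimodal tensors still present in the HF
# checkpoint directory before running convert_legacy_llama.py.
import glob
import os
import sys

MODEL_DIR = sys.argv[1] if len(sys.argv) > 1 else "../llava-v1.6-vicuna-7b"
# Assumed LLaVA-NeXT key prefixes for the vision tower, projector and newline tensor.
MULTIMODAL_PREFIXES = ("model.image_newline", "model.mm_projector", "model.vision_tower")

def iter_keys(model_dir):
    # Safetensors shards: only headers are read, no weights are materialized.
    for path in sorted(glob.glob(os.path.join(model_dir, "*.safetensors"))):
        from safetensors import safe_open
        with safe_open(path, framework="pt") as f:
            yield from f.keys()
    # Legacy pytorch_model-*.bin shards, if the checkpoint uses them instead.
    for path in sorted(glob.glob(os.path.join(model_dir, "pytorch_model*.bin"))):
        import torch
        yield from torch.load(path, map_location="cpu").keys()

leftover = sorted(k for k in iter_keys(MODEL_DIR) if k.startswith(MULTIMODAL_PREFIXES))
if leftover:
    print("tensors the text-only converter will not recognize:")
    print("\n".join(leftover))
else:
    print("no multimodal tensors found; checkpoint looks ready for the text-only converter")
```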

Operating systems

Linux

GGML backends

CUDA

Hardware

A100

Models

No response

Problem description & steps to reproduce

https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal/llava.md#llava-16-gguf-conversion

Following the steps in this section does not work.

First Bad Commit

No response

Relevant log output

Most errors are mismatched-key errors, for example on the image_newline tensor.
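No full log was attached, so as a hedged way to locate where a mismatched key such as image_newline ends up, the sketch below dumps the tensor names stored in a produced GGUF file using the gguf Python package (gguf-py, which ships with llama.cpp). Only GGUFReader and its tensors attribute are assumed here; the file path is whatever GGUF the conversion produced.

```python
# Dump tensor names and shapes from a GGUF file to check whether the
# projector/newline tensors landed in the expected file.
import sys
from gguf import GGUFReader  # gguf-py from llama.cpp (pip install gguf)

reader = GGUFReader(sys.argv[1])
for tensor in reader.tensors:
    print(tensor.name, list(tensor.shape))
```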
