Please also support the official Falcon-RW-1B and Falcon-RW-7B model variants #2868
Closed as not planned
@maddes8cht

Description


Falcon comes not only in the 7B and 40B versions, but also in the two RefinedWeb variants Falcon-RW-1B and Falcon-RW-7B.
These are official versions, as can be seen at https://huggingface.co/tiiuae.

I have successfully converted and quantized the 7B models with convert-falcon-hf-to-gguf.py, but the RefinedWeb variants abort with the following message:

python convert-falcon-hf-to-gguf.py
gguf: loading model falcon-rw-1b
Model architecture not supported: FalconForCausalLM
Basename: tiiuae-falcon-rw-1b

The message for the RW 7B model is identical except for the filename.
Do you want to support these models as well, or are there particular difficulties?
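
Presumably the message comes from an architecture gate near the top of the conversion script. Here is a minimal sketch of such a check, assuming the script reads the model's config.json and compares its "architectures" entry against the one name it currently accepts (the expected name "RWForCausalLM" is my assumption, based on the older Falcon repositories):

```python
import json
import sys
from pathlib import Path

# Hypothetical reconstruction of the converter's architecture check.
model_dir = Path(sys.argv[1])  # e.g. "falcon-rw-1b"
hparams = json.loads((model_dir / "config.json").read_text())

# The RW checkpoints declare "FalconForCausalLM" here, so a strict
# comparison against the older name rejects them with the message above.
if hparams["architectures"][0] != "RWForCausalLM":
    print("Model architecture not supported: " + hparams["architectures"][0])
    sys.exit(1)
```

If that name check is the only obstacle, extending the list of accepted architectures might be enough; if the RW variants also differ in tensor layout or attention configuration, more work would be needed.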

A Falcon 1.3B model would be an incredibly fast model for small and easy tasks. It would be great to have this model.
