Perplexity script for non GGUF quantization #13015
@JohnConnor123

Can I use the perplexity script for quantizations like bnb, GPTQ, AWQ, or exllamav2/v3, or does it work correctly only on GGUF / non-quantized models? A rough sketch of what I mean is below.
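For context, here is a hypothetical sketch of how perplexity could be measured outside llama.cpp for one of these formats (bitsandbytes 4-bit via Hugging Face transformers). The model id, dataset file name, and window size are placeholders, and it assumes torch, transformers, and bitsandbytes are installed with a CUDA GPU available; this is not part of llama.cpp.

```python
# Hypothetical sketch: sliding-window perplexity for a bitsandbytes-quantized HF model.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # bnb 4-bit quantization
    device_map="auto",
)
model.eval()

# Placeholder evaluation text; any plain-text corpus works.
text = open("wiki.test.raw").read()
input_ids = tokenizer(text, return_tensors="pt").input_ids[0]

window = 512          # tokens per evaluation chunk
nll_sum, token_count = 0.0, 0

with torch.no_grad():
    for start in range(0, input_ids.size(0) - 1, window):
        chunk = input_ids[start : start + window].unsqueeze(0).to(model.device)
        # The causal-LM loss is the mean negative log-likelihood over the
        # chunk's predicted tokens (all tokens except the first).
        loss = model(chunk, labels=chunk).loss
        n_predicted = chunk.size(1) - 1
        nll_sum += loss.item() * n_predicted
        token_count += n_predicted

print(f"perplexity = {math.exp(nll_sum / token_count):.4f}")
```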
