Eval bug: does llama.cpp support Intel AMX instruction? how to enable it · Issue #12003 · ggml-org/llama.cpp · GitHub
Description

@montagetao

Name and Version

llama-cli

Operating systems

Linux

GGML backends

AMX

Hardware

XEON 8452Y + NV A40

Models

No response

Problem description & steps to reproduce

As in the title: does llama.cpp support the Intel AMX instruction set, and if so, how do I enable it when building/running?
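Before looking at build flags, it is worth confirming that the kernel actually exposes AMX on this machine. The Xeon 8452Y is a 4th-gen Xeon Scalable (Sapphire Rapids) part with AMX hardware, but the feature only shows up in `/proc/cpuinfo` on Linux 5.16 or newer. A minimal sketch (the `amx_tile`/`amx_int8`/`amx_bf16` flag names are the standard Linux cpuinfo flags; the helper name `amx_flags` is made up for illustration):

```python
def amx_flags(cpuinfo_text: str) -> set:
    """Collect the amx_* CPU flags (amx_tile, amx_int8, amx_bf16)
    from the text of /proc/cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return {f for f in flags if f.startswith("amx")}
    return set()

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        found = amx_flags(f.read())
    if found:
        print("AMX exposed by kernel:", " ".join(sorted(found)))
    else:
        print("no amx_* flags found; AMX unavailable or kernel older than 5.16")
```

If the flags are missing, no llama.cpp build option will help; if they are present, the next step is checking which GGML backend the binary was compiled with (the issue form above already lists an AMX backend choice).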

First Bad Commit

No response

Relevant log output

None provided; the question is stated in the title.
