Package llama.cpp · GitHub

llama.cpp · server-cuda-b5974 (Public, Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:server-cuda-b5974
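Once pulled, the image can be started as a llama.cpp inference server. A minimal sketch, assuming an NVIDIA GPU with the NVIDIA Container Toolkit installed on the host and a GGUF model file at a hypothetical path `/path/to/models/model.gguf` (both are assumptions, not part of this page):

```shell
# Run the server-cuda image pulled above.
# --gpus all     exposes host GPUs to the container (requires NVIDIA Container Toolkit)
# -v ...         mounts a host directory of GGUF models (host path is an assumption)
# -p 8080:8080   publishes the server port to the host
docker run --gpus all \
  -v /path/to/models:/models \
  -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-cuda-b5974 \
  -m /models/model.gguf \
  --host 0.0.0.0 --port 8080 \
  -ngl 99   # offload all model layers to the GPU
```

With these (hypothetical) paths and ports, the server would then accept HTTP requests on `http://localhost:8080`.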

Recent tagged image versions

  • Published about 6 hours ago · Digest sha256:6b552422be5eebabffc2aefadb5f1ccad515be0e76fcb61c13eb66ed90d147ae · 92 version downloads
  • Published about 6 hours ago · Digest sha256:b6c54d1e3dd7f8bae01f07083ba43a1ed15952b67feb8c25e63406a7f1f3cbf2 · 2 version downloads
  • Published about 6 hours ago · Digest sha256:62bb7a61645331da9e5d89348b9a522bf0d60584deb9a35b538ad9bfba2e7799 · 17 version downloads
  • Published about 7 hours ago · Digest sha256:ce19ef1ad1033ba4e3f5ba5628598577b4d05608ea4840e1220262b822046681 · 7 version downloads
  • Published about 7 hours ago · Digest sha256:c84800c9ace9d389c091519e3f00015da2a89b49ffcd3ab9acec936d7044ed41 · 0 version downloads


Details

Last published: 6 hours ago
Discussions: 2.42K
Issues: 751
Total downloads: 319K


