supports running on CPU for GGML_USE_CUBLAS=ON build by wsxiaoys · Pull Request #3946 · ggml-org/llama.cpp · GitHub

supports running on CPU for GGML_USE_CUBLAS=ON build#3946

Merged
ggerganov merged 3 commits into ggml-org:master from TabbyML:support-cuda-separate-init on Nov 7, 2023
Commits

Commits on Nov 6, 2023
