supports running on CPU for GGML_USE_CUBLAS=ON build by wsxiaoys · Pull Request #3946 · ggml-org/llama.cpp

Merged 3 commits on Nov 7, 2023

Changes from 1 commit
doc: add comments to ggml_cublas_loaded()
wsxiaoys committed Nov 6, 2023
commit fe381b060b9f88e8901a8b54ff9ab8967b52be76
3 changes: 3 additions & 0 deletions ggml-cuda.h
@@ -17,7 +17,10 @@ extern "C" {

#define GGML_CUDA_MAX_DEVICES 16

// Always succeeds. To check whether CUDA is actually loaded, use `ggml_cublas_loaded`.
GGML_API void ggml_init_cublas(void);

// Returns `true` if there are available CUDA devices and cublas loads successfully; otherwise, it returns `false`.
GGML_API bool ggml_cublas_loaded(void);
Member

Add a comment that we differentiate between the "initialized" and "loaded" states. The former means we have called ggml_init_cublas(), but it is not guaranteed that a CUDA device was available, in which case ggml_cublas_loaded() is false.

Contributor Author

done


GGML_API void * ggml_cuda_host_malloc(size_t size);