Merge pull request #501 from a10y/patch-1 · Keeyahto/llama-cpp-python@3687262

Commit 3687262

Merge pull request abetlen#501 from a10y/patch-1
Update install instructions for Linux OpenBLAS
2 parents a05cfaf + b6b2071 commit 3687262


README.md

Lines changed: 2 additions & 2 deletions
@@ -47,10 +47,10 @@ Otherwise, while installing it will build the llama.ccp x86 version which will b
 `llama.cpp` supports multiple BLAS backends for faster processing.
 Use the `FORCE_CMAKE=1` environment variable to force the use of `cmake` and install the pip package for the desired BLAS backend.
 
-To install with OpenBLAS, set the `LLAMA_OPENBLAS=1` environment variable before installing:
+To install with OpenBLAS, set the `LLAMA_BLAS and LLAMA_BLAS_VENDOR` environment variables before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python
 
 ```
 
 To install with cuBLAS, set the `LLAMA_CUBLAS=1` environment variable before installing:
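
For readers following the updated instructions, a minimal end-to-end sketch on a Debian/Ubuntu machine might look like the following; the `libopenblas-dev` package name and the import-time sanity check are assumptions for illustration and are not part of this diff:

```bash
# Assumed prerequisite: an OpenBLAS development package (the name varies by distro).
sudo apt-get install -y libopenblas-dev

# Build and install llama-cpp-python against OpenBLAS, as described in the updated README.
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python

# Optional sanity check: the package should import cleanly after the build.
python -c "import llama_cpp; print(llama_cpp.__version__)"
```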
