Add ROCm / AMD instructions to docs · Tatrabbit/llama-cpp-python@895f84f
Commit 895f84f

Add ROCm / AMD instructions to docs
1 parent 3f8bc41 commit 895f84f

File tree: 1 file changed

README.md: 8 additions & 2 deletions
````diff
@@ -21,7 +21,7 @@ Documentation is available at [https://llama-cpp-python.readthedocs.io/en/latest
 > Starting with version 0.1.79 the model format has changed from `ggmlv3` to `gguf`. Old model files can be converted using the `convert-llama-ggmlv3-to-gguf.py` script in [`llama.cpp`](https://github.com/ggerganov/llama.cpp)
 
 
-## Installation from PyPI (recommended)
+## Installation from PyPI
 
 Install from PyPI (requires a c compiler):
 
````
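The hunk cuts off just before the install command itself. For readers of this excerpt, the baseline install that the renamed heading introduces is the usual one-liner from the surrounding README, shown here for context rather than as part of the diff:

```bash
# Plain PyPI install; this builds llama.cpp from source, so a C compiler
# must be available on the machine.
pip install llama-cpp-python
```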

````diff
@@ -45,7 +45,7 @@ bash Miniforge3-MacOSX-arm64.sh
 ```
 Otherwise, while installing it will build the llama.ccp x86 version which will be 10x slower on Apple Silicon (M1) Mac.
 
-### Installation with OpenBLAS / cuBLAS / CLBlast / Metal
+### Installation with Hardware Acceleration
 
 `llama.cpp` supports multiple BLAS backends for faster processing.
 Use the `FORCE_CMAKE=1` environment variable to force the use of `cmake` and install the pip package for the desired BLAS backend.
````
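The two variables this hunk describes combine into a single install command per backend. A sketch of the pattern, using the OpenBLAS flag (`LLAMA_OPENBLAS`) that the same README documents elsewhere; the ROCm addition below follows the identical shape:

```bash
# Pattern: pass the backend flag through CMAKE_ARGS and force a cmake build.
CMAKE_ARGS="-DLLAMA_OPENBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```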
````diff
@@ -74,6 +74,12 @@ To install with Metal (MPS), set the `LLAMA_METAL=on` environment variable befor
 CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
 ```
 
+To install with hipBLAS / ROCm support for AMD cards, set the `LLAMA_HIPBLAS=on` environment variable before installing:
+
+```bash
+CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+```
+
 #### Windows remarks
 
 To set the variables `CMAKE_ARGS` and `FORCE_CMAKE` in PowerShell, follow the next steps (Example using, OpenBLAS):
````
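Beyond the one-liner added in this commit, it can help to confirm the ROCm stack is actually visible and to force pip to rebuild rather than reuse a cached wheel. A minimal sketch, assuming `rocminfo` (shipped with ROCm) is on `PATH`; the extra pip flags are standard options, not part of this commit:

```bash
# Check that ROCm can see an AMD GPU (agents are reported with gfx* names).
rocminfo | grep -i gfx

# Rebuild from source so the hipBLAS flag takes effect; --force-reinstall and
# --no-cache-dir ensure a previously built wheel is not reused.
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 \
  pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python
```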

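The hunk also ends before the PowerShell steps it refers to. Those steps are not part of this commit, but the pattern the README describes is setting the same two variables via `$env:` before calling pip; the following is a hedged reconstruction, not the verbatim README text:

```powershell
# Reconstruction of the PowerShell pattern (OpenBLAS example, per the README):
# set both variables in the session, then run pip as usual.
$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on"
$env:FORCE_CMAKE = 1
pip install llama-cpp-python
```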
0 commit comments