Merge pull request #452 from audreyfeldroy/update-macos-metal-gpu-step-4 · MobinX/llama-cpp-python@7952ca5 · GitHub
Commit 7952ca5

Merge pull request abetlen#452 from audreyfeldroy/update-macos-metal-gpu-step-4

Update macOS Metal GPU step 4

2 parents b8e0bed + d270ec2 · commit 7952ca5

File tree: 1 file changed (+4, −4 lines)
docs/install/macos.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -26,19 +26,19 @@ conda create -n llama python=3.9.16
 conda activate llama
 ```
 
-**(4) Install the LATEST llama-cpp-python.. which, as of just today, happily supports MacOS Metal GPU**
+**(4) Install the LATEST llama-cpp-python...which happily supports MacOS Metal GPU as of version 0.1.62**
 
 *(you needed xcode installed in order pip to build/compile the C++ code)*
 
 ```
 pip uninstall llama-cpp-python -y
 CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
 pip install 'llama-cpp-python[server]'
 
-# you should now have llama-cpp-python v0.1.62 installed
-llama-cpp-python         0.1.62
+# you should now have llama-cpp-python v0.1.62 or higher installed
+llama-cpp-python         0.1.68
 
 ```
 
-**(4) Download a v3 ggml model**
+**(5) Download a v3 ggml model**
 - **ggmlv3**
 - file name ends with **q4_0.bin** - indicating it is 4bit quantized, with quantisation method 0
 
````
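The substantive change in this diff is that the docs now state a minimum version ("0.1.62 or higher") rather than an exact pinned version. As a rough illustration (not part of the commit, and a naive sketch that ignores pre-release tags), here is how such a dotted-version minimum check could be expressed with only the Python standard library:

```python
# Hypothetical sketch: compare dotted version strings numerically
# against the documented minimum of 0.1.62. Real tooling should use
# packaging.version.parse, which also handles pre-release suffixes.

def parse_version(v: str) -> tuple:
    """Split a dotted version string like '0.1.68' into an int tuple."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, minimum: str = "0.1.62") -> bool:
    """True if the installed version is the documented minimum or newer."""
    return parse_version(installed) >= parse_version(minimum)

print(meets_minimum("0.1.68"))  # True -- the version shown in the updated diff
print(meets_minimum("0.1.61"))  # False -- older than the documented minimum
```

Tuple comparison works here because Python compares tuples element by element, so `(0, 1, 68) >= (0, 1, 62)` behaves like a numeric version comparison.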

0 commit comments

Comments
 (0)
0