conda create -n llama python=3.9.16
conda activate llama
```
**(4) Install the LATEST llama-cpp-python, which supports the macOS Metal GPU as of version 0.1.62**
*(you need Xcode installed so that pip can build/compile the C++ code)*
```
pip uninstall llama-cpp-python -y
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
pip install 'llama-cpp-python[server]'

# you should now have llama-cpp-python v0.1.62 or higher installed
llama-cpp-python 0.1.68
```
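Since Metal support landed in 0.1.62, a quick sanity check is to compare the installed version string against that threshold. A minimal sketch (the `supports_metal` helper is hypothetical, not part of llama-cpp-python):

```python
# Hypothetical helper: check whether a llama-cpp-python version string
# is at least 0.1.62, the first release with Metal support.
def supports_metal(version: str) -> bool:
    return tuple(int(p) for p in version.split(".")) >= (0, 1, 62)

print(supports_metal("0.1.68"))  # True
print(supports_metal("0.1.61"))  # False
```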
**(5) Download a v3 ggml model**
- **ggmlv3**
- the file name ends with **q4_0.bin**, indicating it is 4-bit quantized with quantisation method 0
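The quantization details can be read straight off the file name. A small sketch of parsing the `q<bits>_<method>` suffix (the `parse_quant` helper is illustrative, and the example file names are made up):

```python
import re

# Illustrative helper: extract quantization bits and method from a ggml
# model file name ending in q<bits>_<method>.bin (e.g. q4_0.bin).
def parse_quant(filename: str):
    m = re.search(r"q(\d+)_(\d+)\.bin$", filename)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_quant("llama-7b.ggmlv3.q4_0.bin"))  # (4, 0)
print(parse_quant("model.bin"))                 # None
```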