1 parent b906596 commit 8c933b7
README.md
@@ -680,7 +680,7 @@ python3 -m pip install -r requirements.txt
 python3 convert.py models/mymodel/
 
 # [Optional] for models using BPE tokenizers
-python convert.py models/mymodel/ --vocabtype bpe
+python convert.py models/mymodel/ --vocab-type bpe
 
 # quantize the model to 4-bits (using Q4_K_M method)
 ./quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M