docs: Update README · sunnykim1206/llama-cpp-python@0f8cad6

Commit 0f8cad6

docs: Update README
1 parent 045cc12 commit 0f8cad6

1 file changed: +12 -8 lines

README.md

Lines changed: 12 additions & 8 deletions
````diff
@@ -84,6 +84,7 @@ llama-cpp-python -C cmake.args="-DLLAMA_BLAS=ON;-DLLAMA_BLAS_VENDOR=OpenBLAS"
 
 </details>
 
+https://github.com/abetlen/llama-cpp-python/releases/download/${VERSION}/
 
 ### Supported Backends
 
````
````diff
@@ -268,9 +269,9 @@ Below is a short example demonstrating how to use the high-level API for basic text completion:
 
 Text completion is available through the [`__call__`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.__call__) and [`create_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_completion) methods of the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
 
-## Pulling models from Hugging Face
+### Pulling models from Hugging Face Hub
 
-You can pull `Llama` models from Hugging Face using the `from_pretrained` method.
+You can download `Llama` models in `gguf` format directly from Hugging Face using the [`from_pretrained`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.from_pretrained) method.
 You'll need to install the `huggingface-hub` package to use this feature (`pip install huggingface-hub`).
 
 ```python
````
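For context, a minimal sketch of the basic text-completion call the hunk above refers to; the model path, prompt, and sampling parameters are illustrative assumptions, not values from this commit:

```python
# Hypothetical usage sketch: basic text completion via Llama.__call__.
# model_path, prompt, and sampling parameters are assumed for illustration.
from llama_cpp import Llama

llm = Llama(model_path="path/to/model.gguf")  # assumed local gguf file
output = llm(
    "Q: Name the planets in the solar system? A: ",  # prompt
    max_tokens=32,       # cap the number of generated tokens
    stop=["Q:", "\n"],   # stop generation at these sequences
    echo=True,           # include the prompt in the returned text
)
print(output["choices"][0]["text"])
```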
````diff
@@ -281,7 +282,7 @@ llm = Llama.from_pretrained(
 )
 ```
 
-By default the `from_pretrained` method will download the model to the huggingface cache directory so you can manage installed model files with the `huggingface-cli` tool.
+By default [`from_pretrained`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.from_pretrained) will download the model to the huggingface cache directory; you can then manage installed model files with the [`huggingface-cli`](https://huggingface.co/docs/huggingface_hub/en/guides/cli) tool.
 
 ### Chat Completion
 
````
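For context, a minimal sketch of the `from_pretrained` flow described above; the repo id and filename glob are illustrative assumptions, not values from this commit:

```python
# Hypothetical usage sketch: download a gguf model from Hugging Face Hub.
# Requires huggingface-hub (pip install huggingface-hub); repo_id and
# filename are assumed examples, any gguf repo on the Hub works.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2-0.5B-Instruct-GGUF",  # assumed example repository
    filename="*q8_0.gguf",  # glob pattern selecting one quantized file
    verbose=False,
)
```

Because the file lands in the standard Hugging Face cache, commands like `huggingface-cli scan-cache` can then inspect it, as the updated README text notes.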

````diff
@@ -308,13 +309,16 @@ Note that the `chat_format` option must be set for the particular model you are using.
 
 Chat completion is available through the [`create_chat_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion) method of the [`Llama`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama) class.
 
+For OpenAI API v1 compatibility, you can use the [`create_chat_completion_openai_v1`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion_openai_v1) method, which will return pydantic models instead of dicts.
+
+
 ### JSON and JSON Schema Mode
 
-If you want to constrain chat responses to only valid JSON or a specific JSON Schema you can use the `response_format` argument to the `create_chat_completion` method.
+To constrain chat responses to only valid JSON or a specific JSON Schema, use the `response_format` argument in [`create_chat_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion).
 
 #### JSON Mode
 
-The following example will constrain the response to be valid JSON.
+The following example will constrain the response to valid JSON strings only.
 
 ```python
 >>> from llama_cpp import Llama
````
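For context, a minimal sketch of JSON mode as the new text describes it; the model path and `chat_format` are illustrative assumptions:

```python
# Hypothetical usage sketch: constrain chat output to valid JSON via
# response_format. model_path and chat_format are assumed for illustration.
from llama_cpp import Llama

llm = Llama(model_path="path/to/model.gguf", chat_format="chatml")
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant that outputs JSON."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
    response_format={"type": "json_object"},  # restrict output to valid JSON
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

If OpenAI-style objects are preferred over dicts, the same arguments should work with `create_chat_completion_openai_v1`, which the added line above says returns pydantic models instead.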
````diff
@@ -336,7 +340,7 @@ The following example will constrain the response to be valid JSON.
 
 #### JSON Schema Mode
 
-To constrain the response to a specific JSON Schema, you can use the `schema` property of the `response_format` argument.
+To constrain the response further to a specific JSON Schema, add the schema to the `schema` property of the `response_format` argument.
 
 ```python
 >>> from llama_cpp import Llama
````
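And a matching sketch for JSON Schema mode, again with an assumed schema and model path:

```python
# Hypothetical usage sketch: constrain output to a specific JSON Schema by
# adding it under the "schema" key of response_format. Schema is assumed.
from llama_cpp import Llama

llm = Llama(model_path="path/to/model.gguf", chat_format="chatml")
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant that outputs JSON."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
    response_format={
        "type": "json_object",
        "schema": {  # only responses matching this schema are generated
            "type": "object",
            "properties": {"team_name": {"type": "string"}},
            "required": ["team_name"],
        },
    },
    temperature=0.7,
)
```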
````diff
@@ -471,7 +475,7 @@ llama = Llama(
 
 ### Embeddings
 
-`llama-cpp-python` supports generating embeddings from the text.
+To generate text embeddings, use [`create_embedding`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_embedding).
 
 ```python
 import llama_cpp
````
````diff
@@ -480,7 +484,7 @@ llm = llama_cpp.Llama(model_path="path/to/model.gguf", embedding=True)
 
 embeddings = llm.create_embedding("Hello, world!")
 
-# or batched
+# or create multiple embeddings at once
 
 embeddings = llm.create_embedding(["Hello, world!", "Goodbye, world!"])
 ```
````
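`create_embedding` returns an OpenAI-style response dict rather than bare lists; a minimal sketch of extracting the raw vectors, assuming that response shape:

```python
# Hypothetical sketch: pull raw vectors out of the create_embedding response.
import llama_cpp

llm = llama_cpp.Llama(model_path="path/to/model.gguf", embedding=True)
resp = llm.create_embedding(["Hello, world!", "Goodbye, world!"])
vectors = [item["embedding"] for item in resp["data"]]  # one vector per input
```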
