fix: Changed local API doc references to hosted (#1317) · coderonion/llama-cpp-python@a0f373e
fix: Changed local API doc references to hosted (abetlen#1317)
1 parent f165048 commit a0f373e

File tree: 1 file changed (+2, −2 lines)

README.md

Lines changed: 2 additions & 2 deletions
@@ -321,7 +321,7 @@ For OpenAI API v1 compatibility, you use the [`create_chat_completion_openai_v1`
 
 ### JSON and JSON Schema Mode
 
-To constrain chat responses to only valid JSON or a specific JSON Schema use the `response_format` argument in [`create_chat_completion`](http://localhost:8000/api-reference/#llama_cpp.Llama.create_chat_completion).
+To constrain chat responses to only valid JSON or a specific JSON Schema use the `response_format` argument in [`create_chat_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion).
 
 #### JSON Mode
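The diff above only touches the link target, so for context, here is a minimal sketch of the `response_format` payloads the README section describes. The shapes follow the OpenAI-style convention llama-cpp-python uses (`{"type": "json_object"}` for JSON mode, with an optional `"schema"` key for JSON Schema mode); the sample model output is illustrative, not produced by a real model.

```python
import json

# JSON mode: constrain output to any valid JSON.
json_mode = {"type": "json_object"}

# JSON Schema mode: additionally constrain output to a specific schema.
schema_mode = {
    "type": "json_object",
    "schema": {
        "type": "object",
        "properties": {"team_name": {"type": "string"}},
        "required": ["team_name"],
    },
}

# Illustrative model output: with JSON mode enabled, the response
# content is guaranteed to parse as JSON.
raw = '{"team_name": "Los Angeles Dodgers"}'
parsed = json.loads(raw)
```

In actual use, either dict is passed as `response_format=...` to `create_chat_completion` alongside the `messages` list.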

@@ -529,7 +529,7 @@ llama = Llama(
 
 ### Embeddings
 
-To generate text embeddings use [`create_embedding`](http://localhost:8000/api-reference/#llama_cpp.Llama.create_embedding).
+To generate text embeddings use [`create_embedding`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_embedding).
 
 ```python
 import llama_cpp
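For context on the embeddings section this hunk edits: `create_embedding` returns an OpenAI-compatible response dict, and embeddings are typically compared with cosine similarity. Running the real call requires a GGUF model file, so the sketch below uses a mocked response with made-up vectors; only the response shape and the similarity computation are the point.

```python
import math

# Mocked OpenAI-style embedding response; in real use this would come
# from llm.create_embedding(["text one", "text two"]). The vectors here
# are made-up stand-ins, not real model output.
response = {
    "object": "list",
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3]},
        {"object": "embedding", "index": 1, "embedding": [0.1, 0.2, 0.31]},
    ],
    "model": "example-model",
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

vec_a = response["data"][0]["embedding"]
vec_b = response["data"][1]["embedding"]
sim = cosine_similarity(vec_a, vec_b)  # near 1.0 for these similar vectors
```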
