Minor style changes · code-monad/llama.cpp@1daf4dd · GitHub

Commit 1daf4dd

Minor style changes
1 parent dc6a845 commit 1daf4dd

File tree: 1 file changed (+3, −1 lines)


README.md

Lines changed: 3 additions & 1 deletion
@@ -178,13 +178,15 @@ If you want a more ChatGPT-like experience, you can run in interactive mode by p
 In this mode, you can always interrupt generation by pressing Ctrl+C and enter one or more lines of text which will be converted into tokens and appended to the current context. You can also specify a *reverse prompt* with the parameter `-r "reverse prompt string"`. This will result in user input being prompted whenever the exact tokens of the reverse prompt string are encountered in the generation. A typical use is to use a prompt which makes LLaMa emulate a chat between multiple users, say Alice and Bob, and pass `-r "Alice:"`.
 
 Here is an example few-shot interaction, invoked with the command
-```
+
+```bash
 # default arguments using 7B model
 ./chat.sh
 
 # custom arguments using 13B model
 ./main -m ./models/13B/ggml-model-q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
 ```
+
 Note the use of `--color` to distinguish between user input and generated text.
 
 ![image](https://user-images.githubusercontent.com/1991296/224575029-2af3c7dc-5a65-4f64-a6bb-517a532aea38.png)
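The README text in the diff above mentions making LLaMa emulate a chat between Alice and Bob by passing `-r "Alice:"`. A minimal sketch of what that could look like — the prompt filename and transcript contents below are illustrative assumptions, not files from the repository:

```shell
# Hypothetical prompt file; the name and transcript are assumptions for illustration.
cat > chat-with-alice.txt <<'EOF'
Transcript of a dialog between Alice and Bob.

Bob: Hello Alice, how can I help you today?
Alice:
EOF

# Illustrative invocation (requires a built `main` binary and a quantized model,
# analogous to the README's 13B example). The reverse prompt "Alice:" returns
# control to the user whenever those exact tokens are generated:
#   ./main -m ./models/7B/ggml-model-q4_0.bin -n 256 --color -i \
#       -r "Alice:" -f chat-with-alice.txt
```

The reverse prompt must match the generated tokens exactly, so it should be spelled the same way the speaker label appears in the prompt file.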

0 commit comments

Comments
 (0)
0