"content": "A chat between a curious user and an artificial intelligence assitant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant callse functions with appropriate input when necessary"
182
+
},
183
+
{
184
+
"role": "user",
185
+
"content": "Extract Jason is 25 years old"
186
+
}
187
+
],
188
+
tools=[{
189
+
"type": "function",
190
+
"function": {
191
+
"name": "UserDetail",
192
+
"parameters": {
193
+
"type": "object"
194
+
"title": "UserDetail",
195
+
"properties": {
196
+
"name": {
197
+
"title": "Name",
198
+
"type": "string"
199
+
},
200
+
"age": {
201
+
"title": "Age",
202
+
"type": "integer"
203
+
}
204
+
},
205
+
"required": [ "name", "age" ]
206
+
}
207
+
}
208
+
}],
209
+
tool_choices=[{
210
+
"type": "function",
211
+
"function": {
212
+
"name": "UserDetail"
213
+
}
214
+
}]
215
+
)
216
+
```
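
The completion returned by `create_chat_completion` follows the OpenAI-compatible chat format, so the extracted fields can be read back from the `tool_calls` entry of the response. A minimal sketch, using a hand-written response dict in place of a real model call (the dict below is illustrative of the response shape, not output from an actual run):

```python
import json

# Illustrative response in the OpenAI-compatible shape; a real call to
# create_chat_completion would produce a dict like this from the model.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "tool_calls": [{
                "type": "function",
                "function": {
                    "name": "UserDetail",
                    "arguments": '{"name": "Jason", "age": 25}'
                }
            }]
        }
    }]
}

# The function arguments arrive as a JSON string and must be decoded.
tool_call = response["choices"][0]["message"]["tool_calls"][0]
args = json.loads(tool_call["function"]["arguments"])
print(args["name"], args["age"])  # Jason 25
```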

### Multi-modal Models

`llama-cpp-python` supports the llava1.5 family of multi-modal models which allow the language model to read information from both text and images.

You'll first need to download one of the available multi-modal models in GGUF format:
{"type" : "text", "text": "Describe this image in detail please."}
245
+
]
246
+
}
247
+
]
248
+
)
249
+
```

### Adjusting the Context Window

The context window of the Llama models determines the maximum number of tokens that can be processed at once. By default, this is set to 512 tokens, but can be adjusted based on your requirements.
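
For example, the window can be widened by passing `n_ctx` when constructing the model. A sketch assuming a local GGUF model at a hypothetical path (this fragment needs model weights on disk, so it is configuration rather than something runnable as-is):

```python
from llama_cpp import Llama

# n_ctx sets the context window size in tokens (the default is 512).
# "./models/7B/llama-model.gguf" is a placeholder path for illustration.
llm = Llama(model_path="./models/7B/llama-model.gguf", n_ctx=2048)
```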