misc fix verbose printing in functionary model · jerryrelmore/llama-cpp-python@de2e2bc · GitHub
Commit de2e2bc

misc fix verbose printing in functionary model
1 parent 36048d4 commit de2e2bc

File tree

1 file changed: +4 -2 lines changed

llama_cpp/llama_chat_format.py

Lines changed: 4 additions & 2 deletions
@@ -955,9 +955,11 @@ def message_to_str(msg: llama_types.ChatCompletionRequestMessage):
     assert isinstance(function_call, str)
     assert stream is False  # TODO: support stream mode
 
-    print(new_prompt)
-    print(completion["choices"][0]["text"])
+    if llama.verbose:
+        print(new_prompt)
+        print(completion["choices"][0]["text"])
 
+    # TODO: support stream mode
     return llama_types.CreateChatCompletionResponse(
         id="chat" + completion["id"],
         object="chat.completion",
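For context, a minimal sketch of how the gated prints behave from the caller's side. The model path and function schema below are placeholders, not taken from the commit; `chat_format` and `verbose` are existing `Llama` constructor parameters in llama-cpp-python. After this change, the functionary handler prints the rendered prompt and the raw completion text only when the `Llama` instance was created with `verbose=True`.

    from llama_cpp import Llama

    # Placeholder model path; verbose=True re-enables the debug prints
    # that this commit gates behind `llama.verbose`.
    llm = Llama(
        model_path="./functionary-7b.gguf",  # hypothetical local file
        chat_format="functionary",
        verbose=True,
    )

    # Hypothetical function schema, purely for illustration.
    response = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
        functions=[
            {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    )

With `verbose=False`, the same call runs silently, which is the point of moving the two `print` statements under the `if llama.verbose:` guard.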
