Commit 24f3945 · devilcoder01/llama-cpp-python

fix: pass chat handler not chat formatter for huggingface autotokenizer and tokenizer_config formats.
1 parent 7f3209b commit 24f3945

File tree

1 file changed (+2, -2 lines)


llama_cpp/server/model.py

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ def load_llama_from_model_settings(settings: ModelSettings) -> llama_cpp.Llama:
             settings.hf_pretrained_model_name_or_path is not None
         ), "hf_pretrained_model_name_or_path must be set for hf-autotokenizer"
         chat_handler = (
-            llama_cpp.llama_chat_format.hf_autotokenizer_to_chat_formatter(
+            llama_cpp.llama_chat_format.hf_autotokenizer_to_chat_completion_handler(
                 settings.hf_pretrained_model_name_or_path
             )
         )
@@ -87,7 +87,7 @@ def load_llama_from_model_settings(settings: ModelSettings) -> llama_cpp.Llama:
             settings.hf_tokenizer_config_path is not None
         ), "hf_tokenizer_config_path must be set for hf-tokenizer-config"
         chat_handler = (
-            llama_cpp.llama_chat_format.hf_tokenizer_config_to_chat_formatter(
+            llama_cpp.llama_chat_format.hf_tokenizer_config_to_chat_completion_handler(
                 json.load(open(settings.hf_tokenizer_config_path))
             )
         )
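
For context, a minimal usage sketch (not part of this commit) of the code path the change fixes, driving the server-side loader with chat_format set to "hf-autotokenizer". The field names come from the assertions visible in the diff above; the ModelSettings import location and the concrete model/repo values are illustrative assumptions.

# Hedged sketch: exercising the hf-autotokenizer branch patched above.
# The ModelSettings import path and the example values are assumptions.
from llama_cpp.server.model import load_llama_from_model_settings
from llama_cpp.server.settings import ModelSettings  # assumed module location

settings = ModelSettings(
    model="./models/example.Q4_K_M.gguf",          # hypothetical local GGUF file
    chat_format="hf-autotokenizer",                # selects the branch shown in the diff
    hf_pretrained_model_name_or_path="org/model",  # placeholder Hugging Face repo id
)

# With this fix, chat_handler inside the loader is built as a chat completion
# handler rather than a bare chat formatter, so the returned Llama instance
# can serve chat completion requests directly.
llama = load_llama_from_model_settings(settings)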

0 commit comments
