llama : move vocab, grammar and sampling into separate files by ggerganov · Pull Request #8508 · ggml-org/llama.cpp · GitHub
Merged: 11 commits, Jul 23, 2024
llama : move tokenizers into llama-vocab
ggml-ci
ggerganov committed Jul 22, 2024
commit 8fef5b1897188348f17ca7a88be0c2e4c327070e
4 changes: 2 additions & 2 deletions include/llama.h
@@ -906,10 +906,10 @@ extern "C" {
LLAMA_API llama_token llama_token_pad(const struct llama_model * model); // padding

// Returns -1 if unknown, 1 for true or 0 for false.
-    LLAMA_API int32_t llama_add_bos_token(const struct llama_model * model);
+    LLAMA_API int32_t llama_add_bos_token(const struct llama_model * model);

// Returns -1 if unknown, 1 for true or 0 for false.
-    LLAMA_API int32_t llama_add_eos_token(const struct llama_model * model);
+    LLAMA_API int32_t llama_add_eos_token(const struct llama_model * model);

// Codellama infill tokens
LLAMA_API llama_token llama_token_prefix(const struct llama_model * model); // Beginning of infill prefix