Add static methods for beginning and end of sequence tokens. · coderonion/llama-cpp-python@67c70cc · GitHub
Commit 67c70cc
Add static methods for beginning and end of sequence tokens.
1 parent caff127 commit 67c70cc

File tree: 2 files changed, +12 −0 lines changed

docs/index.md (2 additions, 0 deletions)

@@ -69,6 +69,8 @@ python3 setup.py develop
 - create_embedding
 - create_completion
 - __call__
+- token_bos
+- token_eos
 show_root_heading: true

 ::: llama_cpp.llama_cpp
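For context, the `- token_bos` / `- token_eos` entries added above belong to a mkdocstrings member listing in docs/index.md, which controls which `Llama` methods appear in the generated API docs. One plausible shape of the surrounding block after this commit (reconstructed for illustration only — the exact option keys and indentation in the repo may differ, and only the member names shown in the diff are taken from the source) is:

```yaml
::: llama_cpp.Llama
    selection:
      members:
        - create_embedding
        - create_completion
        - __call__
        - token_bos
        - token_eos
    rendering:
      show_root_heading: true
```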

llama_cpp/llama.py (10 additions, 0 deletions)

@@ -446,3 +446,13 @@ def __del__(self):
         if self.ctx is not None:
             llama_cpp.llama_free(self.ctx)
             self.ctx = None
+
+    @staticmethod
+    def token_eos() -> llama_cpp.llama_token:
+        """Return the end-of-sequence token."""
+        return llama_cpp.llama_token_eos()
+
+    @staticmethod
+    def token_bos() -> llama_cpp.llama_token:
+        """Return the beginning-of-sequence token."""
+        return llama_cpp.llama_token_bos()
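The pattern this commit introduces is worth spelling out: the special-token getters are module-level functions in the C-binding layer, and exposing them as `@staticmethod`s on `Llama` lets callers read them without constructing a model instance. A minimal, self-contained sketch of that pattern — the `fake_backend` class and the token ids `1`/`2` here are stand-ins for illustration, not the real llama.cpp binding or its actual token values:

```python
class fake_backend:
    """Stand-in for the llama_cpp C-binding module (hypothetical token ids)."""

    @staticmethod
    def llama_token_bos() -> int:
        return 1  # hypothetical beginning-of-sequence token id

    @staticmethod
    def llama_token_eos() -> int:
        return 2  # hypothetical end-of-sequence token id


class Llama:
    """Mirrors the commit: static methods that delegate to module-level getters."""

    @staticmethod
    def token_bos() -> int:
        """Return the beginning-of-sequence token."""
        return fake_backend.llama_token_bos()

    @staticmethod
    def token_eos() -> int:
        """Return the end-of-sequence token."""
        return fake_backend.llama_token_eos()


# Usage: no Llama instance (and hence no loaded model) is needed.
bos, eos = Llama.token_bos(), Llama.token_eos()
```

Making these static rather than instance methods is a reasonable choice here because the special tokens are fixed by the llama.cpp vocabulary conventions, not by any per-model state held on the instance.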

0 commit comments