Bump version · qeleb/llama-cpp-python@37556bf · GitHub
Commit 37556bf

Bump version

1 parent 6d8bc09

File tree

2 files changed: +8 −1 lines changed

CHANGELOG.md

Lines changed: 7 additions & 0 deletions
@@ -7,6 +7,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 ## [Unreleased]

+## [0.2.25]
+
+- feat(server): Multi model support by @D4ve-R in #931
+- feat(server): Support none defaulting to infinity for completions by @swg in #111
+- feat(server): Implement openai api compatible authentication by @docmeth2 in #1010
+- fix: text_offset of multi-token characters by @twaka in #1037
+- fix: ctypes bindings for kv override by @phiharri in #1011
 - fix: ctypes definitions of llama_kv_cache_view_update and llama_kv_cache_view_free. by @e-c-d in #1028

 ## [0.2.24]

llama_cpp/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *

-__version__ = "0.2.24"
+__version__ = "0.2.25"
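The change above is a patch-level bump under Semantic Versioning (0.2.24 → 0.2.25): the MAJOR and MINOR components stay fixed while PATCH increments. A minimal, self-contained sketch of comparing such version strings (the `parse_version` helper is illustrative only, not part of llama-cpp-python):

```python
def parse_version(v: str) -> tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' string into an integer tuple."""
    major, minor, patch = v.split(".")
    return int(major), int(minor), int(patch)

old = parse_version("0.2.24")
new = parse_version("0.2.25")

# Tuples compare element-wise, so this orders versions correctly
# (unlike string comparison, where "0.2.9" > "0.2.10").
assert new > old

# Only the patch component changed in this bump.
assert new[:2] == old[:2] and new[2] == old[2] + 1
```

Real projects typically delegate this to `packaging.version.Version` rather than hand-rolled parsing, since it also handles pre-release and local version segments.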

0 commit comments