Bump version · pmbyrd/llama-cpp-python@56171cf · GitHub
Commit 56171cf

Bump version

1 parent 52320c3 · commit 56171cf

File tree

2 files changed: +7 −1 lines changed


CHANGELOG.md

Lines changed: 6 additions & 0 deletions

@@ -7,6 +7,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
 ## [Unreleased]
 
+## [0.2.14]
+
+- Update llama.cpp to f0b30ef7dc1360922ccbea0a8cd3918ecf15eaa7
+- Add support for Huggingface Autotokenizer Chat Formats by @bioshazard and @abetlen in #790 and bbffdaebaa7bb04b543dbf683a07276087251f86
+- Fix llama-2 chat format by @earonesty in #869
+- Add support for functionary chat format by @abetlen in #784
 
 - Migrate inference from deprecated `llama_eval` API to `llama_batch` and `llama_decode` by @abetlen in #795
 
 ## [0.2.13]

llama_cpp/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *
 
-__version__ = "0.2.13"
+__version__ = "0.2.14"
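The bumped `__version__` string is what downstream code can use to gate behavior on the installed release. A minimal stand-alone sketch of such a check, using the version literals from this commit rather than importing `llama_cpp` (the parsing helper here is hypothetical, not part of the package):

```python
def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like "0.2.14" into a comparable
    tuple of ints, so "0.2.14" > "0.2.13" compares numerically."""
    return tuple(int(part) for part in v.split("."))

# In real code you would read llama_cpp.__version__ instead of a literal.
installed = "0.2.14"

if version_tuple(installed) >= version_tuple("0.2.14"):
    # Safe to rely on features introduced in this release,
    # e.g. the chat formats listed in the 0.2.14 changelog.
    pass
```

Plain string comparison would misorder versions like "0.2.9" vs "0.2.14", which is why the tuple conversion matters.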
