Bump version · cyberjon/llama-cpp-python@5e863d8
Commit 5e863d8

Bump version

1 parent cfd698c

2 files changed: +12 −1 lines changed


CHANGELOG.md

Lines changed: 11 additions & 0 deletions
@@ -7,6 +7,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.2.24]
+
+- feat: Update llama.cpp to ggerganov/llama.cpp@0e18b2e7d0b5c0a509ea40098def234b8d4a938a
+- feat: Add offload_kqv option to llama and server by @abetlen in 095c65000642a3cf73055d7428232fb18b73c6f3
+- feat: n_ctx=0 now uses the n_ctx_train of the model by @DanieleMorotti in #1015
+- feat: logits_to_logprobs supports both 2-D and 3-D logits arrays by @kddubey in #1002
+- fix: Remove f16_kv, add offload_kqv fields in low level and llama apis by @brandonrobertz in #1019
+- perf: Don't convert logprobs arrays to lists by @kddubey in #1021
+- docs: Fix README.md functionary demo typo by @evelynmitchell in #996
+- examples: Update low_level_api_llama_cpp.py to match current API by @jsoma in #1023
+
 ## [0.2.23]
 
 - Update llama.cpp to ggerganov/llama.cpp@948ff137ec37f1ec74c02905917fa0afc9b97514
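The `logits_to_logprobs` entry above (#1002) concerns normalizing logits over the vocabulary axis whether or not a leading batch dimension is present. A minimal NumPy sketch of that idea, an illustrative log-softmax over the last axis, not the library's actual implementation:

```python
import numpy as np

def logits_to_logprobs(logits: np.ndarray) -> np.ndarray:
    """Log-softmax over the last axis; works for 2-D and 3-D arrays."""
    # Subtract the per-row max for numerical stability before exponentiating.
    maxes = np.max(logits, axis=-1, keepdims=True)
    shifted = logits - maxes
    log_sum_exp = np.log(np.sum(np.exp(shifted), axis=-1, keepdims=True))
    return shifted - log_sum_exp

# Both shapes normalize over the vocab axis without special-casing.
logprobs_2d = logits_to_logprobs(np.random.rand(4, 32))     # (tokens, vocab)
logprobs_3d = logits_to_logprobs(np.random.rand(2, 4, 32))  # (batch, tokens, vocab)
```

Because the reduction uses `axis=-1`, the same code path serves both shapes, which is the behavior the changelog entry describes.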

llama_cpp/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *
 
-__version__ = "0.2.23"
+__version__ = "0.2.24"
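The bump itself is a one-line change to `__version__`. A stdlib-only sketch of sanity-checking that such a bump increases the version under semantic-versioning ordering (`parse_version` is a hypothetical helper, not part of the package):

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a 'major.minor.patch' string into a comparable int tuple."""
    return tuple(int(part) for part in v.split("."))

old, new = "0.2.23", "0.2.24"
# Tuple comparison orders versions component by component,
# so "0.2.10" correctly sorts above "0.2.9".
assert parse_version(new) > parse_version(old)
```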

0 commit comments

Comments
 (0)
0