1 parent 3fe8e9a commit 3c19faa
CHANGELOG.md
@@ -7,6 +7,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

+## [0.2.74]
+
+- feat: Update llama.cpp to ggerganov/llama.cpp@b228aba91ac2cd9eb90e9d423ba1d0d20e0117e2
+- fix: Enable CUDA backend for llava by @abetlen in 7f59856fa6f3e23f07e12fc15aeb9359dc6c3bb4
+- docs: Fix typo in README.md by @yupbank in #1444
+
## [0.2.73]

- feat: Update llama.cpp to ggerganov/llama.cpp@25c6e82e7a1ad25a42b0894e87d9b5c557409516
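The `fix: Enable CUDA backend for llava` entry above concerns the native build rather than the Python API. As a hedged illustration only (not part of this commit), a typical way to run a llava model with GPU offload through llama-cpp-python looks roughly like the sketch below, assuming the documented `Llava15ChatHandler` chat handler and placeholder model paths:

```python
# Hedged sketch, not part of this commit: typical llava usage with GPU offload
# in llama-cpp-python. Model and projector paths are placeholders.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llava-v1.5-7b.Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,
    n_gpu_layers=-1,  # offload all layers; only effective with a CUDA-enabled build
)
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```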
llama_cpp/__init__.py
@@ -1,4 +1,4 @@
from .llama_cpp import *
from .llama import *

-__version__ = "0.2.73"
+__version__ = "0.2.74"