1 parent 4442ff8 commit 710e19a
CHANGELOG.md
@@ -7,6 +7,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]

+## [0.3.7]
+
+- feat: Update llama.cpp to ggerganov/llama.cpp@794fe23f29fb40104975c91fe19f23798f7c726e
+- fix(ci): Fix the CUDA workflow by @oobabooga in #1894
+- fix: error showing time spent in llama perf context print, adds `no_perf` flag to `Llama` class by @shakalaca in #1898
+
## [0.3.6]

- feat: Update llama.cpp to ggerganov/llama.cpp@f7cd13301c2a88f97073fd119072b4cc92c08df1
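The last entry above mentions a new `no_perf` flag on the `Llama` class. A minimal usage sketch, assuming `no_perf` is a boolean keyword argument on the constructor as the changelog entry suggests; the model path is a placeholder:

```python
from llama_cpp import Llama

# Sketch only: suppress llama.cpp's perf context timing print via the
# no_perf flag added in 0.3.7. The model path below is a placeholder.
llm = Llama(
    model_path="./models/example.gguf",  # placeholder path
    no_perf=True,  # skip the llama perf context print
)

result = llm("Q: What is the capital of France? A:", max_tokens=16)
print(result["choices"][0]["text"])
```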
llama_cpp/__init__.py
@@ -1,4 +1,4 @@
from .llama_cpp import *
from .llama import *

-__version__ = "0.3.6"
+__version__ = "0.3.7"
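After upgrading, the bumped `__version__` exposed in `llama_cpp/__init__.py` can be checked directly, for example:

```python
import llama_cpp

# Prints the installed package version; expected to be 0.3.7 for this release.
print(llama_cpp.__version__)
```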