1 parent f5a77a6 commit 56817b1
README.md
@@ -5,17 +5,9 @@
 
 Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
----
-
-**TEMPORARY NOTICE:**
-Big code change incoming: https://github.com/ggerganov/llama.cpp/pull/370
-Do not merge stuff until we merge this. Probably merge will happen on March 22 ~6:00am UTC
 
 **Hot topics:**
+- New C-style API is now available: https://github.com/ggerganov/llama.cpp/pull/370
 - [Added Alpaca support](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
 - Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105