feat(snippet): update llama.cpp snippet to include windows (#1476) · huggingface/huggingface.js@b51d3f4

Commit b51d3f4

mfuntowicz and pcuenca authored
feat(snippet): update llama.cpp snippet to include windows (#1476)
This PR adds the Windows way of installing and running llama.cpp. llama.cpp is now available on WinGet, making it as easy to install on Windows as it is with brew on macOS.

```powershell
PS C:\Users\momo-> winget install llama.cpp
PS C:\Users\momo-> # Load and run the model:
PS C:\Users\momo-> llama-cli -hf lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF:Q4_K_M
load_backend: loaded RPC backend from C:\Users\momo-\AppData\Local\Microsoft\WinGet\Packages\ggml.llamacpp_Microsoft.Winget.Source_8wekyb3d8bbwe\ggml-rpc.dll
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: KHR_coopmat
[...]
== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.
 - Not using system message. To change it, set a different value via -sys PROMPT

> Hey how are you?
I'm just a language model, I don't have feelings or emotions like humans do, but I'm functioning properly and ready to help with any questions or tasks you may have! How about you? How's your day going?
```

---------

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
1 parent 91003db commit b51d3f4

File tree

1 file changed: +5 −0 lines changed

packages/tasks/src/local-apps.ts

Lines changed: 5 additions & 0 deletions
```diff
@@ -121,6 +121,11 @@ const snippetLlamacpp = (model: ModelData, filepath?: string): LocalAppSnippet[]
 			setup: "brew install llama.cpp",
 			content: command("llama-cli"),
 		},
+		{
+			title: "Install from WinGet (Windows)",
+			setup: "winget install llama.cpp",
+			content: command("llama-cli"),
+		},
 		{
 			title: "Use pre-built binary",
 			setup: [
```
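For context, here is a minimal sketch of the structure the new entry plugs into. The `title`, `setup`, and `content` fields and the `command` helper appear in the diff, but their types and the helper's body below are assumptions inferred from it — the actual definitions live in `packages/tasks/src/local-apps.ts` and may differ:

```ts
// Sketch only: the real LocalAppSnippet type and command() helper are defined
// in packages/tasks/src/local-apps.ts; the shapes below are inferred from the diff.
interface LocalAppSnippet {
	title: string; // label shown for the install option
	setup: string | string[]; // install command(s), e.g. via brew or winget
	content: string | string[]; // command(s) that actually run the model
}

// Assumed helper: builds the llama-cli invocation shown in the PR description.
const command = (binary: string): string =>
	`${binary} -hf lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF:Q4_K_M`;

// The entry this commit adds, mirroring the existing brew (macOS) one:
const winGetSnippet: LocalAppSnippet = {
	title: "Install from WinGet (Windows)",
	setup: "winget install llama.cpp",
	content: command("llama-cli"),
};
```

With this in place, Windows users get a two-step snippet (`winget install llama.cpp`, then the `llama-cli -hf …` run command) identical in shape to the existing brew flow on macOS.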
