Create README.md · Embarcadero/llama-cpp-delphi@74d8ae4
Commit 74d8ae4 · Create README.md · 1 parent bbd00f6

1 file changed: README.md (+86, −0)
# 🐫 llama-cpp-delphi

Welcome to **llama-cpp-delphi**, the Delphi bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp)! This project allows you to integrate the power of Llama-based Large Language Models (LLMs) into your Delphi applications, enabling efficient and versatile local inference.

## 🚀 Features

- **Delphi Integration**: Harness Llama models directly in your Delphi projects.
- **Local Inference**: No external servers or APIs required—your data stays local.
- **Cross-Platform Support**: Compatible with Windows, Linux, and macOS.
  - 🖥️ **Mac Silicon**: GPU (MPS) and CPU inference supported.
  - 💻 **Windows**: CPU inference supported, with options for CUDA, Vulkan, HIP, Kompute, and OpenBLAS.
  - 🌏 **Linux**: CPU inference supported, with options for CUDA, Vulkan, HIP, and MUSA.
  - 🚀 **Android and iOS support coming soon!**
- **Pre-Built Libraries**: Simplified setup with pre-compiled libraries.
- **Customizable Sampling**: Fine-tune your AI’s behavior with easy-to-configure samplers (see the sketch below).
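To make the customizable-sampling bullet above concrete, here is a short sketch of what tuning a sampler could look like. Everything in it (the unit name, `TLlama`, and the `Sampler` properties) is a hypothetical stand-in for illustration, not the library's confirmed API; check the bundled samples for the real names.

```pascal
unit SamplerConfig;

interface

uses
  LlamaCpp.Llama; // hypothetical unit name

// All identifiers below are illustrative stand-ins, not the
// library's confirmed API.
procedure ConfigureSampler(const Llama: TLlama);

implementation

procedure ConfigureSampler(const Llama: TLlama);
begin
  Llama.Sampler.Temperature := 0.7; // lower values give more deterministic output
  Llama.Sampler.TopP := 0.9;        // nucleus sampling: keep the top 90% probability mass
  Llama.Sampler.Seed := 42;         // fix the seed so runs are reproducible
end;

end.
```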
## 🔧 Getting Started

### Prerequisites

1. **Delphi IDE** installed.
2. **Git** installed (required for cloning model repositories).
3. A basic understanding of Delphi development.
### Installation

1. Clone the **llama-cpp-delphi** repository:
   ```bash
   git clone https://github.com/Embarcadero/llama-cpp-delphi.git
   ```
2. Open the project in the Delphi IDE.
3. Build the project for your desired platform(s):
   - Windows
   - Linux
   - Mac Silicon
### Libraries

The necessary **llama.cpp** libraries are distributed as part of the releases of this repository; you can find them under the "Releases" section.
## 🌟 Using llama-cpp-delphi

### Key Components

- **Llama**: a Delphi-friendly component that wraps llama.cpp inference (see the sketch below).
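Below is a minimal end-to-end sketch of what driving the **Llama** component could look like. The unit name and every `TLlama` member in it are hypothetical stand-ins for illustration, not the library's confirmed API; the bundled samples show the real usage.

```pascal
program SimpleCompletion;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  LlamaCpp.Llama; // hypothetical unit name

var
  Llama: TLlama; // the Llama component described above (hypothetical class name)
begin
  Llama := TLlama.Create(nil);
  try
    // Point the component at a local GGUF model file (hypothetical property).
    Llama.ModelPath := 'C:\models\llama-2-7b-chat.Q4_K_M.gguf';
    // Run a blocking completion and print the reply (hypothetical method).
    WriteLn(Llama.Complete('Explain Delphi in one sentence.'));
  finally
    Llama.Free;
  end;
end.
```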
### Running Samples

1. Explore the `samples` directory for available examples, like **SimpleChatWithDownload**.
2. Follow the README provided in each sample folder for detailed instructions.
## 🔧 Configuration

### Models

You can use any model compatible with **llama.cpp** (e.g., in GGUF format); a download example follows the list. Popular options include:

- **Llama-2**: A robust and general-purpose model.
- **Llama-3**: A lightweight alternative with excellent performance.
- **Mistral**: A compact and efficient model.
- **DeepSeek**: An innovative model designed for exploratory tasks.
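Since the prerequisites list Git for cloning model repositories, one common way to obtain a GGUF model is to clone a Hugging Face model repository, as shown below. The repository name is only an example; any GGUF build that llama.cpp can load will work.

```bash
# GGUF weights are stored with Git LFS, so enable it first
git lfs install
# Example only: a community GGUF build of Llama-2-7B-Chat
git clone https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF
```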
### Hardware Support

- **Mac Silicon**:
  - GPU inference (via MPS) is recommended for optimal performance.
  - CPU inference is available but slower.
- **Windows**:
  - CPU inference supported, with additional support for CUDA, Vulkan, Kompute, HIP, and OpenBLAS.
- **Linux**:
  - CPU inference supported, with additional support for CUDA, Vulkan, HIP, and MUSA.
## 🤝 Contributions

We welcome contributions to improve **llama-cpp-delphi**! Feel free to:

- Report issues.
- Submit pull requests.
- Suggest enhancements.
## 📝 License

This project is licensed under the MIT License—see the `LICENSE` file for details.
## 🌟 Final Notes

Get started with **llama-cpp-delphi** and bring advanced AI capabilities to your Delphi projects. If you encounter any issues or have suggestions, let us know—we’re here to help! Happy coding! 🎉