Merge remote-tracking branch 'origin/docs/update-provider-docs' · drivecore/mycoder@38f13b2

Commit 38f13b2

Merge remote-tracking branch 'origin/docs/update-provider-docs'

2 parents 6a3a392 + 46470be

5 files changed: +210 -3 lines

README.md

Lines changed: 2 additions & 2 deletions

@@ -83,8 +83,8 @@ export default {
  profile: false,
  tokenCache: true,

- // Ollama configuration (if using local models)
- ollamaBaseUrl: 'http://localhost:11434',
+ // Base URL configuration (for providers that need it)
+ baseUrl: 'http://localhost:11434', // Example for Ollama
  };
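In other words, the Ollama-specific `ollamaBaseUrl` option gives way to a provider-agnostic `baseUrl`. A minimal sketch of the resulting config (values are illustrative; each provider page below lists its own defaults):

```javascript
// mycoder.config.js - the same baseUrl key now serves any provider that
// accepts a custom endpoint. Values shown are illustrative.
export default {
  provider: 'ollama',
  model: 'llama3.2',
  baseUrl: 'http://localhost:11434', // Ollama's default local endpoint
};
```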

packages/docs/docs/providers/index.mdx

Lines changed: 4 additions & 0 deletions

@@ -13,6 +13,8 @@ MyCoder currently supports the following LLM providers:
  - [**Anthropic**](./anthropic.md) - Claude models from Anthropic
  - [**OpenAI**](./openai.md) - GPT models from OpenAI
  - [**Ollama**](./ollama.md) - Self-hosted open-source models via Ollama
+ - [**Local OpenAI Compatible**](./local-openai.md) - GPUStack and other OpenAI-compatible servers
+ - [**xAI**](./xai.md) - Grok models from xAI

  ## Configuring Providers

@@ -52,3 +54,5 @@ For detailed instructions on setting up each provider, see the provider-specific
  - [Anthropic Configuration](./anthropic.md)
  - [OpenAI Configuration](./openai.md)
  - [Ollama Configuration](./ollama.md)
+ - [Local OpenAI Compatible Configuration](./local-openai.md)
+ - [xAI Configuration](./xai.md)
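Each entry in this list corresponds to a `provider` value in `mycoder.config.js`. A minimal sketch, using the provider keys confirmed by the pages added in this commit ('gpustack', 'xai') alongside the existing 'ollama':

```javascript
// mycoder.config.js - pick one provider from the list above.
export default {
  provider: 'xai', // e.g. 'gpustack' or 'ollama'; see each page for exact keys
  model: 'grok-2-latest', // model naming is provider-specific
};
```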
packages/docs/docs/providers/local-openai.md

Lines changed: 123 additions & 0 deletions

@@ -0,0 +1,123 @@

---
sidebar_position: 5
---

# Local OpenAI Compatible Servers

MyCoder supports connecting to local or self-hosted OpenAI-compatible API servers, including solutions like [GPUStack](https://gpustack.ai/), [LM Studio](https://lmstudio.ai/), [Ollama's OpenAI compatibility mode](https://github.com/ollama/ollama/blob/main/docs/openai.md), and [LocalAI](https://localai.io/).

## Setup

To use a local OpenAI-compatible server with MyCoder:

1. Install and set up your preferred OpenAI-compatible server
2. Start the server according to its documentation
3. Configure MyCoder to connect to your local server

### Configuration

Configure MyCoder to use your local OpenAI-compatible server in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection - use gpustack for any OpenAI-compatible server
  provider: 'gpustack',
  model: 'llama3.2', // Use the model name available on your server

  // The base URL for your local server
  baseUrl: 'http://localhost:80', // Default for GPUStack; adjust as needed

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```
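To confirm MyCoder can reach the server at all, it can help to first list the models it exposes. A small sketch, assuming your server implements the standard OpenAI `/v1/models` route (the servers above generally do) and Node 18+ for built-in `fetch`; the filename and port are illustrative:

```javascript
// listmodels.mjs - print the model IDs a local OpenAI-compatible server
// exposes; the port matches the GPUStack default used above.
// Run with: node listmodels.mjs
const res = await fetch('http://localhost:80/v1/models');
const { data } = await res.json();
console.log(data.map((m) => m.id));
```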
## GPUStack

[GPUStack](https://gpustack.ai/) is a solution for running AI models on your own hardware. It provides an OpenAI-compatible API server that works seamlessly with MyCoder.

### Setting up GPUStack

1. Install GPUStack following the instructions on their website
2. Start the GPUStack server
3. Configure MyCoder to use the `gpustack` provider

```javascript
export default {
  provider: 'gpustack',
  model: 'llama3.2', // Choose a model available on your GPUStack instance
  baseUrl: 'http://localhost:80', // Default GPUStack URL
};
```

## Other OpenAI-Compatible Servers

You can use MyCoder with any OpenAI-compatible server by setting the appropriate `baseUrl`:

### LM Studio

```javascript
export default {
  provider: 'gpustack',
  model: 'llama3', // Use the model name as configured in LM Studio
  baseUrl: 'http://localhost:1234', // Default LM Studio server URL
};
```

### LocalAI

```javascript
export default {
  provider: 'gpustack',
  model: 'gpt-3.5-turbo', // Use the model name as configured in LocalAI
  baseUrl: 'http://localhost:8080', // Default LocalAI server URL
};
```

### Ollama (OpenAI Compatibility Mode)

```javascript
export default {
  provider: 'gpustack',
  model: 'llama3', // Use the model name as configured in Ollama
  baseUrl: 'http://localhost:11434/v1', // Ollama OpenAI compatibility endpoint
};
```
## Hardware Requirements

Running LLMs locally requires significant hardware resources:

- Minimum 16GB RAM (32GB+ recommended)
- GPU with at least 8GB VRAM for optimal performance
- SSD storage for model files (models can be 5-20GB each)

## Best Practices

- Ensure your local server and the selected model support tool calling/function calling (see the probe sketch after this list)
- Use models optimized for coding tasks when available
- Monitor your system resources when running large models locally
- Consider using a dedicated machine for hosting your local server
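One way to verify tool-calling support up front is to send a request with a `tools` array and check whether the model answers with a tool call. The sketch below assumes the standard OpenAI `tools`/`tool_calls` fields and Node 18+; the endpoint, model name, and function schema are placeholders:

```javascript
// toolcheck.mjs - probe whether an OpenAI-compatible server supports tool
// calling. Endpoint, model, and function schema are illustrative.
// Run with: node toolcheck.mjs
const res = await fetch('http://localhost:80/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2',
    messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_weather',
          description: 'Get the current weather for a city',
          parameters: {
            type: 'object',
            properties: { city: { type: 'string' } },
            required: ['city'],
          },
        },
      },
    ],
  }),
});

const data = await res.json();
// A tool-capable model should respond with a tool_calls entry rather than
// plain text; if message.content comes back instead, tool calling is
// likely unsupported for this model.
console.log(JSON.stringify(data.choices?.[0]?.message, null, 2));
```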
## Troubleshooting

If you encounter issues with local OpenAI-compatible servers:

- Verify the server is running and accessible at the configured base URL
- Check that the model name exactly matches what's available on your server
- Ensure the model supports tool/function calling (required for MyCoder)
- Check server logs for specific error messages
- Test the server with a simple curl command to verify API compatibility:

```bash
curl http://localhost:80/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

For more information, refer to the documentation for your specific OpenAI-compatible server.

packages/docs/docs/providers/ollama.md

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@ export default {
  model: 'medragondot/Sky-T1-32B-Preview:latest',

  // Optional: Custom base URL (defaults to http://localhost:11434)
- // ollamaBaseUrl: 'http://localhost:11434',
+ // baseUrl: 'http://localhost:11434',

  // Other MyCoder settings
  maxTokens: 4096,

packages/docs/docs/providers/xai.md

Lines changed: 80 additions & 0 deletions

@@ -0,0 +1,80 @@

---
sidebar_position: 6
---

# xAI (Grok)

[xAI](https://x.ai/) is the company behind Grok, a powerful large language model designed to be helpful, harmless, and honest. Grok models offer strong reasoning capabilities and support for tool calling.

## Setup

To use Grok models with MyCoder, you need an xAI API key:

1. Create an account at [xAI](https://x.ai/)
2. Navigate to the API Keys section and create a new API key
3. Set the API key as an environment variable or in your configuration file

### Environment Variables

You can set the xAI API key as an environment variable:

```bash
export XAI_API_KEY=your_api_key_here
```

### Configuration

Configure MyCoder to use xAI's Grok in your `mycoder.config.js` file:

```javascript
export default {
  // Provider selection
  provider: 'xai',
  model: 'grok-2-latest',

  // Optional: Set API key directly (environment variable is preferred)
  // xaiApiKey: 'your_api_key_here',

  // Other MyCoder settings
  maxTokens: 4096,
  temperature: 0.7,
  // ...
};
```

## Supported Models

xAI offers several Grok models with different capabilities:

- `grok-2-latest` (recommended) - The latest Grok-2 model with strong reasoning and tool-calling capabilities
- `grok-1` - The original Grok model

## Best Practices

- Grok models excel at coding tasks and technical problem-solving
- They have strong tool-calling capabilities, making them suitable for MyCoder workflows
- For complex programming tasks, use Grok-2 models for best results
- Provide clear, specific instructions for optimal results

## Custom Base URL

If you need to use a different base URL for the xAI API (for example, if you're using a proxy or if xAI changes their API endpoint), you can specify it in your configuration:

```javascript
export default {
  provider: 'xai',
  model: 'grok-2-latest',
  baseUrl: 'https://api.x.ai/v1', // Default xAI API URL
};
```

## Troubleshooting

If you encounter issues with xAI's Grok:

- Verify your API key is correct and has sufficient quota
- Check that you're using a supported model name
- For tool-calling issues, ensure your functions are properly formatted
- Monitor your token usage to avoid unexpected costs
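A quick way to rule out the first two items is to call the API directly. A minimal sketch, assuming the default endpoint from the Custom Base URL section, an OpenAI-compatible chat route, and Node 18+ for built-in `fetch`; the filename is arbitrary:

```javascript
// keycheck.mjs - sanity-check that XAI_API_KEY is valid and the model
// name is accepted. Run with: node keycheck.mjs
const res = await fetch('https://api.x.ai/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.XAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'grok-2-latest',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});

// 200 means the key and model are fine; 401/403 points at the key,
// 404 at the model name or base URL.
console.log(res.status, JSON.stringify(await res.json(), null, 2));
```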
For more information, visit the [xAI Documentation](https://x.ai/docs).
