
✧  ai.nvim  ✧


💡 Idea

LLM providers offer client libraries for the most popular programming languages, so you can write code that interacts with their APIs. Generally, these are wrappers around HTTPS requests with a mechanism for handling API responses (e.g., callbacks).

To the best of my knowledge, if you want to build a Neovim plugin that uses LLMs, you have to make HTTP requests explicitly with a tool like curl and handle request and response parsing yourself. This results in a lot of boilerplate code that can be abstracted away.
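To make that boilerplate concrete, here is roughly what a hand-rolled request looks like: spawn curl with jobstart, buffer stdout, and parse the JSON yourself. This is a sketch with error handling omitted, which is exactly the part that is tedious to get right.

```lua
-- Hand-rolled request without ai.nvim: spawn curl, collect stdout,
-- and parse the JSON response yourself.
local body = vim.json.encode({
  model = "gpt-4o-mini",
  messages = { { role = "user", content = "Hello!" } },
})

vim.fn.jobstart({
  "curl", "-s", "https://api.openai.com/v1/chat/completions",
  "-H", "Content-Type: application/json",
  "-H", "Authorization: Bearer " .. (os.getenv("OPENAI_API_KEY") or ""),
  "-d", body,
}, {
  stdout_buffered = true, -- deliver all output at once when curl exits
  on_stdout = function(_, data)
    local response = vim.json.decode(table.concat(data, "\n"))
    print(response.choices[1].message.content)
  end,
})
```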

ai.nvim is an experimental library for building Neovim plugins that interact with LLM providers: it crafts requests, parses responses, invokes callbacks, and handles errors.

⚡️ Requirements

  • Neovim ≥ 0.9
  • curl
  • Access to an LLM provider

🚀 Usage

Read the documentation with :help ai.nvim
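As a taste of the intended usage, here is a minimal sketch of a plugin requesting a chat completion through ai.nvim. The names it uses (Client:new, chat_completion, the callback signature) are illustrative assumptions, not the documented API; :help ai.nvim is the authoritative reference.

```lua
-- Minimal sketch; all names are assumptions, see `:help ai.nvim`.
local ai = require("ai")

-- A client pointed at any OpenAI-compatible endpoint
-- (see the provider table below).
local client = ai.Client:new({
  base_url = "https://api.openai.com/v1",
  api_key = os.getenv("OPENAI_API_KEY"),
})

-- The library crafts the request, parses the response, invokes the
-- callback, and handles errors along the way.
client:chat_completion({
  model = "gpt-4o-mini",
  messages = {
    { role = "user", content = "Explain Neovim autocommands in one sentence." },
  },
}, function(response)
  vim.notify(response.choices[1].message.content)
end)
```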

Plugins built with ai.nvim:

  • dante.nvim ✎ A basic writing tool powered by LLMs
  • PR your plugin here ...

✨ LLM Providers

Many providers offer LLM models through an OpenAI-compatible API. The following is an incomplete list of providers I have experimented with:

| Provider       | Models                                                       | Base URL                      |
| -------------- | ------------------------------------------------------------ | ----------------------------- |
| OpenAI         | gpt-4o, gpt-4o-mini                                           | https://api.openai.com/v1     |
| Mistral        | mistral-large-latest, open-mistral-nemo                       | https://api.mistral.ai/v1     |
| Groq           | gemma2-9b-it, llama-3.1-70b-versatile, llama-3.1-8b-instant   | https://api.groq.com/openai/v1 |
| Copilot Chat ¹ | gpt-3.5-turbo, gpt-4o-mini, gpt-4o, gpt-4-0125-preview        | https://api.githubcopilot.com |
  • If you want to use a provider that does not expose an OpenAI-compatible API (e.g., Anthropic, Cohere, ...), you can try the LiteLLM proxy server.
  • If you want to use local models, you can use Ollama, llama-cpp, vLLM, or others (see the sketch below).
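Since only the base URL changes between providers, switching to a local model is a configuration change. A hedged sketch, reusing the hypothetical Client:new from the usage example and Ollama's default OpenAI-compatible endpoint:

```lua
-- Hypothetical: the same client shape, pointed at a local Ollama server.
-- Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1.
local client = require("ai").Client:new({
  base_url = "http://localhost:11434/v1",
  api_key = "ollama", -- Ollama ignores the key; any placeholder works
})
```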

There are no plans to support API standards other than the OpenAI-compatible one.

🙏 Acknowledgments

Footnotes

  1. Copilot Chat is not an LLM provider proper, but a service included with a GitHub Copilot subscription. If you use copilot.vim or copilot.lua, you should already have a token stored in a file named hosts.json or apps.json under one of these directories: ~/AppData/Local/github-copilot, $XDG_CONFIG_HOME/github-copilot, or ~/.config/github-copilot. That token is used to request a second token with an expiration time, and that second token can be used as api_key in the ai.nvim configuration. There is no plan to implement an automatic token-refresh mechanism.
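For illustration only, here is a hedged sketch of the first step: locating and reading the stored OAuth token. The helper name and the exact JSON layout are assumptions based on how the Copilot plugins store credentials, and the exchange for the short-lived token is omitted.

```lua
-- Hypothetical helper: read the OAuth token that copilot.vim /
-- copilot.lua store in hosts.json (or apps.json). The JSON layout
-- is an assumption; inspect your own file to confirm it.
-- The Windows path (~/AppData/Local/github-copilot) is omitted for brevity.
local function copilot_oauth_token()
  local config_dir = vim.env.XDG_CONFIG_HOME or vim.fn.expand("~/.config")
  for _, name in ipairs({ "hosts.json", "apps.json" }) do
    local path = config_dir .. "/github-copilot/" .. name
    if vim.fn.filereadable(path) == 1 then
      local data = vim.json.decode(table.concat(vim.fn.readfile(path), "\n"))
      for key, entry in pairs(data) do
        if key:find("github.com", 1, true) then
          return entry.oauth_token
        end
      end
    end
  end
end
```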