Ollama Buddy - Seven Lines to Any LLM Provider
Ever found yourself wanting to add a new AI provider to ollama-buddy (probably not, I would guess 🙂), only to realise you'd need to write an entire Elisp module? Or perhaps you're running a local inference server that speaks the OpenAI API, but can't be bothered with the ceremony of creating a dedicated provider file?

Fair question. That’s exactly why I built ollama-buddy-provider-create — a single function that lets you register any LLM provider in seconds, whether it’s a cloud API or your own local server.
The traditional approach required a separate .el file for each provider — OpenAI, Claude, Gemini, you name it. Each with its own defcustom variables, configuration boilerplate, and maintenance overhead. It worked, but it felt a bit… heavy-handed for simple use cases.
What if you just wanted to quickly add support for that new local LM Studio instance running on port 1234? Or point ollama-buddy at your company’s internal AI gateway? Previously, you’d be looking at copying an existing provider file and modifying dozens of lines. Now? One function call.
The magic (yes, of the elisp kind!) happens in ollama-buddy-provider.el, which provides a generic provider registration system. Instead of requiring separate Elisp files, you can register any provider with a single call:
```elisp
(ollama-buddy-provider-create
 :name "My Local Server"
 :api-type 'openai
 :endpoint "http://localhost:1234/v1/chat/completions"
 :models-endpoint "http://localhost:1234/v1/models"
 :api-key "your-key-here"
 :model-prefix "l:")
```
Three API types are supported out of the box:
- openai (default) — any OpenAI-compatible chat/completions API
- claude — Anthropic Claude Messages API
- gemini — Google Gemini generateContent API
The system handles all the underlying HTTP requests, error mapping, and session management automatically. Your provider just needs to specify which API flavour it speaks.
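For example, registering Anthropic directly might look like this. This is a sketch: the endpoint shown is Anthropic's public Messages API URL, but whether ollama-buddy expects the full path or just a base URL here is an assumption, so check against your setup.

```elisp
;; Hypothetical example: registering Claude through the generic system.
;; The endpoint is Anthropic's documented Messages API URL; the :name
;; and :model-prefix values are arbitrary choices.
(ollama-buddy-provider-create
 :name "Claude Direct"
 :api-type 'claude
 :endpoint "https://api.anthropic.com/v1/messages"
 :api-key "your-anthropic-key"
 :model-prefix "c:")
```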
Adding a local LM Studio instance:
```elisp
(ollama-buddy-provider-create
 :name "LM Studio"
 :api-type 'openai
 :endpoint "http://localhost:1234/v1/chat/completions"
 :models-endpoint "http://localhost:1234/v1/models"
 :api-key "not-needed" ; LM Studio often doesn't require auth
 :model-prefix "l:")
```
Connecting to OpenRouter (400+ models through one API):
```elisp
(ollama-buddy-provider-create
 :name "OpenRouter"
 :api-type 'openai
 :endpoint "https://openrouter.ai/api/v1/chat/completions"
 :models-endpoint "https://openrouter.ai/api/v1/models"
 :api-key "your-openrouter-key"
 :model-prefix "r:")
```
After registration, your new provider appears in the status line and becomes available through the standard model selection interface. The model-prefix (like l: for local or r: for OpenRouter) lets you quickly identify which provider a model belongs to.
The provider system leverages ollama-buddy’s shared infrastructure in ollama-buddy-remote.el, which extracts common functionality like request handling, error mapping, and response processing. This means your custom provider gets the same robust error handling as the built-in ones:
- Proper HTTP status code mapping (rate limits, timeouts, authentication errors)
- Async request support for non-blocking UI
- Automatic model listing and caching
- Integration with the existing session and conversation system
When you call ollama-buddy-provider-create, it registers your provider with the core system, making it available to all the usual entry points: the transient menu, model selection, and conversation buffers.
This approach is perfect for:
- Local inference servers (LM Studio, llama.cpp, vLLM, Ollama’s own OpenAI-compat layer)
- Company/internal AI gateways
- Quick experiments with new APIs
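As a final sketch, even Ollama itself can be registered this way through its OpenAI-compatibility layer. The paths below follow Ollama's documented /v1 endpoints on its default port 11434; the :name, :model-prefix, and :api-key values are placeholders of my choosing.

```elisp
;; Sketch: pointing the generic provider at Ollama's own
;; OpenAI-compatible layer. Local Ollama doesn't require auth,
;; so the :api-key value is just a placeholder.
(ollama-buddy-provider-create
 :name "Ollama (OpenAI compat)"
 :api-type 'openai
 :endpoint "http://localhost:11434/v1/chat/completions"
 :models-endpoint "http://localhost:11434/v1/models"
 :api-key "unused"
 :model-prefix "o:")
```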
The beauty of this system is that it makes ollama-buddy genuinely extensible without requiring deep knowledge of its internals. Want to add support for that new AI service that launched yesterday? You can probably do it in five lines of configuration rather than fifty.
Next up, and I think this will be the big one: adding tool support for these external providers!