ollama-proxy

Ollama proxy — one endpoint that routes to multiple Ollama instances. Drop-in Ollama proxy replacement for localhost:11434. Same Ollama API, same model names, but the Ollama proxy routes requests to the best device. Auto-discovers Ollama nodes, scores on 7 signals, retries on failure. Works with Open WebUI, LangChain, Aider.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "ollama-proxy" with this command: npx skills add twinsgeeks/ollama-proxy

Ollama Proxy — One Endpoint for All Your Ollama Instances

You have Ollama running on multiple machines. Instead of hardcoding IPs and manually picking which Ollama instance to hit, point everything at the Ollama proxy. The Ollama proxy routes to the best available device automatically.

Before:  App → http://macmini:11434  (one Ollama instance, hope it's not busy)
After:   App → http://ollama-proxy:11435   (Ollama proxy picks the best machine)

Set up the Ollama proxy

pip install ollama-herd    # PyPI: https://pypi.org/project/ollama-herd/

On one machine (the Ollama proxy):

herd    # starts the Ollama proxy on port 11435

On every machine running Ollama:

herd-node    # discovers the Ollama proxy automatically on your network

Now point your apps at http://ollama-proxy:11435 instead of http://localhost:11434. Same Ollama API, same model names, same streaming — the Ollama proxy handles smarter routing.
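
Before switching your apps over, a quick way to confirm the proxy is up and aggregating models is to hit its /api/tags endpoint (covered below). A minimal Python check, assuming the proxy host resolves as ollama-proxy and the standard Ollama tags response shape:

import json
import urllib.request

# Ask the proxy for the fleet-wide model list and print the model names.
# Swap "ollama-proxy" for your actual hostname or IP if it doesn't resolve.
with urllib.request.urlopen("http://ollama-proxy:11435/api/tags", timeout=5) as resp:
    tags = json.load(resp)

print([m["name"] for m in tags.get("models", [])])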

Drop-in Ollama proxy replacement

Every Ollama API endpoint works through the Ollama proxy:

# Chat via Ollama proxy (same as direct Ollama)
curl http://ollama-proxy:11435/api/chat -d '{
  "model": "llama3.3:70b",
  "messages": [{"role": "user", "content": "Hello via Ollama proxy"}]
}'

# Generate via Ollama proxy (same as direct Ollama)
curl http://ollama-proxy:11435/api/generate -d '{
  "model": "qwen3:32b",
  "prompt": "Explain quantum computing via Ollama proxy"
}'

# List models via Ollama proxy (aggregated from all Ollama nodes)
curl http://ollama-proxy:11435/api/tags

# List loaded models via Ollama proxy (across all Ollama nodes)
curl http://ollama-proxy:11435/api/ps

# Pull a model via Ollama proxy (auto-selects best node)
curl -N http://ollama-proxy:11435/api/pull -d '{"name": "codestral"}'
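
The official ollama Python client works the same way, since only the host changes. A minimal sketch, assuming the ollama package is installed (pip install ollama):

from ollama import Client

# Same client you would point at a single Ollama instance, just aimed at the proxy.
client = Client(host="http://ollama-proxy:11435")

response = client.chat(
    model="llama3.3:70b",
    messages=[{"role": "user", "content": "Hello via the Ollama proxy"}],
)
print(response["message"]["content"])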

OpenAI-compatible Ollama proxy API

The Ollama proxy also exposes an OpenAI-compatible endpoint — same models, no code changes:

from openai import OpenAI

# Point at the Ollama proxy instead of direct Ollama
ollama_proxy_client = OpenAI(base_url="http://ollama-proxy:11435/v1", api_key="not-needed")
ollama_proxy_response = ollama_proxy_client.chat.completions.create(
    model="llama3.3:70b",
    messages=[{"role": "user", "content": "Hello via Ollama proxy"}],
    stream=True,
)
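
Continuing that example, stream=True returns the standard OpenAI streaming iterator, so consuming it is unchanged:

# Print tokens as they arrive; chunks follow the usual OpenAI delta format.
for chunk in ollama_proxy_response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)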

What the Ollama proxy does that direct Ollama doesn't

Feature           | Direct Ollama        | Ollama Proxy (Herd)
Multiple machines | Manual IP switching  | Routes automatically
Load balancing    | None                 | Scores nodes on 7 signals
Failover          | None                 | Auto-retries on the next node
Model discovery   | Per-machine          | Aggregated fleet-wide
Queue management  | None                 | Per-node:model queues
Dashboard         | None                 | Real-time web UI
Health checks     | None                 | 15 automated checks
Request tracing   | None                 | Logs to a SQLite trace store
Image generation  | None                 | Routes mflux + DiffusionKit
Speech-to-text    | None                 | Routes Qwen3-ASR

Ollama proxy works with your existing tools

Just change the Ollama URL to the Ollama proxy — no other configuration needed:

Tool         | Before (direct Ollama)                       | After (Ollama proxy)
Open WebUI   | http://localhost:11434                       | http://ollama-proxy:11435
Aider        | --openai-api-base http://localhost:11434/v1  | --openai-api-base http://ollama-proxy:11435/v1
Continue.dev | Ollama at localhost                          | Ollama proxy at ollama-proxy:11435
LangChain    | Ollama(base_url="http://localhost:11434")    | Ollama(base_url="http://ollama-proxy:11435")
LiteLLM      | ollama/llama3.3:70b                          | ollama/llama3.3:70b (pointed at the Ollama proxy)
CrewAI       | OPENAI_API_BASE=http://localhost:11434/v1    | OPENAI_API_BASE=http://ollama-proxy:11435/v1
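
As a concrete instance of the LangChain row above, a minimal sketch assuming the langchain-community package and its Ollama integration are installed; only base_url changes:

from langchain_community.llms import Ollama

# Identical to a direct-Ollama setup except for base_url.
llm = Ollama(model="llama3.3:70b", base_url="http://ollama-proxy:11435")
print(llm.invoke("One sentence on why a single endpoint helps a multi-machine fleet."))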

How the Ollama proxy routes requests

When a request arrives at the Ollama proxy, it scores all Ollama nodes that have the requested model:

  1. Thermal state — is the model already loaded in the Ollama instance (hot)?
  2. Memory fit — does the Ollama node have enough free RAM?
  3. Queue depth — is the Ollama node busy with other requests?
  4. Latency history — how fast has this Ollama node been recently?
  5. Role affinity — the Ollama proxy sends big models to big machines
  6. Availability trend — is this Ollama node reliably available?
  7. Context fit — does the loaded context window match the request?

The highest-scoring Ollama node wins. If it fails, the Ollama proxy retries on the next best node automatically.
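
As a rough illustration of how seven signals can collapse into one score, here is a sketch. The weights, normalization, and sample values are hypothetical and not taken from Herd's implementation; it only shows the shape of the idea: score every candidate node that has the model, pick the highest.

# Hypothetical weighted scoring over the seven signals listed above.
# Each signal is assumed to be pre-normalized to [0, 1], where higher is better
# (e.g. a shallow queue or a hot model scores near 1.0). Weights are made up.
WEIGHTS = {
    "thermal_state": 0.25,       # model already loaded (hot)
    "memory_fit": 0.20,          # enough free RAM for the model
    "queue_depth": 0.15,         # fewer queued requests
    "latency_history": 0.15,     # fast recent responses
    "role_affinity": 0.10,       # big models on big machines
    "availability_trend": 0.10,  # reliably reachable
    "context_fit": 0.05,         # loaded context window matches the request
}

def score_node(signals: dict) -> float:
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# Two made-up candidate nodes that both have the requested model.
candidates = {
    "macmini": {"thermal_state": 1.0, "memory_fit": 0.6, "queue_depth": 0.4,
                "latency_history": 0.8, "role_affinity": 0.5,
                "availability_trend": 0.9, "context_fit": 1.0},
    "studio":  {"thermal_state": 0.0, "memory_fit": 1.0, "queue_depth": 1.0,
                "latency_history": 0.9, "role_affinity": 1.0,
                "availability_trend": 0.9, "context_fit": 1.0},
}

best = max(candidates, key=lambda name: score_node(candidates[name]))
print(best, score_node(candidates[best]))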

Monitor your Ollama proxy fleet

Ollama proxy dashboard at http://ollama-proxy:11435/dashboard — see every Ollama node, every model, every queue in real time.

# Ollama proxy fleet overview
curl -s http://ollama-proxy:11435/fleet/status | python3 -m json.tool

# Ollama proxy health checks
curl -s http://ollama-proxy:11435/dashboard/api/health | python3 -m json.tool
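
For scripted monitoring instead of the dashboard, the same fleet endpoint can be polled from Python. This sketch just prints whatever JSON the proxy returns, since the exact response schema isn't documented on this page:

import json
import time
import urllib.request

# Poll the fleet overview every 30 seconds and dump the raw JSON.
while True:
    with urllib.request.urlopen("http://ollama-proxy:11435/fleet/status", timeout=5) as resp:
        print(json.dumps(json.load(resp), indent=2))
    time.sleep(30)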

Full documentation

Contribute

Ollama Herd (the Ollama proxy) is open source (MIT). We welcome contributions:

  • Star on GitHub — help others find the Ollama proxy
  • Open an issue — bug reports, feature requests
  • PRs welcome — CLAUDE.md gives AI agents full Ollama proxy context. 444 tests, async Python.

Guardrails

  • No automatic model downloads — the Ollama proxy requires explicit user confirmation for model pulls.
  • Model deletion requires explicit user confirmation via the Ollama proxy.
  • All Ollama proxy requests stay local — no data leaves your network.
  • Never delete or modify files in ~/.fleet-manager/.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

Ollama Load Balancer

Ollama load balancer for Llama, Qwen, DeepSeek, and Mistral inference across multiple machines. Load balancing with auto-discovery via mDNS, health checks, q...

Coding

Local Coding

Local coding assistant — run DeepSeek-Coder, Codestral, StarCoder, and Qwen-Coder across your device fleet. Code generation, review, refactoring, and debuggi...

Web3

Local Researcher

Fully local deep-research assistant skill. Uses Ollama or LMStudio local LLMs for iterative web research and generates Markdown reports with cited sources. Triggers when the user needs privacy-first research, local document analysis, or structured research reports.

General

Ollama Ollama Herd

Ollama Ollama Herd — multimodal Ollama model router that herds your Ollama LLMs into one smart Ollama endpoint. Route Ollama Llama, Qwen, DeepSeek, Phi, Mist...
