ollama

Ollama fleet router — herd your Ollama LLMs into one smart endpoint. Route Llama, Qwen, DeepSeek, Phi, Mistral, and Gemma across multiple devices with 7-signal scoring, auto-retry, VRAM-aware fallback, and context protection. Plus image generation, speech-to-text, and embeddings. Drop-in OpenAI SDK compatible.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn the skill:

Install skill "ollama" with this command: npx skills add twinsgeeks/ollama-fleet-router

Ollama — Herd Your LLMs Into One Endpoint

You have Ollama running on multiple machines. This skill gives you one endpoint that routes every request to the best available device automatically. No more hardcoding IPs, no more manual load balancing, no more "which machine has that model loaded?"

Setup

pip install ollama-herd
herd              # start the router on port 11435
herd-node         # run on each machine with Ollama

Now point everything at http://localhost:11435 instead of http://localhost:11434. Same Ollama API, same models, smarter routing.
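
If you use the official Ollama Python client, only the host changes. A minimal sketch (the model name here is just an example):

from ollama import Client

# Point the standard Ollama client at the router instead of a single node.
client = Client(host="http://localhost:11435")

# The router picks the best machine in the fleet for each request.
response = client.chat(
    model="llama3.3:70b",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["message"]["content"])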

Package: ollama-herd | Repo: github.com/geeks-accelerator/ollama-herd

Use your Ollama models through the fleet

OpenAI SDK (drop-in)

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11435/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="llama3.3:70b",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Ollama API (same as before, different port)

# Chat
curl http://localhost:11435/api/chat -d '{
  "model": "qwen3:235b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'

# List all models across all machines
curl http://localhost:11435/api/tags

# Models currently in GPU memory
curl http://localhost:11435/api/ps

# Embeddings
curl http://localhost:11435/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "search query"
}'

What the router does

When a request comes in, the router scores every online node on 7 signals:

  1. Thermal — is the model already loaded in GPU memory? (+50 for hot)
  2. Memory fit — how much headroom does the node have?
  3. Queue depth — how many requests are waiting?
  4. Wait time — estimated latency based on history
  5. Role affinity — large models prefer big machines
  6. Availability — is the node reliably available?
  7. Context fit — does the loaded context window fit the request?

The highest-scoring node handles the request. If it fails, the router retries on the next best node automatically.
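
As an illustration of the idea only (not the router's actual code; the weights and field names below are invented for the sketch):

from dataclasses import dataclass, field

@dataclass
class Node:
    loaded_models: set
    free_vram_fraction: float            # 0.0-1.0 memory headroom
    queue_depth: int
    estimated_wait_s: float
    is_big_machine: bool
    availability: float                  # 0.0-1.0 historical success rate
    loaded_ctx: dict = field(default_factory=dict)  # model -> loaded num_ctx

def score(node: Node, model: str, num_ctx: int, is_large: bool) -> float:
    s = 0.0
    if model in node.loaded_models:      # 1. thermal: +50 for a hot model
        s += 50
    s += node.free_vram_fraction * 10    # 2. memory fit
    s -= node.queue_depth * 5            # 3. queue depth
    s -= node.estimated_wait_s           # 4. estimated wait time
    if is_large and node.is_big_machine: # 5. role affinity
        s += 10
    s += node.availability * 10          # 6. availability
    if node.loaded_ctx.get(model, 0) >= num_ctx:
        s += 10                          # 7. context fit
    return s

def candidates(nodes, model, num_ctx, is_large):
    # Highest score first; on failure, fall through to the next best node.
    return sorted(nodes, key=lambda n: score(n, model, num_ctx, is_large), reverse=True)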

Supported Ollama models

Any model that runs on Ollama works through the fleet. Popular ones:

Model             Sizes               Best for
llama3.3          8B, 70B             General purpose
qwen3             0.6B–235B           Multilingual, reasoning
qwen3.5           0.8B–397B           Latest generation
deepseek-v3       671B (37B active)   Matches GPT-4o
deepseek-r1       1.5B–671B           Reasoning (like o3)
phi4              14B                 Small, fast, capable
mistral           7B                  Fast, European languages
gemma3            1B–27B              Google's open model
codestral         22B                 Code generation
qwen3-coder       30B (3.3B active)   Agentic coding
nomic-embed-text  137M                Embeddings for RAG

Resilience features

  • Auto-retry — re-routes to the next best node on failure (before the first chunk is sent; sketched after this list)
  • VRAM-aware fallback — routes to a loaded model in the same category instead of cold-loading
  • Context protection — prevents num_ctx from triggering expensive model reloads
  • Zombie reaper — cleans up stuck in-flight requests
  • Auto-pull — downloads missing models to the best node automatically
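
The "before first chunk" caveat on auto-retry matters for streaming: once any tokens have reached the client, a request can no longer be transparently replayed. A minimal sketch of that pattern (hypothetical node interface, not the router's implementation):

def stream_with_retry(candidates, request):
    # Retry on the next best node, but only before any output is sent.
    last_error = None
    for node in candidates:
        try:
            stream = node.open_stream(request)  # may fail on connect
            first = next(stream)                # failure here is still retryable
        except Exception as exc:
            last_error = exc
            continue                            # re-route to the next best node
        yield first
        yield from stream                       # after this point, no retry
        return
    raise RuntimeError("all nodes failed") from last_error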

Also available

The same fleet router handles three more workloads:

Image generation

curl -o image.png http://localhost:11435/api/generate-image \
  -H "Content-Type: application/json" \
  -d '{"model":"z-image-turbo","prompt":"a sunset","width":1024,"height":1024,"steps":4}'

Enable: curl -X POST .../dashboard/api/settings -d '{"image_generation":true}'

Speech-to-text

curl http://localhost:11435/api/transcribe -F "audio=@recording.wav"

Enable: curl -X POST .../dashboard/api/settings -d '{"transcription":true}'

Embeddings

curl http://localhost:11435/api/embeddings -d '{"model":"nomic-embed-text","prompt":"text"}'

Already enabled — routes through Ollama automatically.

Dashboard

http://localhost:11435/dashboard — 8 tabs: Fleet Overview, Trends, Model Insights, Apps, Benchmarks, Health, Recommendations, Settings. Real-time queue visibility with [TEXT], [IMAGE], [STT], [EMBED] badges.

Request tagging

Track per-project usage:

response = client.chat.completions.create(
    model="llama3.3:70b",
    messages=messages,
    extra_body={"metadata": {"tags": ["my-project", "reasoning"]}},
)
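
The extra_body argument simply merges the metadata field into the JSON request body, so the same tags can be sent from any HTTP client. A sketch, assuming the router reads metadata.tags from the body as the SDK example above implies:

import requests

resp = requests.post(
    "http://localhost:11435/v1/chat/completions",
    json={
        "model": "llama3.3:70b",
        "messages": [{"role": "user", "content": "Hello"}],
        "metadata": {"tags": ["my-project", "reasoning"]},
    },
)
print(resp.json()["choices"][0]["message"]["content"])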

Full documentation

Agent Setup Guide

Guardrails

  • Never restart the router or node agents without user confirmation.
  • Never delete or modify files in ~/.fleet-manager/.
  • Never pull or delete models without user confirmation.
