# OpenClaw Ollama Memory

Local semantic memory search for OpenClaw using Ollama — free, private, no API key required.
## Quick Start
### 1. Install & Run Ollama

Download from https://ollama.ai, then pull the embedding model:

```bash
ollama pull nomic-embed-text
```

Verify Ollama is running:

```bash
curl http://127.0.0.1:11434/api/tags
```
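Beyond `/api/tags`, you can confirm the embedding model itself responds by POSTing to Ollama's `/api/embeddings` endpoint. A minimal sketch, assuming the request/response shape from Ollama's API docs (`model` + `prompt` in, `embedding` out); the prompt text is arbitrary, and the script degrades gracefully if Ollama isn't running:

```python
import json
import urllib.request

# Request body for Ollama's /api/embeddings endpoint.
payload = {"model": "nomic-embed-text", "prompt": "hello world"}
print(json.dumps(payload))

try:
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/embeddings",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        # nomic-embed-text produces 768-dimensional vectors.
        dim = len(json.loads(resp.read())["embedding"])
        print(f"embedding dimension: {dim}")
except OSError:
    # Covers connection refused, timeouts, and other network errors.
    print("Ollama not reachable on 127.0.0.1:11434")
```

If this prints an embedding dimension, the model is pulled and serving; a 404 or connection error means the model name or base URL in your config won't work either.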
### 2. Configure OpenClaw

Edit `~/.openclaw/openclaw.json` and add under `agents.defaults.memorySearch`:

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "ollama",
        "model": "nomic-embed-text:latest",
        "remote": {
          "baseUrl": "http://127.0.0.1:11434"
        }
      }
    }
  }
}
```
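Before restarting, it can be worth checking that the edited file still parses and contains the expected keys. A small sketch that inlines the fragment above for illustration — in practice you would `json.load()` your actual `~/.openclaw/openclaw.json`:

```python
import json

# The memorySearch fragment from step 2, inlined for checking.
config = json.loads("""
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "ollama",
        "model": "nomic-embed-text:latest",
        "remote": { "baseUrl": "http://127.0.0.1:11434" }
      }
    }
  }
}
""")

ms = config["agents"]["defaults"]["memorySearch"]
assert ms["provider"] == "ollama"                  # must be explicit, not "openai"
assert not ms["remote"]["baseUrl"].endswith("/")   # no trailing slash
assert "/v1" not in ms["remote"]["baseUrl"]        # no /v1 suffix
print("memorySearch config looks valid")
```

A malformed file here fails loudly at parse time rather than silently after the gateway restart.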
### 3. Restart OpenClaw

```bash
openclaw gateway restart
```
## Key Points
- **Ollama must be explicit** — use `provider: "ollama"`, not `"openai"`. OpenClaw doesn't auto-detect Ollama.
- **No API key needed** — runs locally, fully offline.
- **`nomic-embed-text`** — 274 MB model; fast, accurate, free.
- **baseUrl format** — `http://127.0.0.1:11434` (no `/v1` suffix, no trailing slash).
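For context on why an embedding model is all this needs: semantic memory search ranks stored notes by vector similarity between the query's embedding and each memory's embedding. A toy illustration with hypothetical 4-dimensional vectors (real `nomic-embed-text` vectors are 768-dimensional; the memories and query here are invented):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings standing in for real model output.
memories = {
    "shopping list: milk, eggs": [0.9, 0.1, 0.0, 0.1],
    "debugging the gateway timeout": [0.1, 0.8, 0.4, 0.0],
    "birthday party planning": [0.7, 0.0, 0.1, 0.3],
}
query = [0.2, 0.9, 0.3, 0.0]  # embedding of "why is the gateway slow?"

# Rank memories by similarity to the query, best match first.
ranked = sorted(memories, key=lambda m: cosine(query, memories[m]), reverse=True)
print(ranked[0])  # → "debugging the gateway timeout"
```

The advanced options mentioned under Next Steps (hybrid search, MMR, temporal decay) are refinements layered on top of this same similarity ranking.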
## Next Steps
- See `references/config-reference.md` for advanced tuning (hybrid search, MMR, temporal decay, extra paths).
- See `references/troubleshooting.md` for common errors and fixes.