# Context Mode
An MCP server that addresses the context-window problem in AI coding agents. It provides:
- Context Saving — sandbox tools keep raw data out of the context window
- Session Continuity — SQLite + FTS5 for event tracking across sessions
- Think in Code — program analysis instead of reading whole files
- Output Compression — a terse output format that cuts token usage by 65-75%
## Available Tools
| Tool | When to Use | Token Savings |
|---|---|---|
| `ctx_batch_execute` | Run multiple commands + auto-search the results | 90%+ vs raw exec |
| `ctx_execute` | Single script execution (JS/Python/Shell) | 90%+ vs raw exec |
| `ctx_execute_file` | Run code from a file, return only the result | High |
| `ctx_index` | Index docs/knowledge into a searchable FTS5 store | — |
| `ctx_search` | Search indexed content with BM25 ranking | Fast recall |
| `ctx_fetch_and_index` | Fetch a URL + index it into the knowledge base | 90%+ vs raw web fetch |
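Concretely, an MCP client reaches any of these through the spec's `tools/call` request. A minimal sketch in Python; the `query` and `limit` argument names are assumptions for illustration, not this server's documented schema:

```python
import json

# Hypothetical MCP "tools/call" request for ctx_search. The method name
# comes from the MCP spec; the argument names ("query", "limit") are
# assumptions, not documented parameters of this server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ctx_search",
        "arguments": {"query": "auth middleware", "limit": 5},
    },
}

# Serialized form, as it would go over the wire.
payload = json.dumps(request)
```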
## Decision Rules
Use `ctx_batch_execute` instead of multiple exec/read calls when:
- Analyzing multiple files at once
- Counting/grepping across many files
- Need command output + search results together
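A sketch of what one batch request might carry; the `commands` and `search` field names are illustrative assumptions, not this server's documented schema:

```python
# Hypothetical ctx_batch_execute arguments: run several commands, then
# return only output lines matching the search term. Field names
# ("commands", "search") are assumptions.
batch_args = {
    "commands": [
        "wc -l src/*.ts",        # size of each source file
        "grep -rn 'TODO' src/",  # open work items
    ],
    "search": "TODO",
}
```

One round trip replaces two exec calls plus a manual grep over their raw output.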
Use `ctx_execute` instead of reading files when:
- User asks "how many lines/funcs/classes in X"
- Need to compute something, not just read it
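For example, instead of pulling a whole file into context to answer "how many funcs/classes", pass `ctx_execute` a script that computes the summary in the sandbox. A sketch; the `summarize` helper and inline sample are illustrative, not part of the server:

```python
# The kind of script you'd hand to ctx_execute: compute counts in the
# sandbox so only a one-line summary enters the context window.
import ast

def summarize(source: str) -> str:
    tree = ast.parse(source)
    funcs = sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree))
    classes = sum(isinstance(n, ast.ClassDef) for n in ast.walk(tree))
    lines = len(source.splitlines())
    return f"{lines} lines, {funcs} funcs, {classes} classes"

# Inline sample standing in for a real file read from disk.
sample = "class A:\n    def f(self):\n        pass\n\ndef g():\n    pass\n"
print(summarize(sample))
```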
Use `ctx_fetch_and_index` instead of `web_fetch` when:
- Researching a topic across multiple pages
- Full raw content won't fit in context
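A sketch of the research flow this enables: index each page once, then query the shared knowledge base instead of re-reading raw pages. The URLs and the `url`, `tag`, and `query` argument names are assumptions:

```python
# Hypothetical research flow: fetch + index several pages, then search
# the knowledge base rather than holding raw page content in context.
# URLs and argument names ("url", "tag", "query") are assumptions.
pages = [
    "https://example.com/docs/part-1",
    "https://example.com/docs/part-2",
]
fetch_calls = [
    {"name": "ctx_fetch_and_index", "arguments": {"url": u, "tag": "docs"}}
    for u in pages
]
search_call = {
    "name": "ctx_search",
    "arguments": {"query": "rate limits", "tag": "docs"},
}
```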
## Output Format
Be terse. Drop filler, pleasantries, and hedging.
❌ "So I ran a command to check the files and found that there are..."
✅ "Checked. 3 TypeScript files: src/index.ts (142 lines), src/cli.ts (89 lines)."
## OpenClaw Integration
Install via MCP:

```shell
openclaw mcp set context-mode '{"command":"npx","args":["-y","context-mode"]}'
openclaw gateway restart
```