context-engineer

Context window optimizer — analyze, audit, and optimize your agent's context utilization. Know exactly where your tokens go before they're sent.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install skill "context-engineer" with this command: npx skills add tkuehnl/context-engineer

When to use this skill

Use this skill when the user wants to:

  • Understand where their context window tokens are going
  • Analyze workspace files (SKILL.md, SOUL.md, MEMORY.md, etc.) for bloat
  • Audit tool definitions for redundancy and overhead
  • Get a comprehensive context efficiency report
  • Compare before/after snapshots to measure optimization progress
  • Optimize system prompts for token efficiency

Commands

# Analyze workspace context files — token counts, efficiency scores, recommendations
python3 skills/context-engineer/context.py analyze --workspace ~/.openclaw/workspace

# Analyze with a custom budget and save a snapshot for later comparison
python3 skills/context-engineer/context.py analyze --workspace ~/.openclaw/workspace --budget 128000 --snapshot before.json

# Audit tool definitions for overhead and overlap
python3 skills/context-engineer/context.py audit-tools --config ~/.openclaw/openclaw.json

# Generate a comprehensive context engineering report
python3 skills/context-engineer/context.py report --workspace ~/.openclaw/workspace --format terminal

# Compare two snapshots to see projected token savings
python3 skills/context-engineer/context.py compare --before before.json --after after.json
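
To make the compare step concrete, here is a minimal sketch of diffing two snapshots. The field names and numbers are hypothetical; the real snapshot schema is whatever context.py writes.

```python
# Illustrative diff of two snapshot dicts; keys and values here are
# assumptions, not the actual snapshot format produced by context.py.
before = {"system_prompt": 3200, "memory_files": 4100}
after = {"system_prompt": 2400, "memory_files": 2900}

for key in before:
    saved = before[key] - after[key]
    print(f"{key}: {before[key]} -> {after[key]} ({saved:+d} saved)")

print("total projected savings:", sum(before.values()) - sum(after.values()))
```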

What It Analyzes

  • System prompt efficiency — Length, redundancy detection, compression potential
  • Tool definition overhead — Tool count, per-tool token cost, and detection of unused or overlapping tools
  • Memory file bloat — MEMORY.md size, stale entries, optimization suggestions
  • Skill overhead — Installed skills contributing to context, per-skill token cost
  • Context budget — Percentage of the model's context window consumed by static content vs. what remains available for conversation
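
The budget arithmetic in the last bullet is simple to sketch. The category names and token counts below are hypothetical examples, not output from the tool:

```python
# Hypothetical context-budget breakdown: static content (system prompt,
# tools, memory, skills) versus room left for conversation.
BUDGET = 200_000  # matches the default --budget value

static_tokens = {
    "system_prompt": 3_200,    # all counts here are made-up examples
    "tool_definitions": 8_500,
    "memory_files": 4_100,
    "skills": 6_200,
}

used = sum(static_tokens.values())
available = BUDGET - used
print(f"static content: {used} tokens ({used / BUDGET:.1%} of budget)")
print(f"available for conversation: {available} tokens ({available / BUDGET:.1%})")
```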

Options

  • --workspace PATH — Path to workspace directory (default: ~/.openclaw/workspace)
  • --config PATH — Path to OpenClaw config file (default: ~/.openclaw/openclaw.json)
  • --budget N — Context window token budget (default: 200000)
  • --snapshot FILE — Save analysis snapshot to FILE for later comparison
  • --format FORMAT — Output format (currently only terminal is supported)

Notes

  • Token estimates are approximate (~4 characters per token). For precise counts, use a model-specific tokenizer.
  • No external dependencies required — runs with Python 3 stdlib only.
  • Built by Anvil AI — context engineering experts. https://anvil-ai.io
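
The ~4 characters-per-token heuristic from the first note can be written as a tiny helper; `estimate_tokens` is illustrative, not part of the skill's API:

```python
# Rough token estimate using the ~4 characters/token heuristic noted above.
# estimate_tokens is a hypothetical helper, not the skill's actual function.
def estimate_tokens(text: str) -> int:
    if not text:
        return 0
    return max(1, len(text) // 4)  # never report zero for non-empty text

print(estimate_tokens("Use this skill to audit context usage."))
```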

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

SealVera

Tamper-evident audit trail for AI agent decisions. Use when logging LLM decisions, setting up AI compliance, auditing agents for EU AI Act, HIPAA, GDPR or SO...

3-Layer Token Compressor — Cut AI API Costs 40-60%

Pre-process prompts through 3 compression layers before sending to paid APIs. Uses a local Ollama model to intelligently compress messages and summarize hist...

Anyway Traces

Adds observability and tracing to AI/LLM applications using the Anyway SDK for monitoring calls to providers like OpenAI and Anthropic.

jabrium

Connect your OpenClaw agent to Jabrium — a discussion platform where AI agents get their own thread, earn LLM compute tokens through citations, and participa...
