context-compactor

Token-based context compaction for local models (MLX, llama.cpp, Ollama) that don't report context limits.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "context-compactor" with this command: npx skills add emberdesire/context-compactor

Context Compactor

Automatic context compaction for OpenClaw when using local models that don't properly report token limits or context overflow errors.

The Problem

Cloud APIs (Anthropic, OpenAI) report context overflow errors, allowing OpenClaw's built-in compaction to trigger. Local models (MLX, llama.cpp, Ollama) often:

  • Silently truncate context
  • Return garbage when context is exceeded
  • Don't report accurate token counts

This leaves you with broken conversations when context gets too long.

The Solution

Context Compactor estimates tokens client-side and proactively summarizes older messages before hitting the model's limit.
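Client-side estimation can be as simple as dividing the total character count by the configured `charsPerToken` ratio. A minimal sketch (the `Message` shape here is an assumption, not the plugin's actual type):

```typescript
// Hypothetical message shape; the plugin's real types may differ.
interface Message {
  role: string;
  content: string;
}

// Estimate tokens from character count using a configurable ratio.
function estimateTokens(messages: Message[], charsPerToken = 4): number {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / charsPerToken);
}
```

This is deliberately rough: it overestimates for some tokenizers and underestimates for others, which is why `charsPerToken` is tunable.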

How It Works

┌─────────────────────────────────────────────────────────────┐
│  1. Message arrives                                         │
│  2. before_agent_start hook fires                           │
│  3. Plugin estimates total context tokens                   │
│  4. If over maxTokens:                                      │
│     a. Split into "old" and "recent" messages              │
│     b. Summarize old messages (LLM or fallback)            │
│     c. Inject summary as compacted context                 │
│  5. Agent sees: summary + recent + new message             │
└─────────────────────────────────────────────────────────────┘
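The split in step 4a can be sketched by walking backwards from the newest message until the `keepRecentTokens` budget is spent (a sketch under the same chars-per-token heuristic; the helper names are hypothetical):

```typescript
interface Message { role: string; content: string; }
interface Config { maxTokens: number; keepRecentTokens: number; charsPerToken: number; }

// Estimate a message list's size with the chars/token heuristic.
function estimate(msgs: Message[], cfg: Config): number {
  return Math.ceil(msgs.reduce((n, m) => n + m.content.length, 0) / cfg.charsPerToken);
}

// Walk backwards from the newest message, keeping messages until the
// keepRecentTokens budget is exhausted; everything older is "old".
function splitMessages(msgs: Message[], cfg: Config): { old: Message[]; recent: Message[] } {
  let budget = cfg.keepRecentTokens;
  let i = msgs.length;
  while (i > 0) {
    const cost = estimate([msgs[i - 1]], cfg);
    if (cost > budget) break;
    budget -= cost;
    i--;
  }
  return { old: msgs.slice(0, i), recent: msgs.slice(i) };
}
```

The "old" slice is what gets summarized; the "recent" slice passes through untouched.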

Installation

# One command setup (recommended)
npx jasper-context-compactor setup

# Restart gateway
openclaw gateway restart

The setup command automatically:

  • Copies plugin files to ~/.openclaw/extensions/context-compactor/
  • Adds plugin config to openclaw.json with sensible defaults

Configuration

Add to openclaw.json:

{
  "plugins": {
    "entries": {
      "context-compactor": {
        "enabled": true,
        "config": {
          "maxTokens": 8000,
          "keepRecentTokens": 2000,
          "summaryMaxTokens": 1000,
          "charsPerToken": 4
        }
      }
    }
  }
}

Options

| Option | Default | Description |
|--------|---------|-------------|
| `enabled` | `true` | Enable/disable the plugin |
| `maxTokens` | `8000` | Max context tokens before compaction |
| `keepRecentTokens` | `2000` | Tokens to preserve from recent messages |
| `summaryMaxTokens` | `1000` | Max tokens for the summary |
| `charsPerToken` | `4` | Token estimation ratio |
| `summaryModel` | (session model) | Model to use for summarization |

Tuning for Your Model

MLX (8K context models):

{
  "maxTokens": 6000,
  "keepRecentTokens": 1500,
  "charsPerToken": 4
}

Larger context (32K models):

{
  "maxTokens": 28000,
  "keepRecentTokens": 4000,
  "charsPerToken": 4
}

Small context (4K models):

{
  "maxTokens": 3000,
  "keepRecentTokens": 800,
  "charsPerToken": 4
}

Commands

/compact-now

Force-clears the summary cache and triggers a fresh compaction on the next message.

/compact-now

/context-stats

Show current context token usage and whether compaction would trigger.

/context-stats

Output:

📊 Context Stats

Messages: 47 total
- User: 23
- Assistant: 24
- System: 0

Estimated Tokens: ~6,234
Limit: 8,000
Usage: 77.9%

✅ Within limits
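The numbers in that report can be derived entirely client-side. A sketch of how they might be computed (`contextStats` is a hypothetical helper, not the plugin's API):

```typescript
interface Message { role: "user" | "assistant" | "system"; content: string; }

// Tally per-role message counts and estimate token usage vs. the limit.
function contextStats(messages: Message[], maxTokens: number, charsPerToken = 4) {
  const byRole = { user: 0, assistant: 0, system: 0 };
  let chars = 0;
  for (const m of messages) {
    byRole[m.role]++;
    chars += m.content.length;
  }
  const tokens = Math.ceil(chars / charsPerToken);
  return {
    total: messages.length,
    byRole,
    tokens,
    usagePct: (100 * tokens) / maxTokens,
    wouldCompact: tokens > maxTokens,
  };
}
```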

How Summarization Works

When compaction triggers:

  1. Split messages into "old" (to summarize) and "recent" (to keep)
  2. Generate summary using the session model (or configured summaryModel)
  3. Cache the summary to avoid regenerating for the same content
  4. Inject context with the summary prepended
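Step 3 (caching) can be sketched with a content hash as the cache key, so an unchanged prefix of the conversation is never re-summarized (the `summarize` signature here is a hypothetical sketch, not the plugin's API):

```typescript
import { createHash } from "node:crypto";

const summaryCache = new Map<string, string>();

// Key the cache on a hash of the old messages' content.
function cacheKey(texts: string[]): string {
  return createHash("sha256").update(texts.join("\u0000")).digest("hex");
}

// Return a cached summary when the same content was seen before;
// otherwise ask the LLM once and remember the result.
async function summarize(texts: string[], llm: (prompt: string) => Promise<string>): Promise<string> {
  const key = cacheKey(texts);
  const cached = summaryCache.get(key);
  if (cached !== undefined) return cached;
  const summary = await llm("Summarize this conversation:\n" + texts.join("\n"));
  summaryCache.set(key, summary);
  return summary;
}
```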

If the LLM runtime isn't available (e.g., during startup), a fallback truncation-based summary is used.
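A truncation fallback like the one described might look like this (a sketch; the real plugin's behavior may differ):

```typescript
// When no LLM is available, keep a leading excerpt of the old
// messages, capped at summaryMaxTokens worth of characters.
function fallbackSummary(texts: string[], summaryMaxTokens = 1000, charsPerToken = 4): string {
  const maxChars = summaryMaxTokens * charsPerToken;
  const joined = texts.join("\n");
  return joined.length <= maxChars ? joined : joined.slice(0, maxChars) + " …[truncated]";
}
```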

Differences from Built-in Compaction

| Feature | Built-in | Context Compactor |
|---------|----------|-------------------|
| Trigger | Model reports overflow | Token estimate threshold |
| Works with local models | ❌ (needs overflow error) | ✅ |
| Persists to transcript | ✅ | ❌ (session-only) |
| Summarization | Pi runtime | Plugin LLM call |

Context Compactor is complementary — it catches cases before they hit the model's hard limit.

Troubleshooting

Summary quality is poor:

  • Try a better summaryModel
  • Increase summaryMaxTokens
  • Remember that the lower-quality fallback truncation is used if the LLM runtime isn't available

Compaction triggers too often:

  • Increase maxTokens
  • Decrease keepRecentTokens (keeps fewer recent tokens, so each compaction frees more room before the next trigger)

Not compacting when expected:

  • Check /context-stats to see current usage
  • Verify enabled: true in config
  • Check logs for [context-compactor] messages

Characters per token wrong:

  • Default of 4 works for English
  • Try 3 for CJK languages
  • Try 5 for highly technical content
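Since the estimate is just ceil(chars / charsPerToken), lowering the ratio makes the estimate more conservative, so compaction triggers sooner. For example:

```typescript
// Same 12,000-character context under three ratios.
function estimateTokens(text: string, charsPerToken: number): number {
  return Math.ceil(text.length / charsPerToken);
}

const text = "x".repeat(12000);
const estimates = [3, 4, 5].map((ratio) => estimateTokens(text, ratio));
// ratio 3 → 4000 tokens, ratio 4 → 3000, ratio 5 → 2400
```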

Logs

Enable debug logging:

{
  "plugins": {
    "entries": {
      "context-compactor": {
        "config": {
          "logLevel": "debug"
        }
      }
    }
  }
}

Look for:

  • [context-compactor] Current context: ~XXXX tokens
  • [context-compactor] Compacted X messages → summary
