token-saver

Five-phase token audit framework for OpenClaw: Discover → Prioritize (3D matrix) → Optimize (8 category techniques) → Validate → Monitor. Universal; adapt via appendix. Trigger: "省点 token", "token 优化", "token saver", "token audit", "检查 token 消耗"

Install: `npx skills add youxiyin/tsaver`

Token Saver

Universal token audit & optimization framework for OpenClaw agents. Based on real-world practice (2026-05-04).

Core Principles

  1. Tier your model usage — Simple tasks use cheap models; complex reasoning uses expensive ones. Don't mix the two.
  2. Prompts say what, not why — Background rationale and philosophy are noise to an agent. Strip them.
  3. Batch > Serial — One call that returns 10 results costs far less than three calls returning 3+3+4: every extra call pays the full context overhead again. Combine.
  4. Context = Cost — Every file loaded at session start, every tool schema registered, every past message injected — all have a token price.
  5. Idle = Zero burn — Nighttime, weekends, and idle periods should run nothing. Configure active hours.

Output

After each full execution, write a report (token-audit-report-YYYY-MM-DD.md) containing: before/after comparison table, estimated weekly savings per change, items deferred and why, recommended next step.


Phase 1: DISCOVER — Map the Full Token Landscape

1A Enumerate All Automated Tasks

Read your cron/scheduled task configuration (e.g. ~/.openclaw/cron/jobs.json).

For each task record:

  • name
  • model (or "default" if unset)
  • message / prompt length in chars
  • schedule frequency (daily / weekly / other)
  • delivery.mode (announce / none)
  • sessionTarget (isolated / main)
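
As a sketch, the 1A inventory can be scripted. This assumes jobs.json holds a list of job objects shaped like the 5A example later in this skill (`payload.model`, `payload.message`, `schedule.expr`, `delivery.mode`, `sessionTarget`); adjust the field paths to your deployment:

```python
import json
from pathlib import Path

def inventory(jobs: list[dict]) -> list[dict]:
    """Reduce each cron job to the Phase 1A record fields."""
    rows = []
    for job in jobs:
        payload = job.get("payload", {})
        rows.append({
            "name": job.get("name", "?"),
            "model": payload.get("model", "default"),
            "prompt_chars": len(payload.get("message", "")),
            "schedule": job.get("schedule", {}).get("expr", "?"),
            "delivery": job.get("delivery", {}).get("mode", "announce"),
            "session": job.get("sessionTarget", "main"),
        })
    return rows

# Real usage would load the file, e.g.:
# jobs = json.loads(Path("~/.openclaw/cron/jobs.json").expanduser().read_text())
jobs = [{
    "name": "token-watch-weekly",
    "schedule": {"kind": "cron", "expr": "0 10 * * 1"},
    "payload": {"kind": "agentTurn", "model": "minimax-m2.7", "message": "Check lengths."},
    "sessionTarget": "isolated",
    "delivery": {"mode": "none"},
}]
for row in inventory(jobs):
    print(row)
```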

1B Analyze Agent Configuration

Inspect your gateway config (e.g. openclaw.json):

  • agents.defaults.heartbeat.* — interval, active hours, isolated session, light context flag
  • agents.defaults.compaction.mode — message retention aggressiveness
  • agents.list[].tools.profile — full, coding, or custom
  • agents.list[].model — per-agent model override

1C Measure Context Load

List every file that is injected at session start (typically files in the workspace root directory). Measure each in chars and estimate token cost (~3 chars per token for CJK-heavy text, ~4 for English-heavy).

If LCM (Lossless Context Management) is active, note the number and average size of compacted summary blocks injected per turn.

If tool schemas are accessible, estimate total schema chars: (count of registered tools × average schema size in chars).
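
A minimal estimator for the char→token heuristic above. The CJK detection here is a rough assumption (it only counts the main CJK Unified Ideographs block):

```python
def estimate_tokens(text: str) -> int:
    """~3 chars/token for CJK-heavy text, ~4 for English-heavy (Phase 1C heuristic)."""
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    chars_per_token = 3 if cjk > len(text) / 2 else 4
    return max(1, len(text) // chars_per_token)

# Apply to every bootstrap file loaded at session start, e.g.:
# for path in workspace.glob("*.md"):
#     print(path.name, estimate_tokens(path.read_text()))
print(estimate_tokens("A" * 6800))  # English-heavy: 6800 // 4
```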

1D Map Models to Tiers

Categorize all available models into three tiers based on capability and cost:

  • 🏆 Premium (strong reasoning, high cost): e.g. deepseek-v4-pro, gpt-5.x
  • 🟡 Standard (balanced): e.g. deepseek-v4-flash, minimax-m2.7
  • 🟢 Economy (lightweight): e.g. minimax-m2.7-highspeed, ollama local

Map each task from 1A to its current model tier.

⚠️ Checkpoint: Before moving to Phase 2, present your Phase 1 findings (task inventory, file sizes, model tier map) to the user. Confirm that the inventory is complete and the measurements are correct. This prevents optimizing the wrong things.


Phase 2: PRIORITIZE — Build Your Decision Matrix

Score each finding from Phase 1 along three independent dimensions:

| Dimension | Scale | Assessment |
|---|---|---|
| Token Impact 🎯 | High / Med / Low | Tokens per occurrence × occurrences per period |
| Risk ⚠️ | Safe / Moderate / High | Can you undo it? Does it affect core function? |
| Effort 🔧 | Easy / Med / Hard | Single config change? Multi-file edit? Needs research? |

How to Score

Compute a relative priority for each finding by inverting Risk and Effort:

Priority = ImpactWeight × (1 / RiskWeight) × (1 / EffortWeight)

Where each dimension maps to a simple numeric weight:

  • Impact: High=3, Med=2, Low=1
  • Risk: Safe=1, Moderate=2, High=3
  • Effort: Easy=1, Med=2, Hard=3

Focus on items scoring ≥ 1.5 first. Skip items < 1.0 unless they are trivially easy (effort=1) and safe (risk=1).
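
The scoring rule above as runnable code, with the weights exactly as listed (the sample findings are illustrative):

```python
IMPACT = {"High": 3, "Med": 2, "Low": 1}
RISK = {"Safe": 1, "Moderate": 2, "High": 3}
EFFORT = {"Easy": 1, "Med": 2, "Hard": 3}

def priority(impact: str, risk: str, effort: str) -> float:
    """Priority = ImpactWeight × (1/RiskWeight) × (1/EffortWeight)."""
    return IMPACT[impact] / (RISK[risk] * EFFORT[effort])

findings = [  # (name, impact, risk, effort) — illustrative entries only
    ("Verbose task prompts", "High", "Safe", "Easy"),             # 3.00 → do first
    ("Full profile on task agents", "High", "Moderate", "Easy"),  # 1.50
    ("Custom tool profile", "Med", "High", "Hard"),               # 0.22 → skip for now
]
for name, i, r, e in sorted(findings, key=lambda f: -priority(*f[1:])):
    print(f"{priority(i, r, e):.2f}  {name}")
```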

Common High-Impact Patterns

These patterns tend to score high across most deployments:

| Pattern | Typical Impact | Typical Risk | Typical Effort |
|---|---|---|---|
| Overly verbose task prompts | High | Safe | Easy |
| Heavy models on simple tasks | High | Safe | Easy |
| No active hours on heartbeat | Med-High | Safe | Easy |
| Duplicated content across bootstrap files | Med-High | Safe | Easy-Med |
| Full tool profile on task-specific agents | High | Moderate | Easy |
| Idle-time session not configured | Med | Safe | Easy |
| Outdated tool/plugin configs still loaded | Low-Med | Safe | Easy |

⚠️ Checkpoint: Show your top-3 priority items to the user. Confirm direction before starting optimization. If the highest-score items seem wrong, revisit Phase 1 measurements.


Phase 3: OPTIMIZE — Apply Categorical Techniques

⚠️ User confirmation gate: Techniques marked Moderate or High risk involve config changes, profile switches, or task merging. Before applying them, present the proposed change using this template and get explicit approval:

```markdown
## Proposed Change
**Technique**: [category/technique name]
**Target**: [file/config path]
**Before**: [current state, chars/tokens if measurable]
**After**: [proposed state, estimated savings]
**Risk**: [Moderate/High]
**Rollback**: [how to undo]
```

Techniques marked Safe can be applied directly.

Each category below contains a set of techniques. Apply them in priority order from Phase 2 — start with the highest-score items first, regardless of which category they fall into.

Failure Recovery

If a technique causes a problem:

  • Config change: Restore the backed-up config file and reload.
  • Cron merge broken: Restore the old separate cron job from version control or re-create it from the original prompt.
  • Profile switch issue: Revert to "full" profile, report the missing tool.
  • Prompt compression over-aggressive: Restore from the diff backup (keep pre-optimization prompt versions in a prompts/backup/ directory).

Category Selection Guide

Match your Phase 2 findings to the best starting category:

| Finding | Start With |
|---|---|
| Verbose task prompts (background context, philosophy) | A Prompt Simplicity |
| Heavy models on simple automation tasks | B Model Tiering |
| Bootstrap files >2K chars each, duplicated content | C Context Slimming |
| Full tool profile, rarely-used tools registered | D Tool Profile Optimization |
| Verbose agent output, too many turns per task | E Output Discipline |
| No active hours, co-located tasks running separately | F Session Lifecycle |
| Repeated system prompts without caching structure | G Provider-Side Caching |
| Agent retries failed approaches instead of switching | H Behavioral Discipline |

A. Prompt Simplicity

| Technique | Description | Risk |
|---|---|---|
| A1 Strip preamble | Remove background/rationale paragraphs from task prompts. Keep only: trigger, action, output format.<br>Before: "你是系统监控助手。每天检查服务器状态:CPU使用率>80%告警、内存>90%告警、磁盘>85%告警、SSL证书<30天告警。每个告警按严重程度分别处理:严重→立即通知值班、一般→发运维邮件、提示→记录日志。"<br>After: "系统监控。检查:CPU(>80%) Mem(>90%) Disk(>85%) SSL(<30d)。告警:严重→立即、一般→邮件、提示→日志。"<br>(360→110 chars, -69%; the same server-monitoring alert rules compressed to keyword form) | Safe |
| A2 Bullet points > prose | Replace multi-sentence descriptions with keyword checklists. | Safe |
| A3 Constrain output | Add "Answer concisely in ≤3 lines" or equivalent to reduce generated tokens. | Safe |
| A4 Remove redundancy | Delete "What NOT to do" sections — proper instructions make negatives implicit. | Safe |
| A5 Reference > inline | Replace full instructions for sub-tasks with file references ("See X.md") when the referenced file is always loaded. | Safe |

B. Model Tiering

| Technique | Description | Risk |
|---|---|---|
| B1 Right-size each task | Map every automated task to the cheapest model that can do it adequately. Test borderline cases. | Safe |
| B2 Define tier boundaries | Document which model(s) belong to each tier so new tasks are assigned correctly. | Safe |
| B3 Batch same-tier runs | Schedule same-tier tasks back-to-back to reuse the same session (single context load). | Moderate |

C. Context Slimming

| Technique | Description | Risk |
|---|---|---|
| C1 Measure every boot file | List all files loaded at session start and identify those > 2K chars for potential trimming. | Safe |
| C2 Cross-reference dedup | When the same content appears in 2+ files (e.g. "Core Principles" in SOUL.md and IDENTITY.md), keep it in one authoritative file and replace the others with a `详见 <file>` ("see <file>") reference. | Safe |
| C3 Archive aged-out content | Move old diary entries, superseded milestones, and historical promoted entries to a dedicated archive directory. | Safe |
| C4 Trim to one-liner | Convert verbose descriptions to single-line summaries.<br>Before: "This project's coding conventions were established after three code reviews revealed inconsistent patterns: use 2-space indent for HTML/CSS, 4-space for Python, tabs for Go. Prefix private methods with underscore. No Hungarian notation. Import order: stdlib, third-party, local."<br>After: "Coding conventions (see CONTRIBUTING.md) — 6 rules, numbered."<br>Actionable instructions stay; background context goes. | Safe |

D. Tool Profile Optimization

| Technique | Description | Risk |
|---|---|---|
| D1 Size your tool schema | Count all registered tools and estimate total schema chars. This is typically the single largest per-turn overhead. | Safe (measure only) |
| D2 Switch profile per agent | Use the "coding" profile for sub-agents/cron jobs (excludes browser, canvas, media generation, feishu tools). Use "full" only where those tools are actually needed. | Moderate (test on sub-agents first) |
| D3 Disable unused tools | If disabled skills or orphaned plugin tools are still registering schemas, disable or remove them from the registry. Check skills.entries and plugins.load.paths. | Safe |
| D4 Create custom profile | If neither "full" nor "coding" fits, define a custom profile with exactly the 15-25 tools your use-case needs. Requires config reload. | High |

E. Output Discipline

| Technique | Description | Risk |
|---|---|---|
| E1 No operation narration | Remove "I'll...", "Let me check..." patterns. Do the action directly. | Safe (behavioral) |
| E2 Lead with conclusion | Put the answer first. Add explanation only when needed. | Safe (behavioral) |
| E3 Batch turns | Read → plan → apply all changes in as few turns as possible, instead of read→think→edit→think→verify per item. Each extra turn adds LCM context overhead. | Safe (behavioral) |
| E4 Sub-agent conciseness | When spawning sub-agents, specify a concise return format. Their full output is injected into context if returned. | Safe |

F. Session Lifecycle

| Technique | Description | Risk |
|---|---|---|
| F1 Set active hours | Configure heartbeat.activeHours so no work runs during idle time (overnight, weekends). | Safe |
| F2 Isolated sessions | Set heartbeat.isolatedSession: true so periodic checks don't accumulate in the main session. | Safe |
| F3 Light context | Set heartbeat.lightContext: true to skip loading all bootstrap files — only HEARTBEAT.md is injected. | Safe |
| F4 Merge co-located tasks | If two cron jobs run within minutes of each other (e.g. both at 23:xx), merge them into one session with a combined prompt. Copy both prompts into one job's message field separated by a blank line, then remove the later job. Saves one full startup context per day. | Moderate |
| F5 Merge example | Before: Job A at 23:00 (System health check), Job B at 23:10 (Log cleanup). After: a single job at 23:00 whose prompt is "Do A then B." followed by both original prompts on separate lines ("A: ...", "B: ..."). | Moderate |
| F6 Configure queue | If the platform supports message queue settings (debounce, collect), tune them to prevent rapid-turn accumulation during tool execution. | Safe |
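
F4/F5 can be sketched as a small transform over two job objects. The field names below follow the 5A sample in this skill and may differ in your deployment — verify against your jobs.json before deleting anything:

```python
import json

def merge_jobs(a: dict, b: dict) -> dict:
    """Fold job b's prompt into job a, keeping a's (earlier) schedule.
    After writing the merged job back, remove job b from the cron config."""
    merged = json.loads(json.dumps(a))  # cheap deep copy
    merged["payload"]["message"] = (
        a["payload"]["message"] + "\n\n" + b["payload"]["message"]
    )
    return merged

a = {"name": "health-check", "schedule": {"expr": "0 23 * * *"},
     "payload": {"message": "System health check."}}
b = {"name": "log-cleanup", "schedule": {"expr": "10 23 * * *"},
     "payload": {"message": "Log cleanup."}}
merged = merge_jobs(a, b)
print(merged["payload"]["message"])
```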

G. Provider-Side Caching

Impact is 10× any other category. DeepSeek V4 Pro cached price is 0.83% of uncached. Cache hit rates of 91-96% are achievable with proper prompt structure.

| Technique | Description | Risk |
|---|---|---|
| G1 Fixed prefix first | Design all prompts as [static prefix] + [dynamic suffix]. The static prefix holds system instructions, bootstrap summary, and tool schemas; the dynamic suffix holds the runtime instruction. This maximizes KV cache hits on the provider side.<br>Wrong: "Analyze this code for memory leaks...你是代码审查助手,审查规则如下:..." (dynamic task before static rules)<br>Right: "你是代码审查助手,审查规则如下:...现在分析这段代码的内存泄漏:..." (static rules first, then the task) | Safe |
| G2 Session contiguity | Don't insert unrelated messages between consecutive calls to the same model — this breaks the KV cache prefix. Batch related calls into a single turn instead. | Safe |
| G3 Monitor cache rate | Check provider dashboards for cache hit rate. If <80%, your prefix structure likely has variability. Fix it. | Safe |
| G4 Route to best caching provider | Cached prices vary wildly by provider. DeepSeek V4 Pro: 0.83% of uncached. MiniMax: ~20%. Route routine tasks to the provider with the best cache economics. | Moderate |
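
A sketch of G1's prefix discipline — the reviewer-rules text here is a hypothetical stand-in:

```python
# Static prefix: identical bytes on every call, so the provider's KV cache
# can reuse it. Only the suffix varies per call.
STATIC_PREFIX = (
    "你是代码审查助手,审查规则如下:\n"  # "You are a code-review assistant; rules:"
    "1. Flag memory leaks.\n"           # hypothetical rule text
    "2. Flag race conditions.\n"
)

def build_prompt(task: str) -> str:
    """G1: static prefix first, dynamic task last."""
    return STATIC_PREFIX + task

p1 = build_prompt("Analyze this snippet for leaks: def f(): ...")
p2 = build_prompt("Check this snippet for races: def g(): ...")
# Both prompts share the entire static block → high cache-hit rate.
print(p1[:20])
```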

H. Behavioral Discipline

These are zero-config, zero-cost techniques. The savings come from how you use the system, not how it's configured.

| Technique | Description | Risk |
|---|---|---|
| H1 Default to working path | Use known-working tools before alternatives. Don't retry tools known to be broken in the current deployment — each retry is a wasted tool call + error response.<br>Bad: web_search (broken) → error → web_search again → error → baidu-search → works<br>Good: baidu-search → works (first attempt) | Safe |
| H2 Fail once, switch | If a method fails, switch immediately to a known alternative. Don't retry the same approach with slightly different parameters. Each retry costs full tool-call tokens. | Safe |
| H3 Batch > Poll | Gather all data before acting instead of incrementally. One exec or read call that returns 10 results costs less than 5 separate calls returning 2 each. | Safe |
| H4 Fix root cause | If a tool works inconsistently due to a known config issue (API key expired, wrong provider), fix the config. Working around it each time costs more in accumulated failed calls. | Safe |

Phase 4: VALIDATE — Confirm Results

4A Prompt Length Delta

Before/after comparison of all modified prompts and files. Include total chars and estimated tokens saved.

4B Config Integrity

After editing JSON configuration files, validate:

```shell
python3 -c "import json; json.load(open('<config-path>')); print('OK')"
```
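
When several JSON configs were touched, the same check can be batched. A sketch — the demo writes throwaway temp files; point `validate_configs` at your real paths (openclaw.json, cron/jobs.json):

```python
import json
import tempfile
from pathlib import Path

def validate_configs(paths) -> tuple[list, list]:
    """Attempt json.loads on each file; return (ok, failed) lists."""
    ok, failed = [], []
    for p in paths:
        try:
            json.loads(Path(p).read_text())
            ok.append(str(p))
        except (OSError, ValueError) as err:
            failed.append((str(p), str(err)))
    return ok, failed

with tempfile.TemporaryDirectory() as d:
    good = Path(d) / "openclaw.json"
    good.write_text('{"agents": {}}')
    bad = Path(d) / "jobs.json"
    bad.write_text('{"jobs": [,]}')  # invalid JSON on purpose
    ok, failed = validate_configs([good, bad])
print(f"{len(ok)} OK, {len(failed)} failed")
```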

4C Functional Test

  • Verify cron tasks still start correctly (check cron action=runs or next scheduled trigger)
  • Verify heartbeat runs in configured active window
  • Read through compressed cron prompts to ensure key instructions survive

4D Generate Report

Write token-audit-report-YYYY-MM-DD.md summarizing:

  • Changes made and per-change token savings
  • Total estimated weekly token reduction
  • Items deferred and why
  • Recommended next optimization

Log each optimization cycle in results.tsv (see skill directory for format reference). This creates an audit trail for the quarterly deep audit (5B).


Phase 5: MONITOR — Guard Against Regrowth

5A Periodic Token Watch (Optional)

Optionally create a weekly cron (cheapest available model) that checks prompt lengths haven't crept back:

```json
{
  "name": "token-watch-weekly",
  "schedule": { "kind": "cron", "expr": "0 10 * * 1", "tz": "Asia/Shanghai" },
  "payload": {
    "kind": "agentTurn",
    "model": "<cheapest-model>",
    "message": "Check all cron prompt lengths. Flag any that grew >20% since last baseline.",
    "timeoutSeconds": 120
  },
  "sessionTarget": "isolated",
  "delivery": { "mode": "none" }
}
```
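
The watch job's check becomes deterministic with a stored baseline of prompt lengths. A sketch — the baseline storage format is an assumption (e.g. a small JSON file kept next to the audit reports):

```python
def flag_growth(baseline: dict, current: dict, threshold: float = 0.20) -> list:
    """Return (task, old_len, new_len) for prompts that grew more than threshold."""
    flagged = []
    for name, size in current.items():
        base = baseline.get(name)
        if base and size > base * (1 + threshold):
            flagged.append((name, base, size))
    return flagged

baseline = {"system-health": 110, "log-cleanup": 90}  # prompt chars at last audit
current = {"system-health": 150, "log-cleanup": 95}   # prompt chars now
print(flag_growth(baseline, current))  # system-health grew 36% → flagged
```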

5B Quarterly Deep Audit

Run the full Phase 1-4 cycle every quarter using the cheapest available model. Compare results against previous reports to spot regrowth trends.


Safety Boundaries

Configs That Need Gateway Restart

Some configuration paths require a gateway restart to take effect:

  • agents.defaults.heartbeat.* (edit config file + restart)
  • agents.list[].tools.profile
  • gateway.*, auth.*
  • plugins.* — certain sub-fields

What NOT to Compress

These core mechanisms must be preserved even in an aggressive token budget:

  • Error detection logic (consecutive errors, failure alerts)
  • Essential signal handling (high-priority alerts → auto-escalation)
  • Drift detection for recurring tasks


Appendix: Local Deployment Configuration

This section is populated by the first execution of the Token Saver in a specific deployment. Replace the example values below with real ones.

Configuration Paths

| Item | Example Path |
|---|---|
| Cron jobs | ~/.openclaw/cron/jobs.json |
| Gateway config | ~/.openclaw/openclaw.json |
| Workspace root | ~/.openclaw/workspace/ |
| Bootstrap files | AGENTS.md, SOUL.md, USER.md, MEMORY.md, HEARTBEAT.md, IDENTITY.md, TOOLS.md, STANDING-ORDERS.md |

Baseline Measurements (example: Wave 2026-05-04)

| File | Initial Size (chars) | After First Pass | Reduction | Techniques Used |
|---|---|---|---|---|
| SOUL.md | 7,034 | 3,521 | -50% | C2 (cross-ref), C4 (one-liner), A2 |
| STANDING-ORDERS.md | 10,960 | 3,816 | -65% | C2 (cross-ref), A4 (remove redundancy) |
| IDENTITY.md | 6,228 | 4,313 | -31% | C2 (dedup with SOUL.md), C4 |
| AGENTS.md | 5,072 | 2,691 | -47% | C2 (ref to STANDING-ORDERS), C4 |
| TOOLS.md | 8,893 | 7,488 | -16% | C4 (remove stale entries) |
| MEMORY.md | 30,224 | 26,420 | -13% | C3 (archive promoted entries) |
| **Total** | **68,411** | **48,249** | **-29%** | |

Per-session token savings from bootstrap compression: ~6,720 tokens.

Benchmark: Compression by File Type

| File Type | Typical Savings | Best Technique |
|---|---|---|
| Program/Protocol (STANDING-ORDERS.md) | 55-65% | A4 (remove boilerplate sections) |
| Guide/Identity (SOUL.md, IDENTITY.md) | 30-50% | C2 (cross-reference dedup) |
| Instructions (AGENTS.md) | 40-50% | C2 (replace lists with file refs) |
| Knowledge base (MEMORY.md) | 10-20% | C3 (archive old entries only) |
| Config/state table (TOOLS.md) | 10-20% | C4 (remove stale entries only) |

Task-to-Model Map

| Task | Model Tier | Model |
|---|---|---|
| Version check | Economy | minimax-m2.7 |
| Demand scanning | Standard | deepseek-v4-pro (needs search) |
| Domain probe | Economy | minimax-m2.7 |
| Dreaming (memory integration) | Economy | minimax-m2.7 |
| Doc maintenance | Economy | minimax-m2.7 |
| WaveCap daily expansion | Standard | deepseek-v4-pro (needs reasoning) |
| Weekly review | Premium | deepseek-v4-pro |
| Friday topic selection | Premium | deepseek-v4-pro |
| Main session | Standard | deepseek-v4-flash |

Deferred Items

| Item | Reason | Condition to Revisit |
|---|---|---|
| Tool profile for main agent | High risk (may break unexpected features) | After sub-agent coding profile proven in production for 1 week |
| Cron task merging | Needs user confirmation; may affect reliability | Next token audit cycle |
| Compaction mode change (safeguard→normal) | Needs config reload | When gateway restarted for other reasons |

Deployment-Specific Constraints

  • Network: GFW blocks chatgpt.com, api.openai.com. All OpenAI/Codex models unavailable.
  • Models available: deepseek-v4-pro (premium), deepseek-v4-flash (standard), minimax-m2.7 (economy).
  • File paths: Standard OpenClaw paths under ~/.openclaw/.
  • Git: Workspace is a git repository; all changes version-controlled.
