# Token Counter

## Overview

Use this skill to produce token usage reports from local OpenClaw data. It parses session transcripts (`.jsonl`), session metadata, and cron definitions, then reports usage by category, client, tool, model, and top token consumers.
## Quick Start

Run:

    $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d
## Common Commands

- Basic report:

      $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d

- Focus on selected breakdowns:

      $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
        --period 1d \
        --breakdown tools,category,client

- Analyze one session:

      $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
        --session agent:main:cron:d3d76f7a-7090-41c3-bb19-e2324093f9b1

- Export JSON:

      $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
        --period 30d \
        --format json \
        --output $OPENCLAW_WORKSPACE/token-usage/token-usage-30d.json

- Persist a daily snapshot:

      $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
        --period 1d \
        --save
This writes JSON to `$OPENCLAW_WORKSPACE/token-usage/daily/YYYY-MM-DD.json`.
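The snapshot filename is derived from the current date. A minimal sketch of that naming scheme, assuming the script follows the documented layout exactly (the function name here is illustrative, not part of the tool):

```python
import datetime

def daily_snapshot_path(workspace: str, day: datetime.date) -> str:
    # Snapshots are named YYYY-MM-DD.json under token-usage/daily/,
    # so running --save twice on the same day overwrites the same file.
    return f"{workspace}/token-usage/daily/{day.isoformat()}.json"
```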
## Defaults and Data Sources

- Sessions index: `$OPENCLAW_DATA_DIR/agents/main/sessions/sessions.json`
- Session transcripts: `$OPENCLAW_DATA_DIR/agents/main/sessions/*.jsonl`
- Cron definitions: `$OPENCLAW_DATA_DIR/cron/jobs.json`
The parser reads assistant usage fields for token counts and uses tool-call records for attribution.
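The transcript pass can be sketched as follows. This is a rough illustration, not the script's actual implementation: the record shape and the `usage` field names (`input_tokens`, `output_tokens`) are assumptions about the transcript format.

```python
import json

def sum_transcript_tokens(path: str) -> int:
    """Sum token usage across all records in a .jsonl transcript."""
    total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # Records without a usage object (e.g. user turns) contribute 0.
            usage = record.get("usage") or {}
            total += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
    return total
```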
## Notes on Attribution

- Tool token attribution is heuristic: an assistant message's tokens are split across the tool calls in that message.
- A session's `totalTokens` may come from either the session index metadata or the sum of transcript usage entries (the maximum of the two is used).
- Client detection is rules-based (`personal`, `bonsai`, `mixed`, `unknown`), using path, domain, and email markers.
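The first two attribution rules above can be sketched like this, assuming an even split of a message's tokens across its tool calls (the exact split the script uses is not specified here, and these function names are illustrative):

```python
def split_tokens_across_tools(message_tokens: int, tool_names: list[str]) -> dict[str, float]:
    """Attribute one assistant message's tokens to its tool calls, split evenly."""
    if not tool_names:
        return {}
    share = message_tokens / len(tool_names)
    totals: dict[str, float] = {}
    for name in tool_names:
        # A tool called twice in one message receives two shares.
        totals[name] = totals.get(name, 0.0) + share
    return totals

def session_total(index_total: int, transcript_sum: int) -> int:
    """Pick the larger of the two totalTokens sources, as noted above."""
    return max(index_total, transcript_sum)
```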
## Validation

Run:

    python3 $OPENCLAW_SKILLS_DIR/skill-creator/scripts/quick_validate.py \
      $OPENCLAW_SKILLS_DIR/token-counter
## References

See `references/classification-rules.md` for category/client detection logic and keyword mapping.