cavecrew

Decision guide for delegating to caveman-style subagents. Tells the main thread WHEN to spawn `cavecrew-investigator` (locate code), `cavecrew-builder` (1-2 file edit), or `cavecrew-reviewer` (diff review) instead of doing the work inline or using vanilla `Explore`. Subagent output is caveman-compressed so the tool-result injected back into main context is ~60% smaller — main context lasts longer across long sessions. Trigger: "delegate to subagent", "use cavecrew", "spawn investigator/builder/reviewer", "save context", "compressed agent output".

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

To install, send this command to your AI assistant:

npx skills add juliusbrussee/caveman/juliusbrussee-caveman-cavecrew

Cavecrew = three subagent presets that emit caveman output. They do the same jobs as the Anthropic defaults (Explore, edit-style agents, Code Reviewer); the difference is that the tool result they return is compressed, so each delegation costs less main context.

When to use cavecrew vs alternatives

Task → Use
"Where is X defined / what calls Y / list uses of Z" → cavecrew-investigator
Same, but you also want suggestions or architecture commentary → Explore (vanilla)
Surgical edit, ≤2 files, scope obvious → cavecrew-builder
New feature / 3+ files / cross-cutting refactor → Main thread or feature-dev:code-architect
Review a diff, branch, or file for bugs → cavecrew-reviewer
Deep code review with rationale + alternatives → Code Reviewer (vanilla)
One-line answer you already know → Main thread, no subagent

Rule of thumb: if you'd want the subagent's output in 1/3 the tokens, pick cavecrew. If you'd want prose, pick vanilla.

Why this exists (the real win)

Subagent tool results get injected into main context verbatim. A vanilla Explore that returns 2k tokens of prose costs 2k tokens of main-context budget every time. The same finding from cavecrew-investigator returns ~700 tokens. Across 20 delegations in one session that's the difference between context exhaustion and finishing the task.

Output contracts

What main thread can rely on per agent:

cavecrew-investigator

<Header>:
- path:line — `symbol` — short note
totals: <counts>.

Or No match. Always file-path-first, line-number-attached, backticked symbols. Safe to grep with path:\d+.
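Because every finding is path-first with a line number attached, the main thread can post-process results mechanically. A minimal Python sketch (the regex and the sample output are illustrative assumptions, not part of cavecrew):

```python
import re

# Pull path/line/symbol triples out of an investigator result,
# relying only on the path:line-first contract described above.
SITE_RE = re.compile(
    r"^- (?P<path>[^\s:]+):(?P<line>\d+) — `(?P<symbol>[^`]+)` — (?P<note>.+)$"
)

def parse_sites(output: str) -> list[dict]:
    """Return one dict per finding; header and totals lines are skipped."""
    sites = []
    for raw in output.splitlines():
        m = SITE_RE.match(raw.strip())
        if m:
            sites.append(m.groupdict())
    return sites

# Hypothetical sample output, shaped per the contract:
sample = """Callers of fetch_user:
- src/api/users.py:42 — `fetch_user` — main entry point
- tests/test_users.py:17 — `fetch_user` — mocked in fixture
totals: 2 call sites."""
print(parse_sites(sample))
```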

cavecrew-builder

<path:line-range> — <change ≤10 words>.
verified: <re-read OK | mismatch @ path:line>.

Or one of: too-big. / needs-confirm. / ambiguous. / regressed. (terminal first token).
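Because failure modes arrive as a terminal first token, a caller can branch on the result without parsing prose. A sketch under the contract above (the helper and sample strings are hypothetical):

```python
# Terminal first tokens the builder contract defines as refusals.
TERMINAL = {"too-big.", "needs-confirm.", "ambiguous.", "regressed."}

def builder_succeeded(output: str) -> bool:
    """True only if the builder applied the edit and re-read it clean."""
    tokens = output.split()
    if not tokens or tokens[0] in TERMINAL:
        return False
    # Success path ends with a verification line per the contract.
    return "verified:" in output and "mismatch" not in output

print(builder_succeeded("too-big. 5 files touched, scope exceeds limit."))  # False
print(builder_succeeded("src/api/users.py:42-48 — guard None user.\nverified: re-read OK."))  # True
```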

cavecrew-reviewer

path:line: <emoji> <severity>: <problem>. <fix>.
totals: N🔴 N🟡 N🔵 N❓

Or No issues. Findings sorted file → line ascending.
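The fixed totals line makes the reviewer result machine-checkable, e.g. for gating a merge on 🔴 findings. A sketch under the assumption that 🔴 is the only blocking severity (the skill itself defines no gating policy):

```python
import re

# Parse the reviewer's fixed-format totals line: "totals: N🔴 N🟡 N🔵 N❓"
TOTALS_RE = re.compile(r"totals: (\d+)🔴 (\d+)🟡 (\d+)🔵 (\d+)❓")

def blocking_count(output: str) -> int:
    """Number of 🔴 findings; 0 for a clean report."""
    if output.strip() == "No issues.":
        return 0
    m = TOTALS_RE.search(output)
    return int(m.group(1)) if m else 0

report = ("src/auth.py:88: 🔴 high: token never expires. add TTL check.\n"
          "totals: 1🔴 0🟡 0🔵 0❓")
print(blocking_count(report))  # 1
print(blocking_count("No issues."))  # 0
```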

Chaining patterns

Locate → fix → verify (most common):

  1. cavecrew-investigator returns site list.
  2. Main thread picks 1-2 sites, hands paths to cavecrew-builder.
  3. cavecrew-reviewer audits the diff.
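The chain above can be sketched as plain control flow. Here `spawn(agent, prompt)` is a hypothetical stand-in for the Task tool, not a real API:

```python
TERMINAL = {"too-big.", "needs-confirm.", "ambiguous.", "regressed."}

def locate_fix_verify(symbol: str, spawn) -> str:
    # 1. Investigator returns the site list.
    sites = spawn("cavecrew-investigator", f"find all callers of {symbol}")
    if sites.strip() == "No match.":
        return "nothing to do"
    # 2. Main thread keeps the judgment call: hand at most 2 sites to builder.
    targets = [ln for ln in sites.splitlines() if ln.startswith("- ")][:2]
    edit = spawn("cavecrew-builder", "add null guard at:\n" + "\n".join(targets))
    if edit.split()[0] in TERMINAL:
        return f"builder refused: {edit}"
    # 3. Reviewer audits the resulting diff.
    return spawn("cavecrew-reviewer", "review the uncommitted diff")
```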

Parallel scout (when investigation is broad): Spawn 2-3 cavecrew-investigator calls in one message (different angles: defs vs callers vs tests). Aggregate in main thread.

Single-shot edit (when site is already known): Skip investigator. Hand exact path:line to cavecrew-builder directly.

What NOT to do

  • Don't use cavecrew-builder when you don't already know the file. Spawn the investigator first, or the main thread will burn tokens passing context to it.
  • Don't chain cavecrew-investigator → cavecrew-builder for a 5-file refactor. Builder will return too-big. and you'll have wasted a turn.
  • Don't ask cavecrew-reviewer for "general feedback" — it returns findings only, no architecture opinions. Use Code Reviewer for that.
  • Don't expect prose. Cavecrew output is structured, sometimes terse to the point of cryptic. If a human will read it directly, paraphrase.

Auto-clarity (inherited)

Subagents drop caveman and switch to normal English for security warnings, confirmations of irreversible actions, and any output where a terse fragment could be misread. They resume caveman afterward.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Coding

caveman-review

Ultra-compressed code review comments. Cuts noise from PR feedback while preserving the actionable signal. Each comment is one line: location, problem, fix. Use when user says "review this PR", "code review", "review the diff", "/review", or invokes /caveman-review. Auto-triggers when reviewing pull requests.

Coding

caveman-compress

Compress natural language memory files (CLAUDE.md, todos, preferences) into caveman format to save input tokens. Preserves all technical substance, code, URLs, and structure. Compressed version overwrites the original file. Human-readable backup saved as FILE.original.md. Trigger: /caveman:compress FILEPATH or "compress memory file"

General

caveman

Ultra-compressed communication mode. Cuts token usage ~75% by speaking like caveman while keeping full technical accuracy. Supports intensity levels: lite, full (default), ultra, wenyan-lite, wenyan-full, wenyan-ultra. Use when user says "caveman mode", "talk like caveman", "use caveman", "less tokens", "be brief", or invokes /caveman. Also auto-triggers when token efficiency is requested.

Needs Review
General

caveman-commit

Ultra-compressed commit message generator. Cuts noise from commit messages while preserving intent and reasoning. Conventional Commits format. Subject ≤50 chars, body only when "why" isn't obvious. Use when user says "write a commit", "commit message", "generate commit", "/commit", or invokes /caveman-commit. Auto-triggers when staging changes.
