Gathering Feature 🌲
The drum echoes through the forest. But this time, it's different. The conductor stands at the center of the clearing: not doing the work, but orchestrating it. Each animal arrives with fresh eyes, reads its own instructions, and works with full attention. No context fatigue. No phoning it in. No "the code documents itself." Eight animals, eight fresh minds, one feature built right.
When to Summon
- Building a complete feature from scratch
- Major functionality spanning frontend, backend, and database
- Features requiring exploration, implementation, testing, and documentation
- When you want the full lifecycle handled with consistent quality through every phase
IMPORTANT: This gathering is a conductor. It never writes code, tests, or docs directly. It dispatches subagents, one per animal, each with isolated context and an intentionally chosen model. The conductor only manages handoffs and gate checks.
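The conductor rule above can be sketched in code. This is an illustrative TypeScript sketch only: `dispatch`, `resume`, and the `Phase` shape are assumptions invented for the example, not a real API.

```typescript
// Hypothetical sketch of the conductor loop: dispatch each animal with ONLY
// its declared input, gate-check the output, and resume (never restart) the
// same agent on failure. All names here are illustrative assumptions.
type Handoff = Record<string, unknown>;

interface Phase {
  animal: string;
  model: "haiku" | "sonnet" | "opus";
  input: (prior: Handoff) => Handoff;        // structured handoff, not full history
  gate: (output: Handoff) => string | null;  // null = pass; string = failure reason
}

async function runGathering(
  phases: Phase[],
  dispatch: (animal: string, model: string, input: Handoff) => Promise<Handoff>,
  resume: (animal: string, feedback: string) => Promise<Handoff>,
  maxResumes = 3,
): Promise<Handoff> {
  let prior: Handoff = {};
  for (const phase of phases) {
    let output = await dispatch(phase.animal, phase.model, phase.input(prior));
    for (let i = 0; i < maxResumes; i++) {
      const failure = phase.gate(output);
      if (failure === null) break;
      // Resume the failing agent with error context; its prior work stays in context.
      output = await resume(phase.animal, failure);
    }
    if (phase.gate(output) !== null) throw new Error(`gate failed for ${phase.animal}`);
    prior = { ...prior, [phase.animal]: output };
  }
  return prior;
}
```

Note how `input` is a function of prior handoffs: each phase selects only the structured outputs it needs, which is what keeps every animal's eyes fresh.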
The Gathering
```
SUMMON  →  DISPATCH  →  GATE  →  DISPATCH  →  GATE  →  ...  →  CHORUS
  │           │           │          │          │                 │
 Spec     Bloodhound    Check     Elephant    Check       Final Verification
(self)     (haiku)                 (opus)                     & Summary
```
Animals Dispatched
| Order | Animal | Model | Role | Fresh Eyes? |
|-------|--------|-------|------|-------------|
| 1 | 🐕 Bloodhound | haiku | Scout codebase, map territory | Yes – sees only the spec |
| 2 | 🐘 Elephant | opus | Build the feature across files | Yes – sees only spec + territory map |
| 3 | 🐢 Turtle | opus | Harden security (adversarial) | Yes – sees only file list, not build reasoning |
| 4 | 🦫 Beaver | sonnet | Write tests from behavior | Yes – sees only file list, not impl details |
| 5a | 🦝 Raccoon | sonnet | Security audit + cleanup | Yes – parallel |
| 5b | 🦌 Deer | sonnet | Accessibility audit | Yes – parallel |
| 5c | 🦊 Fox | sonnet | Performance optimization | Yes – parallel |
| 6 | 🦉 Owl | opus | Write actual documentation | Yes – receives full summary |
Reference: Load `references/conductor-dispatch.md` for exact subagent prompts and handoff formats.
Phase 1: SUMMON
The drum sounds. The conductor steps into the clearing...
The conductor (this skill) receives the feature request and prepares the dispatch plan:
- Clarify: What does this feature do? Which users benefit? What's in scope?
- Identify affected packages and likely file count
- Determine if the Elephant needs multi-agent dispatch (>15 files across multiple packages)
- Confirm the gathering with the human
Output: Feature specification, estimated scope, dispatch plan confirmed.
Phase 2: SCOUT (Bloodhound)
The conductor signals. The Bloodhound enters the forest...
```
Agent(bloodhound, model: haiku)
  Input:  feature spec only
  Reads:  bloodhound-scout/SKILL.md (MANDATORY)
  Output: territory map
```
Dispatch a haiku subagent to scout the codebase. The Bloodhound receives ONLY the feature specification – no opinions, no pre-analysis. It reads its own skill file and executes its full SCENT → TRACK → HUNT → REPORT workflow.
Handoff to conductor: Territory map (files to change, patterns found, integration points, existing conventions, potential obstacles).
Gate check: Territory map received with at least: file list, pattern summary, integration points. If incomplete, resume the agent with specific questions.
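This gate can be expressed as a completeness check over the handoff. A TypeScript sketch, where the field names are assumptions:

```typescript
// Illustrative gate check for the Bloodhound handoff. The conductor verifies
// the three required parts of the territory map; each missing part becomes a
// specific follow-up question for resuming the scout. Field names are guesses.
interface TerritoryMap {
  files?: string[];
  patterns?: string[];
  integrationPoints?: string[];
}

function gateTerritoryMap(map: TerritoryMap): string[] {
  const questions: string[] = [];
  if (!map.files?.length) questions.push("Which files need to change?");
  if (!map.patterns?.length) questions.push("Which existing patterns apply?");
  if (!map.integrationPoints?.length) questions.push("Where does the feature integrate?");
  return questions; // empty list = gate passes
}
```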
Phase 3: BUILD (Elephant)
The ground trembles. The Elephant arrives...
```
Agent(elephant, model: opus)
  Input:  feature spec + territory map (from Bloodhound)
  Reads:  elephant-build/SKILL.md + references (MANDATORY)
  Output: built files list + implementation summary
```
Dispatch an opus subagent to build the feature. The Elephant receives the spec and territory map – NOT the Bloodhound's reasoning process, just its structured output.
Multi-agent dispatch (when scope > 15 files or spans 3+ packages):
```
Agent(elephant-backend,  model: opus)   → API routes, services, migrations  ┐
Agent(elephant-frontend, model: opus)   → Svelte components, stores, pages  ├─ PARALLEL
Agent(elephant-schema,   model: sonnet) → Database migrations, types        ┘

Then: Agent(elephant-wire, model: opus) → Integration wiring across all three
```
Each sub-elephant reads elephant-build/SKILL.md and works on its domain only.
Cross-cutting standards the Elephant MUST follow:
- Signpost error codes for all error paths (`buildErrorJson`, `throwGroveError`)
- Rootwork type safety at boundaries (`parseFormData`, `safeJsonParse`)
- Reference: `AgentUsage/error_handling.md`, `AgentUsage/rootwork_type_safety.md`
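`buildErrorJson`, `throwGroveError`, and the Rootwork helpers are internal to this codebase, so the following TypeScript sketch is only a guess at the shape they enforce: stable machine-readable error codes on every error path, and no blind casts when parsing untrusted input.

```typescript
// Assumed shapes, not the real helpers: every error path carries a stable
// error_code field, and untrusted JSON is narrowed instead of `as`-cast.
function buildErrorJson(code: string, message: string, status = 400) {
  return { status, body: { error_code: code, message } };
}

type ParseResult = { ok: true; value: unknown } | { ok: false };

function safeJsonParse(raw: string): ParseResult {
  try {
    return { ok: true, value: JSON.parse(raw) };
  } catch {
    return { ok: false }; // never throws across the boundary
  }
}
```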
Handoff to conductor: File list (every file created/modified), implementation summary (what was built, key decisions), any open questions.
Gate check: Run `gw ci --affected --fail-fast` – must at least compile. If the build fails, resume the Elephant agent with the error output.
Phase 4: HARDEN (Turtle)
The Turtle approaches slowly. It sees only what was built – not why...
```
Agent(turtle, model: opus)
  Input:  file list ONLY (not Elephant's reasoning – fresh adversarial eyes)
  Reads:  turtle-harden/SKILL.md + references (MANDATORY)
  Output: hardening report + applied fixes
```
Dispatch an opus subagent for security hardening. The Turtle receives ONLY the file list – not the Elephant's implementation summary. This is intentional: the Turtle should examine the code with adversarial fresh eyes, not sympathize with the builder's reasoning.
What the Turtle hardens:
- Input validation (Zod schemas on all entry points)
- Output encoding (context-aware, DOMPurify for rich text)
- Parameterized queries (no string concatenation in SQL)
- Security headers (CSP with nonces, HSTS, X-Frame-Options)
- Signpost error codes (verify the Elephant used them correctly)
- Rootwork boundary safety (verify no `as` casts at trust boundaries)
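To illustrate the first and last items together, here is a dependency-free TypeScript sketch of boundary narrowing in the same spirit (the source uses Zod; the `CreatePostInput` shape is invented for the example):

```typescript
// Hypothetical input shape; the guard is the single sanctioned narrowing
// point, so no `as` cast is needed at the call sites that consume it.
interface CreatePostInput {
  title: string;
  body: string;
}

function isCreatePostInput(v: unknown): v is CreatePostInput {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>; // local narrowing inside the guard only
  return typeof o.title === "string" && o.title.length > 0 &&
         typeof o.body === "string";
}
```

With Zod the same contract would be a schema plus `safeParse`; either way, everything past the guard handles a proven shape, never a cast one.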
Handoff to conductor: Hardening report (what was found, what was fixed, defense layers applied), updated file list.
Gate check: Run `gw ci --affected --fail-fast` – must still compile after hardening. If broken, resume the Turtle agent.
Phase 5: TEST (Beaver)
The Beaver surveys the stream. It doesn't know how the dam was built – only what it should hold...
```
Agent(beaver, model: sonnet)
  Input:  file list + feature spec (NOT implementation details)
  Reads:  beaver-build/SKILL.md + references (MANDATORY)
  Output: test suite + test results
```
Dispatch a sonnet subagent to write tests. The Beaver receives the file list and the original feature spec – NOT the Elephant's implementation summary or the Turtle's hardening report. Tests should be written from behavior, not from reading the code.
What the Beaver tests:
- Feature behavior (from the spec, not the code)
- Security regressions (API routes return proper `error_code` fields)
- Boundary validation (rejection of bad input)
- Catch block type guards (`isRedirect`/`isHttpError`)
Handoff to conductor: Test file list, test results (pass/fail counts), any behavioral gaps found.
Gate check: ALL tests pass. Run `gw ci --affected --fail-fast --diagnose`. If tests fail, resume the Beaver agent with the failure output. If a test reveals an implementation bug, note it for the conductor to decide: resume the Elephant or the Turtle?
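A behavioral test in this spirit asserts the spec's promise ("bad input is rejected with an `error_code`") without reading the implementation. A TypeScript sketch; the handler signature below is an assumption:

```typescript
// Reusable behavioral assertion: any handler given bad input must reject it
// with an error status and a machine-readable error_code. Shapes are assumed.
type ApiResponse = { status: number; body: { error_code?: string } };

async function expectRejection(
  handler: (body: unknown) => Promise<ApiResponse>,
  badInput: unknown,
): Promise<void> {
  const res = await handler(badInput);
  if (res.status < 400) throw new Error("bad input was accepted");
  if (!res.body.error_code) throw new Error("rejection is missing error_code");
}
```

Because it only touches the response contract, this test survives refactors of the implementation – exactly what black-box testing from the spec buys you.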
Phase 6: AUDIT (Raccoon + Deer + Fox – PARALLEL)
Three animals enter the clearing at once. Each sees with different eyes...
Dispatch three subagents simultaneously:
```
Agent(raccoon, model: sonnet)                                   ┐
  Input: file list + feature scope                              │
  Reads: raccoon-audit/SKILL.md                                 │
  Focus: secrets, dead code, dependency audit, unsafe patterns  │
                                                                │
Agent(deer, model: sonnet)                                      ├─ PARALLEL
  Input: UI file list + feature spec                            │
  Reads: deer-sense/SKILL.md                                    │
  Focus: keyboard nav, screen readers, contrast, touch targets  │
                                                                │
Agent(fox, model: sonnet)                                       ┘
  Input: hot path files + feature spec
  Reads: fox-optimize/SKILL.md
  Focus: bundle size, query performance, lazy loading
```
Each animal works in isolation. They don't see each other's findings. The conductor collects all three reports.
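The fan-out can be sketched as a single `Promise.all`, with each agent receiving only its own slice of input. The `dispatch` helper and input field names are hypothetical:

```typescript
// Three isolated dispatches, no shared context; the conductor collects the
// reports together. Input field names are assumptions for illustration.
async function runAuditPhase(
  dispatch: (animal: string, input: Record<string, unknown>) => Promise<Record<string, unknown>>,
  fileList: string[],
  uiFiles: string[],
  hotPaths: string[],
  spec: string,
) {
  const [raccoon, deer, fox] = await Promise.all([
    dispatch("raccoon", { fileList, scope: spec }),
    dispatch("deer", { uiFiles, spec }),
    dispatch("fox", { hotPaths, spec }),
  ]);
  return { raccoon, deer, fox }; // three reports, collected by the conductor
}
```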
Handoff to conductor: Three reports – audit findings, a11y findings, performance findings. Plus any fixes each animal applied.
Gate check: Review all three reports. If any animal found issues that need fixing:
- Security issues → conductor applies fixes or re-dispatches a targeted agent
- A11y issues → conductor applies fixes
- Performance issues → conductor applies fixes
- Re-run `gw ci --affected --fail-fast --diagnose` after all fixes
Phase 7: DOCUMENT (Owl)
The Owl opens its eyes. It has heard everything – now it speaks...
```
Agent(owl, model: opus)
  Input:  FULL gathering summary (spec, territory map, file list,
          hardening report, test results, audit/a11y/perf reports)
  Reads:  owl-archive/SKILL.md + references (MANDATORY)
  Output: actual documentation written to files
```
Dispatch an opus subagent to write documentation. The Owl receives the FULL gathering summary – it needs context to write meaningful docs. Opus, because documentation requires warmth, voice, and the judgment to know what's worth documenting.
What the Owl writes:
- Help article or user-facing documentation (if the feature is user-visible)
- API documentation (if new endpoints were created)
- Inline code comments where logic isn't self-evident (NOT "the code documents itself")
- Updates to any affected existing docs
Handoff to conductor: Documentation file list, summary of what was documented.
Gate check: Verify documentation files exist and have actual content (not stubs).
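One way to make this gate concrete in TypeScript. The 200-byte stub threshold is an arbitrary assumption, not a project rule:

```typescript
import { statSync } from "node:fs";

// A documentation file passes the gate only if it exists and holds more than
// a stub's worth of content. The byte threshold is an invented heuristic.
function isRealDoc(path: string, minBytes = 200): boolean {
  try {
    const s = statSync(path);
    return s.isFile() && s.size >= minBytes;
  } catch {
    return false; // missing file fails the gate
  }
}
```

A stricter version could also reject files that are only headings, but existence plus minimum size catches the common "stub committed, content forgotten" failure.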
Phase 8: CHORUS
Dawn breaks. The conductor raises their hands. The forest sings...
The conductor runs final verification and presents the summary:
Final verification – the whole gathering's work proven sound:
```
pnpm install
gw ci --affected --fail-fast --diagnose
```
Visual Verification (for features with UI):
```
uv run --project tools/glimpse glimpse matrix \
  "http://localhost:5173/[feature-page]?subdomain=midnight-bloom" \
  --seasons autumn,winter --themes light,dark --logs --auto
```
Completion Report:
```
🌲 GATHERING FEATURE COMPLETE

Feature: [Name]

DISPATCH LOG
  🐕 Bloodhound (haiku) → [territory mapped, X files identified]
  🐘 Elephant (opus)    → [Y files built across Z packages]
  🐢 Turtle (opus)      → [N hardening fixes applied, M defense layers]
  🦫 Beaver (sonnet)    → [P tests written, all passing]
  🦝 Raccoon (sonnet)   → [audit clean / N issues fixed]
  🦌 Deer (sonnet)      → [a11y verified / N issues fixed]
  🦊 Fox (sonnet)       → [performance verified / N optimizations]
  🦉 Owl (opus)         → [documentation written at: paths]

GATE LOG
  After Scout:    ✓ territory map complete
  After Build:    ✓ compiles clean
  After Harden:   ✓ compiles clean, hardening applied
  After Test:     ✓ all tests pass
  After Audit:    ✓ findings resolved
  After Document: ✓ docs written (not stubs)
  Final CI:       ✓ gw ci --affected passes
```
The forest grows. The feature lives.
Conductor Rules
Never Do Animal Work
The conductor dispatches. It does not scout, build, harden, test, audit, or document. If you catch yourself writing code, stop: you should be dispatching a subagent.
Fresh Eyes Are a Feature
Turtle and Beaver intentionally receive LESS context than the full history. Turtle doesn't see Elephant's reasoning (adversarial fresh eyes). Beaver doesn't see implementation details (behavioral testing). This isolation produces better results.
Gate Every Transition
Run verification between every animal. Don't let bad state cascade – catch it early.
Parallel When Possible
Raccoon, Deer, and Fox run simultaneously. Three fresh agents, three different concerns, zero context sharing.
Multi-Agent for Heavy Phases
If Elephant would touch 15+ files across multiple packages, split into domain-focused sub-elephants. Each reads the skill, each handles its domain, then a wiring agent integrates.
Resume, Don't Restart
If a gate check fails, resume the failing agent with the error context. Don't spawn a new one – the resumed agent has its prior work in context.
Communication
- "The drum sounds..." (summoning)
- "Dispatching [animal]..." (spawning subagent)
- "Gate check: [result]..." (verifying between phases)
- "The chorus rises..." (final verification)
Anti-Patterns
The conductor does NOT:
- Write code, tests, or documentation itself (dispatch subagents)
- Pass full conversation history to every agent (structured handoffs only)
- Skip gate checks ("I'm sure it's fine")
- Run all animals in the same context (the whole point is isolation)
- Let agents skip reading their skill file (MANDATORY in every prompt)
- Declare documentation "complete" without verifying files exist with content
- Continue after a gate failure without fixing it
Quick Decision Guide
| Scope | Dispatch Strategy |
|-------|-------------------|
| Small feature (< 10 files) | Standard: one agent per animal |
| Medium feature (10–20 files) | Standard, but consider parallel sub-elephants |
| Large feature (20+ files, 3+ packages) | Multi-elephant dispatch + parallel audit phase |
| UI-only feature | Skip Fox, emphasize Deer, add Glimpse verification |
| API-only feature | Skip Deer, emphasize Turtle + Raccoon |
| Feature with existing tests | Beaver reviews + extends instead of writing from scratch |
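The scope rows of the guide read directly as a decision function. A TypeScript sketch; thresholds come from the guide and the strategy strings are paraphrased:

```typescript
// Scope-to-strategy mapping from the decision guide above.
function dispatchStrategy(fileCount: number, packageCount: number): string {
  if (fileCount >= 20 || packageCount >= 3) {
    return "multi-elephant dispatch + parallel audit phase";
  }
  if (fileCount >= 10) {
    return "standard, consider parallel sub-elephants";
  }
  return "standard: one agent per animal";
}
```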
Integration
- Before: `swan-design` or `eagle-architect` for spec/architecture
- During: `mole-debug` if a gate check reveals mysterious failures
- After: `crow-reason` to challenge the result before shipping
When the drum sounds, the forest answers – with fresh eyes, full attention, and no animal phoning it in. 🌲