# Work Execution Command

Execute work efficiently while maintaining quality and finishing features.
## Introduction

This command takes a work document (plan, specification, or todo file) or a bare prompt describing the work, and executes it systematically. The focus is on shipping complete features by understanding requirements quickly, following existing patterns, and maintaining quality throughout.

**Beta rollout note:** Invoke `ce:work-beta` manually when you want to trial Codex delegation. During the beta period, planning and workflow handoffs remain pointed at stable `ce:work` to avoid dual-path orchestration complexity.
## Input Document
<input_document> #$ARGUMENTS </input_document>
## Argument Parsing

Parse `$ARGUMENTS` for the following optional tokens. Strip each recognized token before interpreting the remainder as the plan file path or bare prompt.
| Token | Example | Effect |
|---|---|---|
| `delegate:codex` | `delegate:codex` | Activate Codex delegation mode for plan execution |
| `delegate:local` | `delegate:local` | Deactivate delegation even if enabled in config |
All tokens are optional. When absent, fall back to the resolution chain below.
**Fuzzy activation:** Also recognize imperative delegation-intent phrases such as "use codex", "delegate to codex", "codex mode", or "delegate mode" as equivalent to `delegate:codex`. A bare mention of "codex" in a prompt (e.g., "fix codex converter bugs") must NOT activate delegation -- only clear delegation intent triggers it.

**Fuzzy deactivation:** Also recognize phrases such as "no codex", "local mode", "standard mode" as equivalent to `delegate:local`.
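The token and fuzzy-phrase handling above can be sketched as a small shell helper (a sketch only -- the function and variable names are illustrative, not part of this command's contract):

```sh
# Sketch: split $ARGUMENTS into (delegation, remainder).
# Explicit tokens win; fuzzy phrases need clear delegation intent,
# so a bare "codex" mention never activates delegation.
parse_args() {
  args="$1"
  delegation=""   # empty = fall through to config / hard default
  case " $args " in
    *" delegate:codex "*) delegation="codex" ;;
    *" delegate:local "*) delegation="local" ;;
  esac
  if [ -z "$delegation" ]; then
    case "$args" in
      *"use codex"*|*"delegate to codex"*|*"codex mode"*|*"delegate mode"*)
        delegation="codex" ;;
      *"no codex"*|*"local mode"*|*"standard mode"*)
        delegation="local" ;;
    esac
  fi
  # Strip only the explicit tokens; the remainder is the plan path or prompt
  rest=$(printf '%s' "$args" \
    | sed -e 's/delegate:codex//g' -e 's/delegate:local//g' \
          -e 's/  */ /g' -e 's/^ *//' -e 's/ *$//')
  printf '%s|%s\n' "$delegation" "$rest"
}
```

Fuzzy phrases are intentionally not stripped from the remainder -- they are part of the prompt text, unlike the explicit tokens.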
## Settings Resolution Chain
After extracting tokens from arguments, resolve the delegation state using this precedence chain:
1. **Argument flag** -- `delegate:codex` or `delegate:local` from the current invocation (highest priority)
2. **Config file** -- extract settings from the config block below. The value `codex` for `work_delegate` activates delegation; `false` deactivates.
3. **Hard default** -- `false` (delegation off)
Config (pre-resolved): !cat "$(git rev-parse --show-toplevel 2>/dev/null)/.compound-engineering/config.local.yaml" 2>/dev/null || cat "$(dirname "$(git rev-parse --path-format=absolute --git-common-dir 2>/dev/null)")/.compound-engineering/config.local.yaml" 2>/dev/null || echo 'NO_CONFIG'
If the block above contains YAML key-value pairs, extract values for the keys listed below. If it shows `NO_CONFIG`, the file does not exist — all settings fall through to defaults. If it shows an unresolved command string, read `.compound-engineering/config.local.yaml` from the repo root using the native file-read tool (e.g., Read in Claude Code, read_file in Codex). If the file does not exist, all settings fall through to defaults.
If any setting has an unrecognized value, fall through to the hard default for that setting.
Config keys:
- `work_delegate` -- `codex`, or default `false`
- `work_delegate_consent` -- `true`, or default `false`
- `work_delegate_sandbox` -- `yolo` (default) or `full-auto`
- `work_delegate_decision` -- `auto` (default) or `ask`
- `work_delegate_model` -- Codex model to use (default `gpt-5.4`). Passthrough — any valid model name accepted.
- `work_delegate_effort` -- `minimal`, `low`, `medium`, `high` (default), or `xhigh`
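For flat `key: value` lines like these, extraction can be sketched with `sed` (an illustrative sketch -- it assumes the config stays flat, with no nesting or quoting; a real YAML parser such as `yq` would be more robust):

```sh
# Sketch: read one flat "key: value" pair from a config file,
# falling back to a default when the key or file is missing.
config_get() {
  file="$1"; key="$2"; default="$3"
  value=$(sed -n "s/^${key}:[[:space:]]*//p" "$file" 2>/dev/null | head -n 1)
  if [ -n "$value" ]; then printf '%s\n' "$value"; else printf '%s\n' "$default"; fi
}
```

For example, `config_get cfg.yaml work_delegate false` yields `codex` when the file sets it and `false` otherwise, which matches the fall-through behavior described above.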
Store the resolved state for downstream consumption:
- `delegation_active` -- boolean, whether delegation mode is on
- `delegation_source` -- `argument`, `config`, or `default` -- how delegation was resolved (used by the environment guard to decide notification verbosity)
- `sandbox_mode` -- `yolo` or `full-auto` (from config, default `yolo`)
- `consent_granted` -- boolean (from config `work_delegate_consent`)
- `delegate_model` -- string (from config, default `gpt-5.4`)
- `delegate_effort` -- string (from config, default `high`)
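The precedence chain resolving `delegation_active` and `delegation_source` can be sketched as follows (illustrative names; the two inputs stand for the parsed argument flag and the config's `work_delegate` value):

```sh
# Sketch: argument flag > config value > hard default.
# Prints "<delegation_active> <delegation_source>".
resolve_delegation() {
  arg_flag="$1"     # "codex", "local", or "" when no token was given
  config_val="$2"   # work_delegate from config, or "" when unset
  if   [ "$arg_flag" = "codex" ];   then echo "true argument"
  elif [ "$arg_flag" = "local" ];   then echo "false argument"
  elif [ "$config_val" = "codex" ]; then echo "true config"
  elif [ "$config_val" = "false" ]; then echo "false config"
  else                                   echo "false default"
  fi
}
```

Note that an unrecognized config value falls through to the hard default, as the settings rules above require.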
## Execution Workflow

### Phase 0: Input Triage
Determine how to proceed based on what was provided in `<input_document>`.
**Plan document** (input is a file path to an existing plan, specification, or todo file) → skip to Phase 1.

**Bare prompt** (input is a description of work, not a file path):

**Scan the work area**

- Identify files likely to change based on the prompt
- Find existing test files for those areas (search for test/spec files that import, reference, or share names with the implementation files)
- Note local patterns and conventions in the affected areas
**Assess complexity and route**

| Complexity | Signals | Action |
|---|---|---|
| Trivial | 1-2 files, no behavioral change (typo, config, rename) | Proceed to Phase 1 step 2 (environment setup), then implement directly — no task list, no execution loop. Apply Test Discovery if the change touches behavior-bearing code |
| Small / Medium | Clear scope, under ~10 files | Build a task list from discovery. Proceed to Phase 1 step 2 |
| Large | Cross-cutting, architectural decisions, 10+ files, touches auth/payments/migrations | Inform the user this would benefit from `/ce:brainstorm` or `/ce:plan` to surface edge cases and scope boundaries. Honor their choice. If proceeding, build a task list and continue to Phase 1 step 2 |
### Phase 1: Quick Start

**Step 1: Read Plan and Clarify** (skip if arriving from Phase 0 with a bare prompt)

- Read the work document completely
- Treat the plan as a decision artifact, not an execution script
- If the plan includes sections such as Implementation Units, Work Breakdown, Requirements Trace, Files, Test Scenarios, or Verification, use those as the primary source material for execution
- Check for an Execution note on each implementation unit — these carry the plan's execution posture signal for that unit (for example, test-first or characterization-first). Note them when creating tasks.
- Check for a Deferred to Implementation or Implementation-Time Unknowns section — these are questions the planner intentionally left for you to resolve during execution. Note them before starting so they inform your approach rather than surprising you mid-task
- Check for a Scope Boundaries section — these are explicit non-goals. Refer back to them if implementation starts pulling you toward adjacent work
- Review any references or links provided in the plan
- If the user explicitly asks for TDD, test-first, or characterization-first execution in this session, honor that request even if the plan has no Execution note
- If anything is unclear or ambiguous, ask clarifying questions now
- Get user approval to proceed
- Do not skip this step — better to ask questions now than to build the wrong thing
**Step 2: Setup Environment**

First, check the current branch:

```bash
current_branch=$(git branch --show-current)
default_branch=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')

# Fallback if remote HEAD isn't set
if [ -z "$default_branch" ]; then
  default_branch=$(git rev-parse --verify origin/main >/dev/null 2>&1 && echo "main" || echo "master")
fi
```
**If already on a feature branch** (not the default branch):

First, check whether the branch name is meaningful — a name like `feat/crowd-sniff` or `fix/email-validation` tells future readers what the work is about. Auto-generated worktree names (e.g., `worktree-jolly-beaming-raven`) or other opaque names do not.

If the branch name is meaningless or auto-generated, suggest renaming it before continuing:

```bash
git branch -m <meaningful-name>
```

Derive the new name from the plan title or work description (e.g., `feat/crowd-sniff`). Present the rename as a recommended option alongside continuing as-is.

Then ask: "Continue working on [current_branch], or create a new branch?"
- If continuing (with or without rename), proceed to step 3
- If creating a new branch, follow Option A or B below
**If on the default branch**, choose how to proceed:

**Option A: Create a new branch**

```bash
git pull origin [default_branch]
git checkout -b feature-branch-name
```

Use a meaningful name based on the work (e.g., `feat/user-authentication`, `fix/email-validation`).

**Option B: Use a worktree** (recommended for parallel development)

skill: git-worktree

The skill will create a new branch from the default branch in an isolated worktree.

**Option C: Continue on the default branch**

- Requires explicit user confirmation
- Only proceed after the user explicitly says "yes, commit to [default_branch]"
- Never commit directly to the default branch without explicit permission
**Recommendation:** Use a worktree if:

- You want to work on multiple features simultaneously
- You want to keep the default branch clean while experimenting
- You plan to switch between branches frequently
**Step 3: Create Todo List** (skip if Phase 0 already built one, or if Phase 0 routed as Trivial)

- Use your available task tracking tool (e.g., TodoWrite, task lists) to break the plan into actionable tasks
- Derive tasks from the plan's implementation units, dependencies, files, test targets, and verification criteria
- Carry each unit's Execution note into the task when present
- For each unit, read the Patterns to follow field before implementing — these point to specific files or conventions to mirror
- Use each unit's Verification field as the primary "done" signal for that task
- Do not expect the plan to contain implementation code, micro-step TDD instructions, or exact shell commands
- Include dependencies between tasks
- Prioritize based on what needs to be done first
- Include testing and quality check tasks
- Keep tasks specific and completable
**Step 4: Choose Execution Strategy**

**Delegation routing gate:** If `delegation_active` is true AND the input is a plan file (not a bare prompt), read `references/codex-delegation-workflow.md` and follow its Pre-Delegation Checks and Delegation Decision flow. If all checks pass and delegation proceeds, force serial execution and proceed directly to Phase 2 using the workflow's batched execution loop. If any check disables delegation, fall through to the standard strategy table below. If delegation is active but the input is a bare prompt (no plan file), set `delegation_active` to false with a brief note: "Codex delegation requires a plan file -- using standard mode." and continue with the standard strategy selection below.
After creating the task list, decide how to execute based on the plan's size and dependency structure:
| Strategy | When to use |
|---|---|
| Inline | 1-2 small tasks, or tasks needing user interaction mid-flight. Default for bare-prompt work — bare prompts rarely produce enough structured context to justify subagent dispatch |
| Serial subagents | 3+ tasks with dependencies between them. Each subagent gets a fresh context window focused on one unit — prevents context degradation across many tasks. Requires plan-unit metadata (Goal, Files, Approach, Test scenarios) |
| Parallel subagents | 3+ tasks that pass the Parallel Safety Check (below). Dispatch independent units simultaneously, run dependent units after their prerequisites complete. Requires plan-unit metadata |
**Parallel Safety Check** — required before choosing parallel dispatch:

- Build a file-to-unit mapping from every candidate unit's Files: section (Create, Modify, and Test paths)
- Check for intersection — any file path appearing in 2+ units means overlap
- If any overlap is found, downgrade to serial subagents. Log the reason (e.g., "Units 2 and 4 share config/routes.rb — using serial dispatch"). Serial subagents still provide context-window isolation without shared-directory risks
Even with no file overlap, parallel subagents sharing a working directory face git index contention (concurrent staging/committing corrupts the index) and test interference (concurrent test runs pick up each other's in-progress changes). The parallel subagent constraints below mitigate these.
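The file-intersection check itself is mechanical enough to sketch (illustrative -- it assumes one file list per candidate unit, one path per line):

```sh
# Sketch: print any path claimed by 2+ unit file lists.
# Non-empty output means overlap -> downgrade to serial subagents.
find_overlaps() {
  sort "$@" | uniq -d
}
```

Any line it prints (e.g., `config/routes.rb`) names a shared file, and doubles as the reason to log for the downgrade.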
Subagent dispatch uses your available subagent or task spawning mechanism. For each unit, give the subagent:

- The full plan file path (for overall context)
- The specific unit's Goal, Files, Approach, Execution note, Patterns, Test scenarios, and Verification
- Any resolved deferred questions relevant to that unit
- An instruction to check whether the unit's test scenarios cover all applicable categories (happy paths, edge cases, error paths, integration) and supplement gaps before writing tests
**Parallel subagent constraints** — when dispatching units in parallel (not serial or inline):

- Instruct each subagent: "Do not stage files (`git add`), create commits, or run the project test suite. The orchestrator handles testing, staging, and committing after all parallel units complete."
- These constraints prevent git index contention and test interference between concurrent subagents
**Permission mode:** Omit the `mode` parameter when dispatching subagents so the user's configured permission settings apply. Do not pass `mode: "auto"` — it overrides user-level settings like `bypassPermissions`.
**After each subagent completes (serial mode):**

- Review the subagent's diff — verify changes match the unit's scope and Files: list
- Run the relevant test suite to confirm the tree is healthy
- If tests fail, diagnose and fix before proceeding — do not dispatch dependent units on a broken tree
- Update the plan checkboxes and task list
- Dispatch the next unit
**After all parallel subagents in a batch complete:**

- Wait for every subagent in the current parallel batch to finish before acting on any of their results
- Cross-check for discovered file collisions: compare the actual files modified by all subagents in the batch (not just their declared Files: lists). Subagents may create or modify files not anticipated during planning — this is expected, since plans describe what, not how. A collision only matters when 2+ subagents in the same batch modified the same file. In a shared working directory, only the last writer's version survives — the other unit's changes to that file are lost. If a collision is detected: commit all non-colliding files from all units first, then re-run the affected units serially for the shared file so each builds on the other's committed work
- For each completed unit, in dependency order: review the diff, run the relevant test suite, stage only that unit's files, and commit with a conventional message derived from the unit's Goal
- If tests fail after committing a unit's changes, diagnose and fix before committing the next unit
- Update the plan checkboxes and task list
- Dispatch the next batch of independent units, or the next dependent unit
### Phase 2: Execute

**Task Execution Loop**

For each task in priority order:

```
while (tasks remain):
  - Mark task as in-progress
  - Read any referenced files from the plan or discovered during Phase 0
  - Look for similar patterns in codebase
  - Find existing test files for implementation files being changed (Test Discovery — see below)
  - If delegation_active: branch to the Codex Delegation Execution Loop
    (see references/codex-delegation-workflow.md)
  - Otherwise: implement following existing conventions
  - Add, update, or remove tests to match implementation changes (see Test Discovery below)
  - Run System-Wide Test Check (see below)
  - Run tests after changes
  - Assess testing coverage: did this task change behavior? If yes, were tests written or updated?
    If no tests were added, is the justification deliberate (e.g., pure config, no behavioral change)?
  - Mark task as completed
  - Evaluate for incremental commit (see below)
```
When a unit carries an Execution note, honor it. For test-first units, write the failing test before implementation for that unit. For characterization-first units, capture existing behavior before changing it. For units without an Execution note, proceed pragmatically.
**Guardrails for execution posture:**

- Do not write the test and implementation in the same step when working test-first
- Do not skip verifying that a new test fails before implementing the fix or feature
- Do not over-implement beyond the current behavior slice when working test-first
- Skip test-first discipline for trivial renames, pure configuration, and pure styling work
**Test Discovery** — Before implementing changes to a file, find its existing test files (search for test/spec files that import, reference, or share naming patterns with the implementation file). When a plan specifies test scenarios or test files, start there, then check for additional test coverage the plan may not have enumerated. Changes to implementation files should be accompanied by corresponding test updates — new tests for new behavior, modified tests for changed behavior, removed or updated tests for deleted behavior.
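Locating existing test files by shared base name might look like this (a sketch; real projects also need the import/reference grep mentioned above, and path conventions vary):

```sh
# Sketch: find test/spec files whose names share the implementation
# file's base name. Skips node_modules; extend the pruning as needed.
discover_tests() {
  base=$(basename "$1" | sed 's/\.[^.]*$//')
  find . -path ./node_modules -prune -o -type f \
    \( -name "*${base}*test*" -o -name "*${base}*spec*" \
       -o -name "test*${base}*" -o -name "spec*${base}*" \) -print
}
```

For example, `discover_tests app/models/user.rb` would surface files like `test/models/user_test.rb` or `spec/models/user_spec.rb` when they exist.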
**Test Scenario Completeness** — Before writing tests for a feature-bearing unit, check whether the plan's Test scenarios cover all categories that apply to this unit. If a category is missing or scenarios are vague (e.g., "validates correctly" without naming inputs and expected outcomes), supplement from the unit's own context before writing tests:

| Category | When it applies | How to derive if missing |
|---|---|---|
| Happy path | Always for feature-bearing units | Read the unit's Goal and Approach for core input/output pairs |
| Edge cases | When the unit has meaningful boundaries (inputs, state, concurrency) | Identify boundary values, empty/nil inputs, and concurrent access patterns |
| Error/failure paths | When the unit has failure modes (validation, external calls, permissions) | Enumerate invalid inputs the unit should reject, permission/auth denials it should enforce, and downstream failures it should handle |
| Integration | When the unit crosses layers (callbacks, middleware, multi-service) | Identify the cross-layer chain and write a scenario that exercises it without mocks |
**System-Wide Test Check** — Before marking a task done, pause and ask:

| Question | What to do |
|---|---|
| What fires when this runs? | Callbacks, middleware, observers, event handlers — trace two levels out from your change. Read the actual code (not docs) for callbacks on models you touch, middleware in the request chain, `after_*` hooks. |
| Do my tests exercise the real chain? | If every dependency is mocked, the test proves your logic works in isolation — it says nothing about the interaction. Write at least one integration test that uses real objects through the full callback/middleware chain. No mocks for the layers that interact. |
| Can failure leave orphaned state? | If your code persists state (DB row, cache, file) before calling an external service, what happens when the service fails? Does retry create duplicates? Trace the failure path with real objects. If state is created before the risky call, test that failure cleans up or that retry is idempotent. |
| What other interfaces expose this? | Mixins, DSLs, alternative entry points (Agent vs Chat vs ChatMethods). Grep for the method/behavior in related classes. If parity is needed, add it now — not as a follow-up. |
| Do error strategies align across layers? | Retry middleware + application fallback + framework error handling — do they conflict or create double execution? List the specific error classes at each layer. Verify your rescue list matches what the lower layer actually raises. |
**When to skip:** Leaf-node changes with no callbacks, no state persistence, no parallel interfaces. If the change is purely additive (new helper method, new view partial), the check takes 10 seconds and the answer is "nothing fires, skip."

**When this matters most:** Any change that touches models with callbacks, error handling with fallback/retry, or functionality exposed through multiple interfaces.
**Incremental Commits**
After completing each task, evaluate whether to create an incremental commit:
| Commit when... | Don't commit when... |
|---|---|
| Logical unit complete (model, service, component) | Small part of a larger unit |
| Tests pass + meaningful progress | Tests failing |
| About to switch contexts (backend → frontend) | Purely scaffolding with no behavior |
| About to attempt risky/uncertain changes | Would need a "WIP" commit message |
**Heuristic:** "Can I write a commit message that describes a complete, valuable change? If yes, commit. If the message would be 'WIP' or 'partial X', wait."
If the plan has Implementation Units, use them as a starting guide for commit boundaries — but adapt based on what you find during implementation. A unit might need multiple commits if it's larger than expected, or small related units might land together. Use each unit's Goal to inform the commit message.
**Commit workflow:**

1. Verify tests pass using the project's test command (e.g., `bin/rails test`, `npm test`, `pytest`, `go test`).
2. Stage only files related to this logical unit (not `git add .`):

   ```bash
   git add <files related to this logical unit>
   ```

3. Commit with a conventional message:

   ```bash
   git commit -m "feat(scope): description of this unit"
   ```
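The three steps combine into one helper (a sketch -- `TEST_CMD` is a placeholder for the project's real test command, and the file list comes from the task at hand):

```sh
# Sketch: commit one logical unit -- verify tests pass, stage only
# this unit's files, then commit with a conventional message.
commit_unit() {
  msg="$1"; shift
  $TEST_CMD || return 1      # 1. tests must pass before anything is staged
  git add -- "$@"            # 2. stage only this unit's files
  git commit -m "$msg"       # 3. conventional commit message
}
```

For example: `TEST_CMD="npm test" commit_unit "feat(auth): add session model" app/models/session.rb test/models/session_test.rb`.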
**Handling merge conflicts:** If conflicts arise during rebasing or merging, resolve them immediately. Incremental commits make conflict resolution easier since each commit is small and focused.
**Note:** Incremental commits use clean conventional messages without attribution footers. The final Phase 4 commit/PR includes the full attribution.
**Parallel subagent mode:** When units run as parallel subagents, the subagents do not commit — the orchestrator handles staging and committing after the entire parallel batch completes (see Parallel subagent constraints in Phase 1 Step 4). The commit guidance in this section applies to inline and serial execution, and to the orchestrator's commit decisions after parallel batch completion.
**Follow Existing Patterns**

- The plan should reference similar code — read those files first
- Match naming conventions exactly
- Reuse existing components where possible
- Follow project coding standards (see AGENTS.md; use CLAUDE.md only if the repo still keeps a compatibility shim)
- When in doubt, grep for similar implementations
**Test Continuously**

- Run relevant tests after each significant change
- Don't wait until the end to test
- Fix failures immediately
- Add new tests for new behavior, update tests for changed behavior, remove tests for deleted behavior
- Unit tests with mocks prove logic in isolation. Integration tests with real objects prove the layers work together. If your change touches callbacks, middleware, or error handling — you need both.
**Simplify as You Go**
After completing a cluster of related implementation units (or every 2-3 units), review recently changed files for simplification opportunities — consolidate duplicated patterns, extract shared helpers, and improve code reuse and efficiency. This is especially valuable when using subagents, since each agent works with isolated context and can't see patterns emerging across units.
Don't simplify after every single unit — early patterns may look duplicated but diverge intentionally in later units. Wait for a natural phase boundary or when you notice accumulated complexity.
If a `/simplify` skill or equivalent is available, use it. Otherwise, review the changed files yourself for reuse and consolidation opportunities.
**Figma Design Sync** (if applicable)

For UI work with Figma designs:

- Implement components following design specs
- Use the figma-design-sync agent iteratively to compare
- Fix visual differences identified
- Repeat until implementation matches design
**Frontend Design Guidance** (if applicable)

For UI tasks without a Figma design -- where the implementation touches view, template, component, layout, or page files, creates user-visible routes, or the plan contains explicit UI/frontend/design language:

- Load the frontend-design skill before implementing
- Follow its detection, guidance, and verification flow
- If the skill produced a verification screenshot, it satisfies Phase 4's screenshot requirement -- no need to capture separately. If the skill fell back to mental review (no browser access), Phase 4's screenshot capture still applies
**Track Progress**

- Keep the task list updated as you complete tasks
- Note any blockers or unexpected discoveries
- Create new tasks if scope expands
- Keep the user informed of major milestones
### Phase 3-4: Quality Check and Ship It

When all Phase 2 tasks are complete and execution transitions to quality check, read `references/shipping-workflow.md` for the full shipping workflow: quality checks, code review, final validation, PR creation, and notification.
## Codex Delegation Mode

When `delegation_active` is true after argument parsing, read `references/codex-delegation-workflow.md` for the complete delegation workflow: pre-checks, batching, prompt template, execution loop, and result classification.
## Key Principles

**Start Fast, Execute Faster**

- Get clarification once at the start, then execute
- Don't wait for perfect understanding — ask questions and move
- The goal is to finish the feature, not create perfect process
**The Plan is Your Guide**

- Work documents should reference similar code and patterns
- Load those references and follow them
- Don't reinvent — match what exists
**Test As You Go**

- Run tests after each change, not at the end
- Fix failures immediately
- Continuous testing prevents big surprises
**Quality is Built In**

- Follow existing patterns
- Write tests for new code
- Run linting before pushing
- Review every change — inline for simple additive work, full review for everything else
**Ship Complete Features**

- Mark all tasks completed before moving on
- Don't leave features 80% done
- A finished feature that ships beats a perfect feature that doesn't
## Common Pitfalls to Avoid

- **Analysis paralysis** - Don't overthink; read the plan and execute
- **Skipping clarifying questions** - Ask now, not after building the wrong thing
- **Ignoring plan references** - The plan has links for a reason
- **Testing at the end** - Test continuously or suffer later
- **Forgetting to track progress** - Update task status as you go or lose track of what's done
- **80% done syndrome** - Finish the feature, don't move on early
- **Skipping review** - Every change gets reviewed; only the depth varies