beads-from-plan

Convert markdown implementation plans into beads tasks. Use when the user says "create tasks from plan", "plan to beads", "bd from plan", "break down plan", "create beads from markdown", or has a large markdown plan that needs to be decomposed into trackable tasks.

Install skill "beads-from-plan" with this command: npx skills add deligoez/beads-from-plan/deligoez-beads-from-plan-beads-from-plan

Beads From Plan

Convert markdown implementation plans into structured beads tasks with full coverage guarantees.

| Mode | Triggers | Action |
|------|----------|--------|
| DECOMPOSE | "create tasks from plan", "plan to beads", "break down plan" | Analyze markdown -> JSON task plan -> create beads |
| VERIFY | "check plan coverage", "verify tasks" | Validate existing plan JSON against source markdown |

Purpose: Ensure every section of a plan becomes a trackable, dependency-ordered beads task with quality gates.

Execute autonomously. Never skip sections.

Script Path

The bd-from-plan script is at scripts/bd-from-plan relative to this skill's base directory. Use the base directory provided at skill activation to construct the full path:

# The base directory is shown as "Base directory for this skill: <path>" when the skill loads.
# Create a plan directory with mktemp
PLAN_DIR=$(mktemp -d /tmp/task-plan-XXXXXXXX)
# Write _plan.json and epic-*.json files into PLAN_DIR (see steps below)
<base_directory>/scripts/bd-from-plan "$PLAN_DIR"

The Process

Overview

Markdown Plan (2000+ lines)
        |
        v
   AI Analysis (per-epic, parallelizable)
   - Parse all headings (##, ###, ####)
   - Identify epics (top-level sections)
   - Identify tasks (sub-sections)
   - Map dependencies between tasks
   - Verify 100% section coverage
        |
        v
  Plan Directory (mktemp -d)
   plan-dir/
     _plan.json           Global: prefix, workflow, coverage
     epic-auth.json       Epic + tasks (full details)
     epic-payment.json    Epic + tasks (full details)
        |
        v
  bd-from-plan script
   - Merges _plan.json + epic-*.json files
   - Validates structure and coverage
   - Rejects if unmapped sections exist
   - Detects circular dependencies
   - Topological sort by dependencies
   - Creates epics and tasks in order
   - Wires dependencies via bd dep add
   - Reports summary

Critical Rules

100% Coverage Guarantee (STRICT)

Every content section in the markdown MUST map to at least one task.

This is the most important rule. A plan section that doesn't become a task will be forgotten.

| Section Type | Action |
|--------------|--------|
| Implementation section | Map to a task |
| Overview/Introduction | Mark as context_only in coverage |
| Table of Contents | Mark as context_only |
| References/Links | Mark as context_only |
| Everything else | MUST become a task |

The script rejects plans with unmapped sections. Fix coverage before proceeding.

Dependency Accuracy (STRICT)

Dependencies must reflect real implementation order, not document order.

  • A task that uses a model depends on the task that creates it
  • A task that writes tests depends on the task that creates the code
  • A task that configures something depends on the task that installs it
  • Document order (section 1 before section 2) is NOT a dependency

Detecting Dependencies

Ask for each task:

  1. "What must exist before I can start this?"
  2. "What would break if I did this first?"

If the answer to both is "nothing" -> no dependencies.

Dependency Format

The script uses smart resolution — both formats work transparently:

| Dependency Type | Format | Example |
|-----------------|--------|---------|
| Same-epic | Just task ID | "depends_on": ["create-model"] |
| Cross-epic | epicId-taskId | "depends_on": ["model-create-model"] |

Resolution: tries exact match against all epicId-taskId first, then falls back to same-epic.
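
A shell sketch of that resolution order (hypothetical; the script's internals may differ):

```shell
# Hypothetical sketch: exact epicId-taskId match first, then the
# same-epic fallback by prefixing the current epic's id.
resolve_dep() {
  dep=$1 epic=$2 all=$3            # all: newline-separated epicId-taskId list
  if printf '%s\n' "$all" | grep -Fqx "$dep"; then
    printf '%s\n' "$dep"           # already a full cross-epic reference
  elif printf '%s\n' "$all" | grep -Fqx "$epic-$dep"; then
    printf '%s\n' "$epic-$dep"     # same-epic shorthand resolved
  else
    return 1                       # unknown dependency
  fi
}

all='auth-user-model
payment-charge'
resolve_dep user-model auth "$all"      # -> auth-user-model
resolve_dep payment-charge auth "$all"  # -> payment-charge
```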

Circular Dependencies

The script detects and rejects circular dependencies. If you find a cycle:

  • Break it by splitting one task into two
  • The setup part has no dependency, the integration part depends on the other
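
The cycle check can be demonstrated with the standard tsort utility (a sketch, not the script's actual implementation):

```shell
# Each input line is "prerequisite dependent": model before service, etc.
printf '%s\n' 'model service' 'service controller' | tsort
# -> model, service, controller (the only valid order for this chain)

# A cycle makes GNU tsort exit nonzero instead of producing an order:
printf '%s\n' 'a b' 'b a' | tsort >/dev/null 2>&1 || echo 'cycle detected'
```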

Atomic Task Decomposition (STRICT)

Each task MUST be completable by an AI agent in a single execution AND expressible as one commit.

This is the second most important rule (after 100% coverage). These tasks are designed for AI agent execution (including parallel agents), not human sessions. Over-broad tasks cause:

  • Context rot — accuracy drops 20-50% as agent context grows from 10K→100K tokens (Chroma research)
  • Success cliff — SWE-bench: <15 min tasks = 70%+ success, 1+ hour = 23% success
  • Poor commits — impossible to create atomic commits from broad tasks
  • Tracking failure — "50% done" tasks are invisible in beads
  • Parallelism blocked — coarse tasks can't be distributed across parallel agents

Rule 1: Single Commit Test

If you can't describe the task's output in ONE commit message, split it.

| Task Title | Commit Message | Result |
|------------|----------------|--------|
| "Create User model" | feat(User): create model with migration | PASS |
| "Create config, migration, model, service" | Can't fit in one message | FAIL — split into 4 |

Rule 2: One File Rule

Each new file creation = separate task.

If a task creates 3 new files, it should be 3 tasks.

| Files Created | Tasks | Why |
|---------------|-------|-----|
| Model.php | 1 task | Single file |
| Model.php + ModelTest.php | 1 task | Code + its test = one concern |
| Model.php + Migration.php + Factory.php | 3 tasks | Different concerns |
| Service.php + ServiceInterface.php | 1 task | Compile-time dependency |

Exception: A source file + its direct test file = one task (they share one concern).

Rule 3: Maximum 15 Minutes

Implementation tasks MUST NOT exceed 15 minutes.

Tasks are executed by AI agents, not humans. There is no minimum — a 1-minute task is perfectly valid. The goal is maximum atomicity for agent success and parallelization.

| Estimate | Action |
|----------|--------|
| 1–15 min | Ideal agent task — high success rate, parallelizable |
| 16–30 min | MUST split — agent accuracy degrades significantly |
| > 30 min | MUST split aggressively — this is multiple tasks disguised as one |

Why 15 minutes? Data-driven: METR shows Claude 50% success at ~50 min with non-linear degradation. SWE-bench shows <15 min tasks achieve 70%+ success. Setting the max at 15 minutes keeps each task well within the high-success zone.

Rule 4: Verb-Object Test

A good task title has ONE verb and ONE object.

| Title | Analysis | Result |
|-------|----------|--------|
| "Create MachineStateLock model" | create + model | PASS |
| "Add config and create migration" | add + config, create + migration | FAIL — 2 tasks |
| "Implement service with exception handling" | implement + service (exception is part of it) | PASS |

Red flag words: "and", "+", commas separating nouns. These usually indicate multiple concerns jammed into one task.

Rule 5: Count the Files

If a task implies creating or modifying >2 files, it's too broad.

Count the files mentioned or implied in the description. Source + test = 1 logical file.

Rule 6: Acceptance Criteria Count

If acceptance criteria lists >3 distinct checkpoints, the task combines multiple concerns.

| Acceptance Criteria | Count | Result |
|---------------------|-------|--------|
| "Model exists. Migration runs." | 2 | PASS |
| "Manager acquires. Handle releases. Stale healed. Migration publishable." | 4 | FAIL — split |

Rule 7: Noun Count in Title

Count the distinct nouns (objects being created/modified) in the title. More than 2 = split.

| Title | Nouns | Result |
|-------|-------|--------|
| "Create MachineLockManager service" | 1 (MachineLockManager) | PASS |
| "Lock infrastructure: config, migration, model, service, exception" | 5 | FAIL — 5 tasks |

Recursive Decomposition Algorithm

After initial task identification, the agent MUST run this loop:

FOR each task:
  1. Single Commit Test → "Can I write ONE commit message for this?"
  2. Verb-Object Test → "Does the title have ONE verb + ONE object?"
  3. Noun Count → "How many distinct things am I creating?"
  4. File Count → "How many files will this create/modify?"
  5. Time Check → "Is this ≤ 15 minutes?"
  6. Acceptance Count → "Are there ≤ 3 acceptance criteria?"

  IF any check fails:
    → Split the task along the failing dimension
    → Re-run ALL checks on each sub-task

  REPEAT until every task passes every check.

Decomposition Example

BEFORE (1 broad task, 120 min):

{
  "id": "lock",
  "title": "Lock infrastructure: config, migration, model, service, exception",
  "estimate_minutes": 120,
  "acceptance": "MachineLockManager acquires/blocks/times out. MachineLockHandle releases/extends. Stale locks self-healed. Migration publishable."
}

Failures: Single Commit ❌, Verb-Object ❌, Noun Count ❌ (5), File Count ❌ (5+), Time ❌ (120m), Acceptance ❌ (4+)

AFTER (6 atomic tasks):

[
  {"id": "config",       "title": "Add parallel_dispatch config section",        "estimate_minutes": 5},
  {"id": "migration",    "title": "Create machine_locks migration",              "estimate_minutes": 5},
  {"id": "model",        "title": "Create MachineStateLock Eloquent model",      "estimate_minutes": 10},
  {"id": "lock-manager", "title": "Create MachineLockManager service",           "estimate_minutes": 15},
  {"id": "lock-handle",  "title": "Create MachineLockHandle value object",       "estimate_minutes": 10},
  {"id": "lock-ex",      "title": "Create LockTimeoutException class",           "estimate_minutes": 5}
]

Each task: one commit, one verb, one file, ≤ 15 min. Parallelizable where dependencies allow.

Expected Task Counts

Use this as calibration — if your count falls significantly below these ranges, you're under-decomposing. With a 15-minute maximum, expect more tasks than traditional approaches:

| Plan Size | Expected Tasks |
|-----------|----------------|
| 100 lines | 12–25 tasks |
| 500 lines | 40–70 tasks |
| 1000 lines | 70–120 tasks |
| 2000 lines | 120–200 tasks |

Quality Gates

The quality gate is a single executable command that combines all quality checks for the project. The agent discovers available commands from the project (composer.json, package.json, Makefile, CI config) and combines them with &&.

Examples:

# PHP/Laravel project
composer lint && composer test && composer larastan

# Node.js project
npm run lint && npm run test && npm run typecheck

# Python project
ruff check . && pytest && mypy .

# Documentation-only tasks (no gate)
(leave quality_gate empty)

The agent MUST verify the quality gate command runs successfully before including it in the plan.

Commit Strategy

Each task's commit_strategy determines HOW the agent commits after the quality gate passes.

| Strategy | Agent Action |
|----------|--------------|
| agentic-commits | Invoke the /agentic-commits skill — it splits changes into atomic one-file-per-commit hunks with structured messages |
| conventional | git add changed files + git commit with type(scope): message format |
| manual | Do NOT commit — leave changes staged for user to handle |

Default: agentic-commits for all code tasks. The workflow-level default applies unless a task overrides it.


MODE 1: DECOMPOSE

Step 0: Ask User Preferences (MANDATORY)

Before reading the plan, ask the user two questions. Do NOT skip this step.

Question 1: Quality Gate Command

Ask: "What quality check commands should run after each task?"

Discovery approach: First, try to discover existing quality commands from the project:

  • Check composer.json scripts (e.g., lint, test, larastan, infection)
  • Check package.json scripts (e.g., lint, test, typecheck)
  • Check Makefile targets
  • Check CI config (.github/workflows/, .gitlab-ci.yml)

Present discovered commands to the user, or ask them to specify:

I found these quality commands in your project:
  - composer lint
  - composer test
  - composer larastan

Should I combine all of these as the quality gate, or do you want to customize?

The quality gate is a single executable command — combine multiple checks with &&:

composer lint && composer test && composer larastan

Step 0.5: Verify Quality Gate Command (MANDATORY)

Before generating the JSON plan, RUN the quality gate command to verify it works:

# Run the combined command
composer lint && composer test && composer larastan

If the command fails:

  • Ask the user to fix the issue or adjust the command
  • Do NOT proceed with JSON generation until the command succeeds
  • This prevents writing a broken command into every task
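
A minimal pre-flight sketch (the GATE value here is a placeholder; substitute the command agreed with the user):

```shell
# GATE is hypothetical; in practice it is the discovered/combined command.
GATE='true && true'
if sh -c "$GATE"; then
  echo 'gate verified; safe to write into every task'
else
  echo 'gate failed; fix the command before generating JSON' >&2
  exit 1
fi
```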

Question 2: Commit Strategy

Ask: "How should completed tasks be committed?"

Present options:

Commit Strategy options:
  [1] agentic-commits — atomic, one-file-per-commit, structured format (recommended)
  [2] conventional — conventional commit messages (feat:, fix:, etc.)
  [3] manual — no auto-commit, handle manually

Store in JSON

Record the user's choices in the workflow field of the JSON plan:

{
  "workflow": {
    "quality_gate": "composer lint && composer test && composer larastan",
    "commit_strategy": "agentic-commits",
    "checklist_note": "- [ ] Run quality gate: composer lint && composer test && composer larastan\n- [ ] Commit IMMEDIATELY after gate passes (do NOT batch with other tasks)\n- [ ] Commit using agentic-commits"
  }
}

The checklist_note is a human-readable summary of the workflow. The script appends it to every task's description as a checklist.

Individual tasks can override the workflow defaults via their own quality_gate and commit_strategy fields. If not overridden, the workflow defaults apply.
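
The override semantics map onto the shell's default-value expansion (a sketch with hypothetical values):

```shell
# Values hypothetical; mirrors quality_gate / commit_strategy resolution.
workflow_gate='composer lint && composer test'
task_gate=''                                   # empty means not overridden
printf '%s\n' "${task_gate:-$workflow_gate}"   # workflow default applies
task_gate='npm test'
printf '%s\n' "${task_gate:-$workflow_gate}"   # task-level override wins
```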


Step 1: Read the Plan

Delegate plan reading to keep the main agent's context clean.

For large plans (500+ lines), use this approach:

  1. Extract headings first — get the structural skeleton without reading content:

    grep -n '^#' plan.md
    
  2. Delegate full reading to a subagent — spawn a single Agent (subagent_type: "general-purpose") with a clear prompt:

    • Read the full plan file
    • Extract epics, tasks, dependencies, and coverage mapping
    • Return a structured summary (not the raw content)
  3. For smaller plans (<500 lines) — reading directly is fine, but prefer the Read tool over cat.

Why? Large plans (2000+ lines) consume 30-50K tokens of context. Delegating to a subagent keeps the main context free for JSON generation and validation. Chunked parallel reading was tested and rejected — cross-chunk dependency loss outweighs the speed gain.

Step 2: Extract Structure

Parse all headings and build a section tree:

# Title                          -> context_only
## 1. Authentication             -> epic: auth
### 1.1 User Model               -> task: auth-user-model
### 1.2 Login Flow                -> task: auth-login-flow
#### 1.2.1 JWT Tokens             -> task: auth-jwt-tokens
### 1.3 Password Reset            -> task: auth-password-reset
## 2. Authorization               -> epic: authz
### 2.1 Role System               -> task: authz-roles
...

Rules for section-to-task mapping:

  • # (h1) = Plan title -> context_only
  • ## (h2) = Epic candidates
  • ### (h3) = Task candidates
  • #### (h4) = Sub-task candidates (merge into parent task or create separate task)
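
A shell sketch of the heading extraction (the sample plan content is hypothetical):

```shell
# Sample plan; in practice this is the user's markdown plan file.
cat > /tmp/plan.md <<'EOF'
# Feature X Plan
## 1. Authentication
### 1.1 User Model
EOF

# Emit "line<TAB>depth<TAB>title" for every heading.
grep -n '^#' /tmp/plan.md | while IFS=: read -r num text; do
  hashes=${text%% *}                 # leading run of '#'
  printf '%s\tdepth=%s\t%s\n' "$num" "${#hashes}" "${text#"$hashes" }"
done
```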

Step 3: Identify Dependencies

For each task, scan the plan for:

  • "requires X", "depends on X", "after X"
  • References to entities created in other tasks
  • Logical ordering (create before use, define before implement)

Build a dependency list per task.

Step 4: Build Coverage Map

Create a table mapping EVERY heading to a task or context_only:

| Section | Mapped To | Status |
|---------|-----------|--------|
| # Plan Title | - | context_only |
| ## Overview | - | context_only |
| ## 1. Auth | epic:auth | mapped |
| ### 1.1 User Model | task:auth-user-model | mapped |
| ### 1.2 Login Flow | task:auth-login-flow | mapped |
| ## Appendix | - | context_only |

If ANY section is unmapped and not context_only -> STOP and fix.
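
One way to sketch the unmapped-section check with standard tools (section names hypothetical):

```shell
# Hypothetical inputs: every heading vs. headings claimed by tasks or
# marked context_only. comm -23 prints lines only in the first file.
printf '%s\n' '# Plan Title' '## Overview' '## 1. Auth' '### 1.1 User Model' \
  | sort > /tmp/all-sections
printf '%s\n' '# Plan Title' '## Overview' '## 1. Auth' \
  | sort > /tmp/covered-sections
comm -23 /tmp/all-sections /tmp/covered-sections
# -> ### 1.1 User Model  (unmapped: add a task or mark context_only)
```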

Step 5: Generate Plan Directory

Write the plan as a directory with separate files. This keeps each file small and recoverable.

PLAN_DIR=$(mktemp -d /tmp/task-plan-XXXXXXXX)

Step 5a: Write _plan.json (global metadata)

cat > "$PLAN_DIR/_plan.json" << 'EOF'
{
  "version": 1,
  "source": "docs/plans/feature-x.md",
  "prefix": "feat",
  "workflow": {
    "quality_gate": "composer lint && composer test && composer type",
    "commit_strategy": "agentic-commits",
    "checklist_note": "- [ ] Run quality gate: composer lint && composer test && composer type\n- [ ] Commit IMMEDIATELY after gate passes (do NOT batch with other tasks)\n- [ ] Commit using agentic-commits"
  },
  "coverage": {
    "total_sections": 12,
    "mapped_sections": 10,
    "unmapped": [],
    "context_only": ["# Feature X Plan", "## Overview"]
  }
}
EOF

Step 5b: Write one epic-{id}.json per epic

Write each epic as a separate file. Each file is small (~1-3K tokens), minimizing AI output errors.

cat > "$PLAN_DIR/epic-auth.json" << 'EOF'
{
  "id": "auth",
  "title": "Authentication System",
  "description": "Implement user authentication with JWT tokens and password reset",
  "priority": 1,
  "labels": ["auth", "security"],
  "source_sections": ["## 1. Authentication"],
  "tasks": [
    {
      "id": "user-model",
      "title": "Create User model and migration",
      "description": "Define User model with email, password_hash, timestamps. Create migration with proper indexes.",
      "type": "feature",
      "priority": 1,
      "estimate_minutes": 10,
      "labels": ["model"],
      "depends_on": [],
      "source_sections": ["### 1.1 User Model"],
      "source_lines": "15-42",
      "acceptance": "User model exists with migration. Factory and seeder work. PHPStan passes.",
      "commit_strategy": "agentic-commits"
    },
    {
      "id": "login-flow",
      "title": "Implement login endpoint with JWT",
      "description": "POST /api/login accepts email+password, returns JWT.",
      "type": "feature",
      "priority": 1,
      "estimate_minutes": 15,
      "depends_on": ["user-model"],
      "source_sections": ["### 1.2 Login Flow", "#### 1.2.1 JWT Tokens"],
      "source_lines": "43-98",
      "acceptance": "Login endpoint returns valid JWT. Invalid credentials return 401.",
      "commit_strategy": "agentic-commits"
    }
  ]
}
EOF

File naming convention: epic-{id}.json where {id} matches the epic's id field. Files are read in alphabetical order.
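
A quick demonstration of the alphabetical ordering (paths hypothetical):

```shell
# Globs expand in sorted order, so epic files are merged alphabetically.
mkdir -p /tmp/plan-demo
: > /tmp/plan-demo/epic-payment.json
: > /tmp/plan-demo/epic-auth.json
for f in /tmp/plan-demo/epic-*.json; do
  printf '%s\n' "$f"
done
# -> epic-auth.json first, then epic-payment.json
```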

Step 6: Execute Plan

<base_directory>/scripts/bd-from-plan "$PLAN_DIR"

The script will:

  1. Validate the JSON
  2. Check coverage (fail if unmapped sections)
  3. Detect circular dependencies (fail if cycles)
  4. Create epics in order
  5. Create tasks in topological order
  6. Wire up dependencies
  7. Print summary with bd ready output

Step 7: Verify

bd ready --pretty          # See what's ready to work on
bd graph                   # Visualize dependency graph
bd epic status             # Check epic completion status

MODE 2: VERIFY

Validate an existing plan JSON against its source markdown.

Step 1: Load Plan Directory

# Read the global metadata
jq . "$PLAN_DIR/_plan.json"

# Get the source markdown path
SOURCE=$(jq -r '.source' "$PLAN_DIR/_plan.json")

# List all epic files
ls "$PLAN_DIR"/epic-*.json

Step 2: Extract Markdown Headings

grep -n '^#' "$SOURCE" | head -50

Step 3: Cross-Reference

For each heading in the markdown:

  • Check if it appears in any task's source_sections
  • Check if it appears in coverage.context_only
  • If neither -> report as unmapped

Step 4: Report

Coverage Report:
  Total sections: 12
  Mapped to tasks: 10
  Context only: 2
  Unmapped: 0

  Status: PASS

ID Naming Convention

IDs follow a hierarchical pattern:

prefix-epicId-taskId

| Component | Format | Example |
|-----------|--------|---------|
| prefix | lowercase alpha | feat, auth, fix |
| epicId | kebab-case | auth, data-layer, ui |
| taskId | kebab-case | user-model, login-flow |
| Full ID | prefix-epic-task | feat-auth-user-model |

The script combines these automatically:

  • Epic ID: {prefix}-{epicId} -> feat-auth
  • Task ID: {prefix}-{epicId}-{taskId} -> feat-auth-user-model
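
In shell terms the combination is plain concatenation (a sketch with hypothetical values):

```shell
# Sketch of the ID combination performed by the script.
prefix=feat epic=auth task=user-model
epic_id="$prefix-$epic"            # feat-auth
task_id="$prefix-$epic-$task"      # feat-auth-user-model
printf '%s\n%s\n' "$epic_id" "$task_id"
```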

Keep IDs short but descriptive. Avoid abbreviations that aren't obvious.


Dry Run

Always do a dry run first for large plans:

<base_directory>/scripts/bd-from-plan --dry-run "$PLAN_DIR"

This validates everything and shows what WOULD be created without actually creating anything.


Error Recovery

| Error | Action |
|-------|--------|
| Unmapped sections | Add missing tasks or mark as context_only |
| Circular dependency | Split the cycle-causing task |
| Duplicate IDs | Rename conflicting task IDs |
| bd create fails | Check bd is initialized (bd info), check prefix |
| Partial creation | Script tracks created IDs, re-run skips existing |

bd CLI Reference

CRITICAL: NEVER use bd edit — it opens $EDITOR which blocks the agent. Use bd update with flags instead.

Priority Format

Priorities are integers 0–4, never strings. Using "high" or "medium" will error.

| Value | Meaning |
|-------|---------|
| 0 | Critical |
| 1 | High |
| 2 | Medium (default) |
| 3 | Low |
| 4 | Backlog |

Task Lifecycle

After tasks are created, the agent works through them using this cycle:

1. FIND    →  bd ready --pretty              # What can I work on?
2. READ    →  bd show <id>                   # Understand the task
3. CLAIM   →  bd update <id> --claim         # Atomic claim (fails if taken)
4. WORK    →  implement the task
5. GATE    →  run quality gate command        # Must pass before commit
6. COMMIT  →  commit using commit_strategy   # IMMEDIATELY after gate passes
7. CLOSE   →  bd close <id> --reason="..."   # Mark complete
8. NEXT    →  bd ready --pretty              # What's next?

Commit After Every Task (STRICT)

Each task MUST be committed IMMEDIATELY after its quality gate passes. Do NOT batch commits.

| Pattern | Result |
|---------|--------|
| Task 1 done → commit → Task 2 done → commit → Task 3 done → commit | CORRECT |
| Task 1 done → Task 2 done → Task 3 done → commit all | WRONG |

Why? Batching commits defeats the purpose of atomic tasks:

  • Impossible to revert a single task
  • bd close with no matching commit breaks traceability
  • Parallel agents can't see each other's progress
  • Context loss mid-session loses all uncommitted work

The commit strategy (from workflow.commit_strategy or task-level override) determines the format. For agentic-commits: use the /agentic-commits skill to split changes into atomic, one-file-per-commit hunks.

Finding Work

bd ready --pretty             # Tasks with all deps satisfied (no blockers)
bd list --status=open         # All open tasks
bd blocked                    # Tasks waiting on dependencies
bd search "query"             # Full-text search across all tasks

Claiming and Working

bd update <id> --claim                    # Atomic claim — fails if already claimed
bd update <id> --status=in_progress       # Manual status change
bd update <id> --notes="progress update"  # Add notes during work

Completing

bd close <id> --reason="Implemented with tests. All passing."
bd close <id1> <id2> <id3>               # Batch close (more efficient)

Issue Management

bd create "Title" --type=task --priority=2
bd create "Title" --type=bug --parent=<epic-id>
bd update <id> --title="New title"        # NEVER use bd edit
bd update <id> --add-label=foo
bd update <id> --defer="+2d"              # Hide from ready until date
bd rename <old-id> <new-id>               # Change issue ID

Dependencies

bd dep add <issue> <depends-on>           # issue depends on depends-on
bd dep tree <id>                          # Text dependency tree
bd graph <id> --compact                   # Visual dependency graph

Epics and Hierarchy

bd epic status                            # Epic completion percentages
bd children <id>                          # List epic's children

Session End Protocol

Before ending a session:

  1. bd close all completed tasks
  2. Check bd ready --pretty — report what's next
  3. bd sync --from-main if on an ephemeral branch

Issue Types

task | bug | feature | epic | chore

Issue Statuses

| Status | Meaning |
|--------|---------|
| open | Not started |
| in_progress | Being worked on |
| blocked | Waiting on dependency |
| deferred | Hidden until defer date |
| closed | Completed |

Quick Reference

| Command | Purpose |
|---------|---------|
| bd-from-plan plan-dir/ | Create tasks from plan directory |
| bd-from-plan --dry-run plan-dir/ | Preview without creating |
| bd ready --pretty | Show next available tasks |
| bd show <id> | Task details |
| bd update <id> --claim | Claim a task before working |
| bd close <id> --reason="..." | Complete a task |
| bd graph | Dependency visualization |
| bd dep tree <id> | Show task dependency tree |
| bd epic status | Epic completion overview |
| bd blocked | Tasks waiting on dependencies |
| bd search "query" | Full-text search |
