subtask-orchestration

Orchestrate parallel subtasks across multiple repositories using the vibe-kanban MCP server. Use this skill when: (1) work needs to be distributed across multiple repos, (2) independent tasks need to run in parallel, (3) you need to create subtasks and collect their results, or (4) multiple agents must coordinate via kanban task descriptions. Triggers: "subtask", "multi-repo", "parallel tasks", "distribute work", "orchestrate".

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and any repository scripts before running them.

Install skill "subtask-orchestration" with this command: npx skills add apocalypseyun/skills/apocalypseyun-skills-subtask-orchestration

Subtask Orchestration

Coordinate parallel work across repositories by creating subtasks, launching workspace sessions, and collecting results via task descriptions.

Core Concept

Main Task Agent
     │
     ├─► create_task (subtask A) ─► start_workspace_session ─► Agent A executes
     │                                                              │
     ├─► create_task (subtask B) ─► start_workspace_session ─► Agent B executes
     │                                                              │
     └─► poll list_tasks + get_task ◄─── update_task (results) ────┘

Communication channel: the task description field serves as the data-exchange medium between the main task and its subtasks.

Main Task Workflow

1. Create Subtask

IMPORTANT: Always append the SUBTASK_REPORT_INSTRUCTIONS to your task description.

```python
create_task(
    project_id="<current_project_id>",
    title="Subtask: <specific work>",
    description="<your task instructions here>\n\n" + SUBTASK_REPORT_INSTRUCTIONS,
)
```

SUBTASK_REPORT_INSTRUCTIONS (copy this exactly into every subtask description):

---

## MANDATORY: Report Results Before Completion

You are running as a **subtask**. The main task depends on your results.

**Before finishing, you MUST execute these steps:**

1. Get your task identity:

```python
context = get_context()
```


2. Update your task description with results:

```python
update_task(task_id=context.task_id, description="<your result report>")
```


### Result Report Format

## Status
[SUCCESS / FAILED / PARTIAL]

## Summary
<1-2 sentence summary>

## Completed Work
- <item 1>
- <item 2>

## Outputs
- <file paths, PR links, artifacts>

## Notes
<issues or info for main task>

---

**This is NON-NEGOTIABLE.** Main task polls your description for results. No report = invisible work.

Record the returned task_id for each subtask.

2. Launch Workspace Session

```python
start_workspace_session(
    task_id="<subtask_task_id>",
    executor="CLAUDE_CODE",  # or: AMP, GEMINI, CODEX, OPENCODE, CURSOR_AGENT, QWEN_CODE, COPILOT, DROID
    repos=[{"repo_id": "<target_repo_id>", "base_branch": "main"}],
)
```

3. Poll for Results (CRITICAL: Check Both Status AND Description)

Use list_tasks to check execution status, then get_task for details:

```python
# Step 1: check task status via list_tasks
tasks = list_tasks(project_id="<project_id>")
subtask = next(t for t in tasks if t.id == subtask_id)

# Step 2: determine the subtask state
if subtask.has_in_progress_attempt:
    pass  # still running: wait and poll again
elif subtask.last_attempt_failed:
    # FAILED: the workspace session crashed or the setup script failed.
    # Do NOT wait; mark the subtask as failed immediately.
    pass
else:
    # Not running and not failed: check the description for results
    result = get_task(task_id=subtask_id)
    if "## Status" in result.description:
        pass  # subtask reported results
    else:
        pass  # subtask completed but didn't report (edge case)
```

Subtask State Matrix

| has_in_progress_attempt | last_attempt_failed | Description has results | State |
|---|---|---|---|
| true | false | No | Running - wait |
| false | true | No | Failed - workspace crashed, don't wait |
| false | false | Yes | Completed - collect results |
| false | false | No | Completed but no report - check manually |
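
Once a subtask lands in the "Completed - collect results" state, the main task reads the report back out of the description field. Below is a minimal sketch of a parser for the Result Report Format, assuming the report uses `## ` section headings exactly as specified; the `parse_report` helper and its dict shape are illustrative, not part of the vibe-kanban MCP API:

```python
# Hedged sketch: split a subtask's result report into sections keyed
# by their "## " headings. Assumes the report format shown above.
def parse_report(description: str) -> dict:
    sections = {}
    current = None
    for line in description.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}

report = parse_report("## Status\nSUCCESS\n\n## Summary\nAdded retry logic.")
```

Sections the agent omitted simply won't appear as keys, so check with `.get()` before trusting a field.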

4. Aggregate Results

After all subtasks complete (or fail), summarize outcomes.
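
As a hedged sketch of this aggregation step, assume reports were collected into a `results` dict (task_id to report text) and failures into a `failed` dict (task_id to reason), as in the polling pattern above; the `summarize` helper and its output shape are illustrative:

```python
# Hedged sketch: turn collected reports and failures into one summary.
def summarize(results: dict, failed: dict) -> str:
    lines = [f"Subtasks: {len(results) + len(failed)} total, "
             f"{len(results)} completed, {len(failed)} failed"]
    for task_id, report in results.items():
        # First line of the report as a one-line digest
        first_line = report.strip().splitlines()[0] if report.strip() else "(empty report)"
        lines.append(f"- {task_id}: {first_line}")
    for task_id, reason in failed.items():
        lines.append(f"- {task_id}: FAILED ({reason})")
    return "\n".join(lines)

summary = summarize({"t1": "## Status\nSUCCESS"}, {"t2": "Workspace session failed"})
```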

Complete Example

```python
SUBTASK_REPORT_INSTRUCTIONS = """
---

## MANDATORY: Report Results Before Completion

You are running as a **subtask**. The main task depends on your results.

**Before finishing, you MUST execute these steps:**

1. Get your task identity:

context = get_context()

2. Update your task description with results:

update_task(task_id=context.task_id, description="<your result report>")

### Result Report Format

## Status
[SUCCESS / FAILED / PARTIAL]

## Summary
<1-2 sentence summary>

## Completed Work
- <item 1>
- <item 2>

## Outputs
- <file paths, PR links, artifacts>

## Notes
<issues or info for main task>

---

**This is NON-NEGOTIABLE.** Main task polls your description for results. No report = invisible work.
"""
```

```python
import time

# Create and launch subtasks
subtask_ids = []
for work_item in work_items:
    task = create_task(
        project_id=project_id,
        title=f"Subtask: {work_item.name}",
        description=f"{work_item.instructions}\n\n{SUBTASK_REPORT_INSTRUCTIONS}",
    )
    subtask_ids.append(task.task_id)

    start_workspace_session(
        task_id=task.task_id,
        executor="CLAUDE_CODE",
        repos=[{"repo_id": work_item.repo_id, "base_branch": "main"}],
    )

# Poll with failure detection
pending = set(subtask_ids)
results = {}
failed = {}

while pending:
    tasks = list_tasks(project_id=project_id)
    task_map = {t.id: t for t in tasks}

    for task_id in list(pending):
        task_status = task_map.get(task_id)
        if task_status is None:
            continue  # task not visible yet

        if task_status.has_in_progress_attempt:
            continue  # still running

        if task_status.last_attempt_failed:
            failed[task_id] = "Workspace session failed"
            pending.remove(task_id)
            continue

        result = get_task(task_id=task_id)
        if "## Status" in result.description:
            results[task_id] = result.description
            pending.remove(task_id)

    if pending:
        time.sleep(30)  # avoid a busy polling loop

print(f"Completed: {len(results)}, Failed: {len(failed)}")
```

MCP Tools Reference

| Tool | Role | Purpose |
|---|---|---|
| get_context | Subtask | Get own task_id, project_id, workspace_id |
| create_task | Main | Create subtask with instructions |
| start_workspace_session | Main | Launch subtask workspace |
| list_tasks | Main | Check execution status (has_in_progress_attempt, last_attempt_failed) |
| get_task | Main | Get subtask description for results |
| update_task | Subtask | Write results to own description |
| list_repos | Main | Get available repo IDs |

Error Handling

Startup failure (last_attempt_failed = true):

  • Workspace session failed to start (setup script error, agent crash)
  • Do NOT wait - immediately mark subtask as failed
  • Check vibe-kanban UI for error logs

Subtask timeout:

  • has_in_progress_attempt = false but no results after long time
  • Agent may have exited without reporting
  • Check workspace logs in UI
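
One way to bound this case is to wrap the whole polling loop in a wall-clock deadline, so an agent that exits without reporting cannot stall the main task indefinitely. A hedged sketch, where `poll_with_deadline` and both timeout values are illustrative assumptions rather than vibe-kanban features:

```python
import time

def poll_with_deadline(poll, timeout_s=1800.0, interval_s=30.0):
    """Call poll() until it returns True (done) or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if poll():
            return True  # all subtasks resolved
        time.sleep(interval_s)
    return False  # timed out: check workspace logs in the vibe-kanban UI

# Tiny demonstration with a stand-in poll callable
done = poll_with_deadline(lambda: True, timeout_s=5, interval_s=0.1)
```

When it returns False, treat still-pending subtasks like the missing-report case: inspect their workspace output manually.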

Missing report:

  • Subtask completed but description unchanged
  • Agent didn't follow SUBTASK_REPORT_INSTRUCTIONS
  • Manually check workspace output

