team-builder

Discover, compose, and activate specialist teams from three rosters — OpenClaw Core (CEO/Artist), Agency Division (55+ specialists), and Research Lab (autonomous experiment loops via Karpathy's autoresearch). The Planner proposes optimal teams; the Reviewer validates deliverables.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill

Install skill "team-builder" with this command: npx skills add joeszeles/oc-team-builder

Team Builder

Compose the right team for any job by drawing from three rosters of specialists. The Research Lab uses Karpathy's autoresearch methodology for autonomous experiment loops.

Quick Start — Scripts

1. Browse available agents

bash {baseDir}/scripts/roster.sh                  # all 3 rosters
bash {baseDir}/scripts/roster.sh -r agency        # agency only
bash {baseDir}/scripts/roster.sh -d engineering   # one division
bash {baseDir}/scripts/roster.sh -s "frontend"    # search
bash {baseDir}/scripts/roster.sh -v               # verbose descriptions
bash {baseDir}/scripts/roster.sh -j               # JSON output

2. Generate a team proposal

bash {baseDir}/scripts/plan.sh "Build a portfolio dashboard with pie charts"
bash {baseDir}/scripts/plan.sh --mode sprint "Optimize image generation prompts using autoresearch"
bash {baseDir}/scripts/plan.sh -o proposal.md "Analyze astronomy photos for star classification"

The planner auto-detects task domains (engineering, creative, research, marketing, operations, spatial) and proposes the right-sized team (micro/sprint/full).
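Domain detection like this could reduce to keyword matching on the task description. A minimal sketch of that idea in pure shell; the keywords and branching below are illustrative assumptions, not the actual plan.sh implementation:

```shell
# Illustrative keyword-based domain detection (an assumption about how a
# planner *might* classify tasks; check plan.sh for the real logic).
detect_domain() {
  case "$1" in
    *dashboard*|*frontend*|*backend*) echo "engineering" ;;
    *experiment*|*autoresearch*)      echo "research" ;;
    *logo*|*illustration*|*visual*)   echo "creative" ;;
    *)                                echo "operations" ;;
  esac
}

detect_domain "Build a portfolio dashboard with pie charts"   # prints "engineering"
```

Patterns are tried in order, so a task mentioning both "dashboard" and "experiment" would classify as engineering under this sketch.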

3. Activate a specialist

bash {baseDir}/scripts/activate.sh --division engineering --agent frontend-developer
bash {baseDir}/scripts/activate.sh --division testing --agent evidence-collector
bash {baseDir}/scripts/activate.sh --division testing --list
bash {baseDir}/scripts/activate.sh --file reference/agency-agents-main/design/design-ui-designer.md
bash {baseDir}/scripts/activate.sh --division engineering --agent ai-engineer --personality-only

Outputs the agent's full personality definition for use in delegation prompts.
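Because the definition is printed to stdout, it can be captured and embedded in a delegation prompt. A minimal sketch, with a stub standing in for the real activate.sh call (the stub's wording is invented for illustration):

```shell
# Stub standing in for:
#   bash {baseDir}/scripts/activate.sh --division engineering --agent frontend-developer
activate() { printf 'You are a meticulous frontend developer.\n'; }

# Capture the personality and prepend it to the task.
persona="$(activate)"
prompt="${persona}

Task: build the portfolio dashboard with pie charts."
printf '%s\n' "$prompt"
```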

4. Run QA review

bash {baseDir}/scripts/review.sh --task "Portfolio dashboard"
bash {baseDir}/scripts/review.sh --task "Image pipeline" --criteria criteria.txt --pass evidence
bash {baseDir}/scripts/review.sh --task "LLM training optimization" --pass reality
bash {baseDir}/scripts/review.sh --task "Full product" --pass both -o review.md

Generates review checklists (Pass 1: Evidence Collector; Pass 2: Reality Checker) and logs results to ~/.openclaw/team-reviews/.
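A criteria file for the --criteria flag might look like the following. One criterion per line is an assumption about the expected format, and the criteria themselves are examples; check review.sh before relying on this shape:

```shell
# Write an example criteria file (format assumed: one criterion per line).
cat > criteria.txt <<'EOF'
Pie chart renders from live portfolio data
Dashboard loads in under 2 seconds
All chart segments have accessible labels
EOF
```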

5. Run a Research Lab experiment

bash {baseDir}/scripts/experiment.sh --setup /path/to/project     # initialize experiment
bash {baseDir}/scripts/experiment.sh --run /path/to/project       # run one experiment cycle
bash {baseDir}/scripts/experiment.sh --status /path/to/project    # show ledger

See references/TEAM-RESEARCH.md for the full autoresearch methodology and working examples.

The Three Rosters

1. Core Team (references/TEAM-CORE.md)

The permanent OpenClaw agents. Always available, always running.

| Agent | Role | Workspace |
|-------|------|-----------|
| CEO | Leader, orchestrator, final authority. | openclaw/workspace/ |
| Artist | Image generation, visual analysis. | openclaw/workspace-artist/ |

2. Agency Division (references/TEAM-AGENCY.md)

55+ specialist agents across 9 divisions. Activated on demand from reference/agency-agents-main/.

| Division | Agents | Key Specialists |
|----------|--------|-----------------|
| Engineering | 7 | Frontend Developer, Backend Architect, AI Engineer, DevOps |
| Design | 7 | UI Designer, UX Architect, Image Prompt Engineer |
| Marketing | 8 | Growth Hacker, Content Creator, Social Media |
| Product | 3 | Sprint Prioritizer, Trend Researcher, Feedback Synthesizer |
| Project Management | 5 | Senior PM, Studio Producer, Experiment Tracker |
| Testing | 7 | Evidence Collector, Reality Checker, API Tester |
| Support | 6 | Analytics Reporter, Finance Tracker, Legal Compliance |
| Spatial Computing | 6 | XR Architect, visionOS Engineer |
| Specialized | 7 | Agents Orchestrator, Data Analytics, LSP Engineer |

3. Research Lab (references/TEAM-RESEARCH.md)

Autonomous experiment loops adapted from Karpathy's autoresearch. Set up a measurable experiment, run it within a fixed time budget, keep improvements, discard failures, and loop indefinitely.

Source code reference: reference/autoresearch-master/ (program.md, train.py, prepare.py)
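The keep-improvements/discard-failures loop can be sketched in pure shell. The scores below are hard-coded for illustration; in a real run each would come from executing and measuring one experiment cycle:

```shell
# Simulated experiment loop: keep a result only if it beats the best so far.
best=0
for score in 3 1 5 4 7; do           # stand-ins for measured metrics
  if [ "$score" -gt "$best" ]; then
    best=$score                      # improvement: keep it
    echo "keep score=$score"
  else
    echo "discard score=$score"      # failure: revert / discard
  fi
done
echo "best=$best"
```

With the sample scores above, the loop keeps 3, 5, and 7 and discards 1 and 4, ending with best=7.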

Cross-Team Workflow Examples

Image Analysis + Research Loop

Artist (image acquisition) + Research Lab (analysis loop) + AI Engineer (classification)

Visual Content Pipeline

Artist (generation) + Image Prompt Engineer (prompts) + Visual Storyteller (narrative)

Dashboard / UI Feature Build

Senior PM (scope) + Frontend Developer (build) + Evidence Collector (QA)

Autonomous LLM Training (autoresearch)

Research Lab (experiment loop on train.py) + AI Engineer (architecture suggestions)
→ 12 experiments/hour, ~100 overnight, fully autonomous

Full Product Launch

CEO (orchestrate) + Engineering (build) + Design (UX) + Marketing (launch) + Testing (validate)

Handoff Protocol

When passing work between specialists:

## Handoff
| Field | Value |
|-------|-------|
| From | [Agent Name] |
| To | [Agent Name] |
| Task | [What needs to be done] |
| Priority | [Critical / High / Medium / Low] |

## Context
- Current state: [What's been done]
- Relevant files: [File paths]

## Deliverable
- What is needed: [Specific output]
- Acceptance criteria:
  - [ ] [Criterion 1]
  - [ ] [Criterion 2]

## Quality
- Evidence required: [What proof looks like]
- Reviewer: [Who validates]

For complete handoff templates: reference/agency-agents-main/strategy/coordination/handoff-templates.md
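When scripting handoffs between activated agents, the template above can be filled programmatically. A minimal sketch; the field values are examples only:

```shell
# Fill the handoff template's Field/Value table from shell variables.
from="Senior PM"; to="Frontend Developer"
task="Build the portfolio dashboard"; priority="High"

cat > handoff.md <<EOF
## Handoff
| Field | Value |
|-------|-------|
| From | ${from} |
| To | ${to} |
| Task | ${task} |
| Priority | ${priority} |
EOF
```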

NEXUS Pipeline Modes

| Mode | Scale | Agents | Timeline |
|------|-------|--------|----------|
| Micro | Single task/fix | 1-3 | Hours-days |
| Sprint | Feature or MVP | 5-10 | 1-2 weeks |
| Full | Complete product | 10+ | Weeks-months |
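Mode selection could reduce to simple thresholds on the proposed agent count. A hedged sketch consistent with the scale descriptions above; the actual plan.sh logic may differ:

```shell
# Map an estimated agent count to a pipeline mode (thresholds assumed).
pick_mode() {
  if [ "$1" -le 3 ]; then echo "micro"
  elif [ "$1" -le 10 ]; then echo "sprint"
  else echo "full"
  fi
}

pick_mode 2    # micro: single task/fix
pick_mode 7    # sprint: feature or MVP
pick_mode 12   # full: complete product
```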

Reference Files

| File | Contents |
|------|----------|
| SKILL.md | This file — overview, scripts, quick start |
| scripts/roster.sh | Browse and search all agent rosters |
| scripts/plan.sh | Generate team proposals from task descriptions |
| scripts/activate.sh | Load agent personality definitions |
| scripts/review.sh | Generate QA review checklists |
| scripts/experiment.sh | Run autoresearch experiment loops |
| references/TEAM-CORE.md | CEO/Artist — roles and interactions |
| references/TEAM-AGENCY.md | All 55+ Agency specialists indexed by division |
| references/TEAM-RESEARCH.md | Autonomous experiment methodology (autoresearch) |
| references/PLANNER.md | Job analysis → team proposal workflow (detailed) |
| references/REVIEWER.md | QA validation workflow with quality gates |
| references/PROOF-OF-WORK.md | Example proposals showing cross-roster teams |

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Research

Create a Deep Research Feishu Doc

Feishu deep-research report generator. Automatically performs deep research and imports the results as a Feishu document. Given a research topic and a cloud-space folder token, it generates a structured research report and uploads it to Feishu cloud space. Suited to investment research, industry analysis, competitor research, technical due diligence, and other scenarios requiring multi-step information gathering and documentation.

Scout

Agent trust intelligence for Moltbook and x402 Bazaar. Use when you need to check if an agent or service is trustworthy before paying, compare agents side-by-side, scan feeds for quality agents, or make trust-gated USDC payments. Answers the question "should I pay this agent?" with research-backed scoring across 6 dimensions.

Botcoin

A puzzle game for AI agents. Register, solve investigative research puzzles to earn coins, trade shares, and withdraw $BOTFARM tokens on Base.

DuckDuckGo Web Search

Search the web using DuckDuckGo. Free, no API key required. Use when the user asks to search the web, look something up, find information online, research a topic, or when you need to find current information that isn't in your training data. Also use when web_search tool is unavailable or has no API key configured.