
Skill Coach: Creating Expert-Level Agent Skills

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "skill-coach" with this command: npx skills add curiositech/some_claude_skills/curiositech-some-claude-skills-skill-coach


Encode real domain expertise, not just surface-level instructions. Focus on shibboleths - the deep knowledge that separates novices from experts.

When to Use This Skill

Use for:

  • Creating new Agent Skills from scratch

  • Reviewing/auditing existing skills

  • Improving skill activation rates

  • Adding domain expertise to skills

  • Debugging why skills don't activate

NOT for:

  • General Claude Code features (slash commands, MCPs)

  • Non-skill coding advice

  • Debugging runtime errors (use domain skills)

Quick Wins

Immediate improvements for existing skills:

  • Add NOT clause to description → Prevents false activation

  • Add 1-2 anti-patterns → Prevents common mistakes

  • Check line count (run validator) → Should be fewer than 500 lines

  • Remove dead files → Delete unreferenced scripts/references

  • Test activation → Questions that should/shouldn't trigger it

What Makes a Great Skill

Great skills are progressive disclosure machines that:

  • Activate precisely - Specific keywords + NOT clause

  • Encode shibboleths - Expert knowledge that separates novice from expert

  • Surface anti-patterns - "If you see X, that's wrong because Y, use Z"

  • Capture temporal knowledge - "Pre-2024: X. 2024+: Y"

  • Know their limits - "Use for A, B, C. NOT for D, E, F"

  • Provide decision trees - Not templates, but "If X then A, if Y then B"

  • Stay under 500 lines - Core in SKILL.md, deep dives in /references

Core Principles

Progressive Disclosure

  • Phase 1 (~100 tokens): Metadata - "Should I activate?"

  • Phase 2 (<5k tokens): SKILL.md - "How do I do this?"

  • Phase 3 (as needed): References - "Show me the details"

Critical: Keep SKILL.md under 500 lines. Split details into /references.
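The 500-line budget is easy to enforce mechanically. A minimal sketch (the helper name and layout assumptions are illustrative, not part of any official validator):

```python
from pathlib import Path

# Hypothetical helper: flag SKILL.md files that blow the 500-line budget.
def oversized_skills(skills_root: str, limit: int = 500) -> list[tuple[str, int]]:
    """Return (path, line_count) for every <skill>/SKILL.md over the limit."""
    results = []
    for skill_md in Path(skills_root).glob("*/SKILL.md"):
        count = len(skill_md.read_text(encoding="utf-8").splitlines())
        if count > limit:
            results.append((str(skill_md), count))
    return results
```

Anything flagged is a candidate for splitting into /references.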

Description Formula

[What] [Use for] [Keywords] NOT for [Exclusions]

❌ Bad: "Helps with images"
⚠️ Better: "Image processing with CLIP"
✅ Good: "CLIP semantic search. Use for image-text matching. Activate on 'CLIP', 'embeddings'. NOT for counting, spatial reasoning."
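The formula can be linted mechanically. A rough sketch, with heuristics that are assumptions rather than the marketplace's actual rules:

```python
def lint_description(desc: str) -> list[str]:
    """Flag descriptions that miss parts of the [What][Use for][Keywords] NOT-for formula."""
    issues = []
    if len(desc) < 20:
        issues.append("too short to encode what/when/keywords")
    if "NOT for" not in desc:
        issues.append("missing NOT clause (risks false activation)")
    if "Use for" not in desc and "Activate on" not in desc:
        issues.append("no explicit usage/trigger cue")
    return issues
```

The "Good" example above passes; "Helps with images" trips all three checks.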

SKILL.md Template


```markdown
---
name: your-skill-name
description: [What] [When] [Triggers]. NOT for [Exclusions].
allowed-tools: Read,Write  # Minimal only
---

# Skill Name

[One sentence purpose]

## When to Use

✅ Use for: [A, B, C]
❌ NOT for: [D, E, F]

## Core Instructions

[Step-by-step, decision trees, not templates]

## Common Anti-Patterns

### [Pattern]

- Symptom: [Recognition]
- Problem: [Why wrong]
- Solution: [Better approach]
```

Frontmatter Rules (CRITICAL)

Only these frontmatter keys are allowed by Claude's skill marketplace:

| Key | Required | Purpose |
| --- | --- | --- |
| name | ✅ | Lowercase-hyphenated identifier |
| description | ✅ | Activation keywords + NOT clause |
| allowed-tools | ⚠️ | Comma-separated tool names |
| license | ❌ | e.g., "MIT" |
| metadata | ❌ | Custom key-value pairs |

Invalid keys that will FAIL upload:

```yaml
# ❌ WRONG - These will break skill upload
integrates_with:
  - orchestrator
triggers:
  - "activate on this"
tools: Read,Write
outputs: formatted text
coordinates_with: other-skill
python_dependencies:
  - numpy
```

Move custom info to the body:

```markdown
## Integrations
Works with: orchestrator, team-builder

## Activation Triggers
Responds to: "create skill", "review skill", "skill quality"
```

Validation command:

```bash
# Find invalid frontmatter keys
for skill in .claude/skills/*/SKILL.md; do
  sed -n '/^---$/,/^---$/p' "$skill" \
    | grep -E '^[a-zA-Z_-]+:' | cut -d: -f1 \
    | grep -vE '^(name|description|license|allowed-tools|metadata)$' \
    && echo "  ^ in $(basename "$(dirname "$skill")")"
done
```

Skill Structure

Mandatory:

```
your-skill/
└── SKILL.md    # Core instructions (max 500 lines)
```

Strongly Recommended (self-contained skills):

```
├── scripts/       # Working code - NOT templates
├── mcp-server/    # Custom MCP if external APIs needed
├── agents/        # Subagent definitions if orchestration needed
├── references/    # Deep dives on domain knowledge
└── CHANGELOG.md   # Version history
```

Self-Contained Skills (RECOMMENDED)

Skills with working tools are immediately useful. See references/self-contained-tools.md for full patterns.

Quick decision: External APIs? → MCP. Multi-step workflow? → Subagents. Repeatable operations? → Scripts.

Decision Trees

When to create a NEW skill?

  • ✅ Domain expertise not in existing skills

  • ✅ Pattern repeats across 3+ projects

  • ✅ Anti-patterns you want to prevent

  • ❌ One-time task → Just do it directly

  • ❌ Existing skill could be extended → Improve that one

Skill vs Subagent vs MCP?

  • Skill: Domain expertise, decision trees (no runtime state)

  • Subagent: Multi-step workflows needing tool orchestration

  • MCP: External APIs, auth, stateful connections
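The choice above reduces to a tiny decision function. A sketch under exactly the rules stated (the function name and argument names are illustrative):

```python
def choose_mechanism(needs_external_api: bool,
                     needs_multi_step_orchestration: bool) -> str:
    """Map requirements to Skill / Subagent / MCP per the decision tree above."""
    if needs_external_api:
        return "MCP"        # external APIs, auth, stateful connections
    if needs_multi_step_orchestration:
        return "Subagent"   # multi-step workflows needing tool orchestration
    return "Skill"          # domain expertise and decision trees, no runtime state
```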

Skill Creation Process (6 Steps)

Follow these steps in order when creating a new skill:

Step 1: Understand with Concrete Examples

Skip only if usage patterns are already clear. Ask:

  • "What functionality should this skill support?"

  • "Can you give examples of how it would be used?"

  • "What would a user say that should trigger this skill?"

Step 2: Plan Reusable Contents

For each example, analyze:

  • How to execute from scratch

  • What scripts, references, assets would help with repeated execution

Example analyses:

  • pdf-editor for "rotate this PDF" → Needs scripts/rotate_pdf.py

  • frontend-webapp-builder → Needs assets/hello-world/ template

  • big-query skill → Needs references/schema.md for table schemas

Step 3: Initialize the Skill

Create the skill directory structure:

```
your-skill/
├── SKILL.md       # Core instructions (max 500 lines)
├── scripts/       # Working code - NOT templates
├── references/    # Deep dives on domain knowledge
└── assets/        # Files used in output (templates, icons)
```

Step 4: Write SKILL.md

  • Write in imperative/infinitive form ("To accomplish X, do Y")

  • Answer: Purpose? When to use? How to use bundled resources?

  • Reference all scripts/references so Claude knows they exist

Step 5: Validate and Package

```bash
# Validate skill structure and content
python scripts/validate_skill.py <path>

# Check for self-contained tool completeness
python scripts/check_self_contained.py <path>
```

Step 6: Iterate

After real-world use:

  • Notice struggles or inefficiencies

  • Identify how SKILL.md or bundled resources should be updated

  • Implement changes and test again

Common Workflows

Create Skill from Expertise:

  • Define scope: What expertise? What keywords? What NOT to handle?

  • Write description with keywords and NOT clause

  • Add anti-patterns you've observed

  • Test activation thoroughly

Debug Activation Issues (flowchart):

```
Skill not activating when expected?
├── Check description has specific keywords
│   ├── NO → Add "Activate on: keyword1, keyword2"
│   └── YES → Check if query contains those keywords
│       ├── NO → Add missing keyword variations
│       └── YES → Check for conflicting NOT clause
│           ├── YES → Narrow exclusion scope
│           └── NO → Check file structure
│               ├── SKILL.md missing → Create it
│               └── Wrong location → Move to .claude/skills/
```

```
Skill activating when it shouldn't?
├── Missing NOT clause?
│   ├── YES → Add "NOT for: exclusion1, exclusion2"
│   └── NO → NOT clause too narrow
│       └── Expand exclusions based on false positive queries
```

Run python scripts/test_activation.py <path> to validate
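Real activation is decided by the model, not by literal string matching, but a toy overlap check is still useful for smoke-testing a description against sample queries before running the full test script:

```python
import re

def keyword_overlap(description: str, query: str) -> set[str]:
    """Toy check: which description words (4+ letters) also appear in a test query?
    Model-driven activation is richer than this; empty overlap is still a red flag."""
    desc_words = set(re.findall(r"[a-z]{4,}", description.lower()))
    query_words = set(re.findall(r"[a-z]{4,}", query.lower()))
    return desc_words & query_words
```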

Recursive Self-Improvement (use this skill to improve skills):

  • Run python scripts/validate_skill.py <path> → Get validation report

  • Run python scripts/check_self_contained.py <path> → Check tool completeness

  • Address ERRORS first, then WARNINGS, then SUGGESTIONS

  • Re-run validation until clean

  • Update CHANGELOG.md with improvements made
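The loop above can be scripted. A sketch in which `run_validator` and `apply_fixes` are stand-ins for invoking scripts/validate_skill.py and making edits:

```python
def iterate_until_clean(run_validator, apply_fixes, max_rounds: int = 5) -> bool:
    """Re-run validation until it reports no issues (or rounds are exhausted).
    run_validator() -> list of issue strings; apply_fixes(issues) edits the skill."""
    for _ in range(max_rounds):
        issues = run_validator()
        if not issues:
            return True
        # Address ERRORS before WARNINGS before SUGGESTIONS.
        issues.sort(key=lambda i: ("ERROR" not in i, "WARNING" not in i))
        apply_fixes(issues)
    return not run_validator()
```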

Tool Permissions

Guidelines:

  • Read-only skill: Read,Grep,Glob

  • File modifier: Read,Write,Edit

  • Build integration: Read,Write,Bash(npm:*,git:*)

  • ⚠️ Never: Unrestricted Bash for untrusted skills

Success Metrics

| Metric | Target |
| --- | --- |
| Correct activation | 90% |
| False positive rate | <5% |
| Token usage | <5k typical |
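The first two metrics fall out of an activation test run. A sketch, assuming you have recorded per-query outcomes as booleans:

```python
def activation_metrics(should_fire: list[bool], should_not_fire: list[bool]) -> dict:
    """Compute activation metrics from recorded test outcomes.
    should_fire: did the skill activate on queries where it SHOULD?
    should_not_fire: did it activate on queries where it should NOT?"""
    return {
        "correct_activation": sum(should_fire) / len(should_fire),
        "false_positive_rate": sum(should_not_fire) / len(should_not_fire),
    }
```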

Reference Files

| File | Contents |
| --- | --- |
| references/antipatterns.md | Domain shibboleths and anti-pattern catalog with case studies |
| references/shibboleths.md | Expert vs novice knowledge patterns |
| references/validation-checklist.md | Complete review and testing guide |
| references/self-contained-tools.md | Scripts, MCP servers, and subagent implementation patterns |
| references/scoring-rubric.md | Quantitative skill evaluation (0-10 scoring) |
| references/skill-composition.md | Cross-skill dependencies and composition patterns |
| references/skill-lifecycle.md | Maintenance, versioning, and deprecation guidance |
| references/mcp_vs_scripts.md | Architectural decision guide: Skills vs Agents vs MCPs vs Scripts |

This skill guides: Skill creation | Skill auditing | Anti-pattern detection | Progressive disclosure | Domain expertise encoding

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
