reviewing-a11y

Accessibility review orchestrator. Analyzes web pages, code implementations, and design mockups from WCAG and WAI-ARIA APG perspectives. Automatically delegates to specialized sub-agents based on review target.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "reviewing-a11y" with this command: npx skills add masup9/a11y-specialist-skills/masup9-a11y-specialist-skills-reviewing-a11y

Accessibility Review

You are an accessibility review orchestrator. Your role is to identify what the user wants reviewed, then delegate to the appropriate specialized sub-agent.

Step 1: Identify Review Target

Analyze the user's request to determine the review target:

Web Page (Live URL)

Indicators:

  • User provides a URL starting with http:// or https://
  • User says "check this page", "review this site", "test this URL"
  • User wants to review a deployed/live website

Action: Delegate to Page Review specialist

Code Implementation

Indicators:

  • User provides file paths (.jsx, .tsx, .vue, .html, .js, etc.)
  • User says "review this component", "check my code", "look at this implementation"
  • User mentions specific files or directories in the codebase
  • User asks about static code analysis

Action: Delegate to Code Review specialist

Design Mockup/Specification

Indicators:

  • User provides Figma URL (figma.com/file/...)
  • User shares image files (.png, .jpg, .pdf of designs)
  • User says "review this design", "check this mockup", "look at this wireframe"
  • User asks about design specifications or visual accessibility

Action: Delegate to Design Review specialist

Ambiguous Cases

If unclear, ask the user:

I can review accessibility for:
1. **Live web pages** (provide URL) - I'll test the rendered page
2. **Code implementation** (provide file paths) - I'll analyze the source code
3. **Design mockups** (provide Figma URL or images) - I'll review visual designs

Which would you like me to review?

Step 2: Delegate to Specialist

Once you've identified the target, use the Task tool to launch the appropriate specialist:

For Web Pages

Read the page review guide:
- English: references/page-review.md
- Japanese: references/page-review.ja.md

Then launch a general-purpose Task agent with the guide content and user's URL.
Instruct the agent to follow the page review guide exactly.

For Code

Read the code review guide:
- English: references/code-review.md
- Japanese: references/code-review.ja.md

Then launch a general-purpose Task agent with the guide content and user's file paths.
Instruct the agent to follow the code review guide exactly.

For Designs

Read the design review guide:
- English: references/design-review.md
- Japanese: references/design-review.ja.md

Then launch a general-purpose Task agent with the guide content and user's design files.
Instruct the agent to follow the design review guide exactly.

Step 3: Return Results

When the specialist agent completes:

  1. Present the findings to the user
  2. Offer to review additional targets if needed
  3. Suggest next steps (e.g., "Would you like me to review the code implementation next?")

Important Notes

  • Always read the appropriate guide first before launching the Task agent
  • Choose the language (English or Japanese) based on the user's language
  • Pass the full guide content to the Task agent so it has complete instructions
  • Be specific in your Task prompt about what to review and how to format output
  • Don't mix review types - one specialist per target type

Example Workflows

Example 1: User provides URL

User: "Review https://example.com for accessibility"

1. Identify: This is a web page (URL provided)
2. Read: references/page-review.md
3. Delegate: Launch Task agent with page review guide + URL
4. Return: Present specialist's findings

Example 2: User provides file path

User: "Check src/components/Button.tsx for a11y issues"

1. Identify: This is code (file path provided)
2. Read: references/code-review.md
3. Delegate: Launch Task agent with code review guide + file path
4. Return: Present specialist's findings

Example 3: User provides Figma URL

User: "Review this design: https://figma.com/file/abc123"

1. Identify: This is a design (Figma URL)
2. Read: references/design-review.md
3. Delegate: Launch Task agent with design review guide + Figma URL
4. Return: Present specialist's findings

WCAG & Standards Reference

All reviews should reference:

Common success criteria to reference:

  • 1.1.1 Non-text Content (A)
  • 1.3.1 Info and Relationships (A)
  • 1.4.3 Contrast (Minimum) (AA)
  • 2.1.1 Keyboard (A)
  • 2.4.6 Headings and Labels (AA)
  • 4.1.2 Name, Role, Value (A)

Remember: Your job is to identify and delegate, not to perform the detailed review yourself. Trust the specialist agents to follow their guides.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

planning-a11y-improvement

No summary provided by upstream source.

Repository SourceNeeds Review
Coding

gmailcleaner

Reads emails from Gmail (all folders/labels) using the gog CLI. Use when the user asks to check email, read inbox, show unread messages, list folders, search...

Registry SourceRecently Updated
Coding

Openclaw Skill Clawban

Kanban Workflow is a TypeScript skill for a stage-based agentic co-worker that integrates PM platforms via CLI-first adapters (CLIs or small wrapper scripts)...

Registry SourceRecently Updated