enhance-prompts

Analyze prompts for clarity, structure, examples, and output reliability.


Install skill "enhance-prompts" with this command: npx skills add avifenesh/agentsys/avifenesh-agentsys-enhance-prompts


Parse Arguments

```javascript
const args = '$ARGUMENTS'.split(' ').filter(Boolean);
const targetPath = args.find(a => !a.startsWith('--')) || '.';
const fix = args.includes('--fix');
```

Differentiation from enhance-agent-prompts

| Skill | Focus | Use When |
| --- | --- | --- |
| enhance-prompts | Prompt quality (clarity, structure, examples) | General prompts, system prompts, templates |
| enhance-agent-prompts | Agent config (frontmatter, tools, model) | Agent files with YAML frontmatter |

Workflow

1. Run Analyzer - Execute the JavaScript analyzer to get findings:

   ```shell
   node -e "const a = require('./lib/enhance/prompt-analyzer.js'); console.log(JSON.stringify(a.analyzeAllPrompts('.'), null, 2));"
   ```

   For a specific path: a.analyzeAllPrompts('./plugins/enhance')
   For a single file: a.analyzePrompt('./path/to/file.md')

2. Parse Results - The analyzer returns JSON with a summary and findings.

3. Filter - Apply certainty filtering based on the --verbose flag.

4. Report - Format findings as markdown output.

5. Fix - If the --fix flag is set, apply auto-fixes from the findings.

The JavaScript analyzer (lib/enhance/prompt-analyzer.js) implements all detection patterns including AST-based code validation. The patterns below are reference documentation.
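Steps 2-4 of the workflow above can be sketched as follows. This is a minimal illustration, not the shipped implementation: the finding field names (certainty, message, location, fix) are assumptions about the analyzer's output shape, so check lib/enhance/prompt-analyzer.js for the real structure.

```javascript
// Hypothetical sketch of workflow steps 2-4 (parse, filter, report).
// Field names are assumed, not taken from prompt-analyzer.js.
function filterFindings(findings, verbose) {
  // Without --verbose, drop LOW-certainty findings
  const keep = verbose ? ['HIGH', 'MEDIUM', 'LOW'] : ['HIGH', 'MEDIUM'];
  return findings.filter(f => keep.includes(f.certainty));
}

function reportMarkdown(findings) {
  // Render findings as the markdown table used in the Output Format section
  const rows = findings.map(
    f => `| ${f.message} | ${f.location} | ${f.fix} | ${f.certainty} |`
  );
  return [
    '| Issue | Location | Fix | Certainty |',
    '| --- | --- | --- | --- |',
    ...rows,
  ].join('\n');
}
```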

Prompt Engineering Knowledge Reference

System Prompt Structure

Effective system prompts include: Role/Identity, Capabilities & Constraints, Instruction Priority, Output Format, Behavioral Directives, Examples, Error Handling.

Minimal Template:

```
<system>
You are [ROLE]. [PURPOSE].
Key constraints: [CONSTRAINTS]
Output format: [FORMAT]
When uncertain: [HANDLING]
</system>
```
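As an illustration, the template might be filled in for a hypothetical code-review assistant (all specifics here are invented for the example):

```
<system>
You are a code reviewer for a TypeScript monorepo. Your purpose is to flag
correctness and security issues in pull requests.
Key constraints: review only changed lines; cite file and line for each finding.
Output format: markdown table with columns Issue, Location, Severity.
When uncertain: label the finding "needs human review" instead of guessing.
</system>
```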

XML Tags (Claude-Specific)

Claude is fine-tuned for XML tags. Use: <role> , <constraints> , <output_format> , <examples> , <instructions> , <context>

```
<constraints>
- Maximum response length: 500 words
- Use only Python 3.10+ syntax
</constraints>
```

Few-Shot Examples

  • 2-5 examples is optimal (research-backed)

  • Include edge cases and ensure format consistency

  • Start zero-shot, add examples only if needed

  • Show both good AND bad examples when relevant

Chain-of-Thought (CoT)

| Use CoT | Don't Use CoT |
| --- | --- |
| Complex multi-step reasoning | Simple factual questions |
| Math and logic problems | Classification tasks |
| Code debugging | When model has built-in reasoning |

Key: Modern models (Claude 4.x, o1/o3) perform CoT internally. "Think step by step" is redundant.

Role Prompting

Helps: Creative tasks, tone/style, roleplay
Doesn't help: Accuracy tasks, factual retrieval, complex reasoning

Better: "Approach systematically, showing work" vs "You are an expert"

Instruction Hierarchy

Priority: System > Developer > User > Retrieved Content

Include explicit priority in prompts with multiple constraint sources.

Negative Prompting

Positive alternatives are more effective than negatives:

| Less Effective | More Effective |
| --- | --- |
| "Don't use markdown" | "Use prose paragraphs" |
| "Don't be vague" | "Use specific language" |

Structured Output

  • Prompt-based: ~35.9% reliability

  • Schema enforcement: 100% reliability

  • Always provide schema example and validate output
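The schema-enforcement point above can be illustrated with a validation gate on model output. A real setup would use a JSON Schema validator (e.g. ajv); this hand-rolled check, with the hypothetical helper name validateAnalysis, only shows the principle of never consuming unvalidated structured output.

```javascript
// Minimal sketch of schema enforcement: parse and validate model output
// before use, returning a structured error instead of throwing.
function validateAnalysis(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: 'invalid JSON' };
  }
  if (!['success', 'error'].includes(parsed.status)) {
    return { ok: false, error: 'status must be success|error' };
  }
  if (!Array.isArray(parsed.findings)) {
    return { ok: false, error: 'findings must be an array' };
  }
  return { ok: true, value: parsed };
}
```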

Context Window Optimization

Lost-in-the-Middle: Models weigh beginning and end more heavily.

Place critical constraints at start, examples in middle, error handling at end.

Extended Thinking

High-level instructions ("Think deeply") outperform step-by-step guidance. "Think step-by-step" is redundant with modern models.

Anti-Patterns Quick Reference

| Anti-Pattern | Problem | Fix |
| --- | --- | --- |
| Vague references | "The above code" loses context | Quote specifically |
| Negative-only | "Don't do X" without alternative | State what TO do |
| Aggressive emphasis | "CRITICAL: MUST" | Use normal language |
| Redundant CoT | Wastes tokens | Let model manage |
| Critical info buried | Lost-in-the-middle | Place at start/end |

Detection Patterns

  1. Clarity Issues (HIGH Certainty)

Vague Instructions: "usually", "sometimes", "try to", "if possible", "might", "could"

Negative-Only Constraints: "don't", "never", "avoid" without stating what TO do

Aggressive Emphasis: Excessive CAPS (CRITICAL, IMPORTANT), multiple !!

  2. Structure Issues (HIGH/MEDIUM Certainty)

Missing XML Structure: Complex prompts (>800 tokens) without XML tags

Inconsistent Sections: Mixed heading styles, skipped levels (H1→H3)

Critical Info Buried: Important instructions in middle 40%, constraints after examples

  3. Example Issues (HIGH/MEDIUM Certainty)

Missing Examples: Complex tasks without few-shot, format requests without example

Suboptimal Count: Only 1 example (optimal: 2-5), more than 7 (bloat)

Missing Contrast: No good/bad labeling, no edge cases

  4. Context Issues (MEDIUM Certainty)

Missing WHY: Rules without explanation

Missing Priority: Multiple constraint sections without conflict resolution

  5. Output Format Issues (HIGH/MEDIUM Certainty)

Missing Format: Substantial prompts without format specification

JSON Without Schema: Requests JSON but no example structure

  6. Anti-Patterns (HIGH/MEDIUM/LOW Certainty)

Redundant CoT (HIGH): "Think step by step" with modern models

Overly Prescriptive (MEDIUM): 10+ numbered steps, micro-managing reasoning

Prompt Bloat (LOW): Over 2500 tokens, redundant instructions

Vague References (HIGH): "The above code", "as mentioned"
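A detector for the "Vague Instructions" pattern (1 above) could look like the sketch below. The real pattern list lives in lib/enhance/prompt-analyzer.js; these regexes and the function name findVagueInstructions are illustrative assumptions, not its actual implementation.

```javascript
// Illustrative HIGH-certainty detector for vague qualifiers.
const VAGUE = /\b(usually|sometimes|try to|if possible|might|could)\b/gi;

function findVagueInstructions(text) {
  // Return one finding per match, tagged with its offset in the prompt
  return [...text.matchAll(VAGUE)].map(m => ({
    term: m[0],
    index: m.index,
    certainty: 'HIGH',
  }));
}
```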

Auto-Fix Implementations

  1. Aggressive Emphasis

Replace CRITICAL→critical, !!→!, remove excessive caps

  2. Negative-Only to Positive

Suggest positive alternatives for "don't" statements
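Auto-fix 1 amounts to a few string replacements. A minimal sketch follows; softenEmphasis is a hypothetical name, and the shipped auto-fix may normalize more cases than shown here.

```javascript
// Sketch of the aggressive-emphasis auto-fix: lowercase shouted keywords
// and collapse runs of exclamation marks.
function softenEmphasis(text) {
  return text
    .replace(/\bCRITICAL\b/g, 'critical')
    .replace(/\bIMPORTANT\b/g, 'important')
    .replace(/!{2,}/g, '!');
}
```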

Output Format

Prompt Analysis: {prompt-name}

File: {path}
Type: {system|agent|skill|template}
Token Count: ~{tokens}

Summary

  • HIGH: {count} issues
  • MEDIUM: {count} issues

Clarity Issues ({n})

| Issue | Location | Fix | Certainty |
| --- | --- | --- | --- |

Structure Issues ({n})

| Issue | Location | Fix | Certainty |
| --- | --- | --- | --- |

Example Issues ({n})

| Issue | Location | Fix | Certainty |
| --- | --- | --- | --- |

Pattern Statistics

| Category | Patterns | Auto-Fixable |
| --- | --- | --- |
| Clarity | 4 | 1 |
| Structure | 4 | 0 |
| Examples | 4 | 0 |
| Context | 2 | 0 |
| Output Format | 3 | 0 |
| Anti-Pattern | 4 | 0 |
| Total | 21 | 1 |

Example: Vague Instructions

<bad_example>

You should usually follow best practices when possible.

Why it's bad: Vague qualifiers reduce determinism. </bad_example>

<good_example>

Follow these practices:

  1. Validate input before processing
  2. Handle null/undefined explicitly

Why it's good: Specific, actionable instructions. </good_example>

Example: Negative-Only Constraints

<bad_example>

  • Don't use vague language
  • Never skip validation

Why it's bad: Only states what NOT to do. </bad_example>

<good_example>

  • Use specific, deterministic language
  • Always validate input; return structured errors

Why it's good: Each constraint includes positive action. </good_example>

Example: Redundant Chain-of-Thought

<bad_example>

Think through this step by step:

  1. First, analyze the input
  2. Then, identify the key elements

Why it's bad: Modern models do this internally. Wastes tokens. </bad_example>

<good_example>

Analyze the input carefully before responding.

Why it's good: High-level guidance without micro-managing. </good_example>

Example: Missing Output Format

<bad_example>

Respond with a JSON object containing the analysis results.

Why it's bad: No schema or example. </bad_example>

<good_example>

Output Format

```json
{"status": "success|error", "findings": [{"severity": "HIGH"}]}
```

Why it's good: Concrete schema shows exact structure. </good_example>

Example: Critical Info Buried

<bad_example>

Task

[task]

Background

[500 words...]

Important Constraints <- buried at end

Why it's bad: Lost-in-the-middle effect. </bad_example>

<good_example>

Task

Critical Constraints <- at start

[constraints]

Background

Why it's good: Critical info at start where attention is highest. </good_example>

Constraints

  • Only apply auto-fixes for HIGH certainty issues

  • Preserve original structure and formatting

  • Validate against embedded knowledge reference above

