brand-voice-architect

A high-precision engine for deconstructing, documenting, and synthesizing brand-specific linguistic patterns and tonal architectures. Use this skill whenever a user wants to: create or generate a brand voice guide, analyze writing samples or a corpus for tone/style, review content for linguistic consistency, build a voice system prompt, define brand pillars, identify prohibited words or preferred vocabulary, create "this not that" style guides, adapt tone across platforms (LinkedIn vs. technical docs vs. social), or reverse-engineer competitor or reference brand voices. Trigger even for loosely related requests like "make our writing more consistent", "what tone should we use?", or "analyze how we write."

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "brand-voice-architect" with this command: npx skills add midnightstudioai/brand-voice-architect

Brand Voice Architect (BVA)

A skill for engineering, documenting, and synthesizing brand-specific voice with quantifiable precision. Brand voice is treated as Linguistic DNA: a measurable baseline, not an aesthetic preference.


Core Workflow

Phase I: Decomposition — /analyze [corpus]

Run a linguistic audit on provided text samples:

  1. Lexical Audit — High-frequency verbs/adjectives, prohibited terms, vocabulary signature
  2. Structural Mapping — Average Sentence Length (ASL), syntactic complexity, variance
  3. Sentiment Baseline — Emotional temperature on a 0.0–1.0 scale

→ Use scripts/voice_analyzer.py to compute metrics programmatically when a corpus is provided.
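The three audit steps above can be sketched as a small metrics pass. This is a hedged illustration of the kind of numbers the audit produces, not the contents of scripts/voice_analyzer.py (the function name, stopword list, and the unique-word proxy for lexical density are assumptions):

```python
import re
from collections import Counter

def analyze_corpus(text: str, top_n: int = 5) -> dict:
    """Rough voice metrics: Average Sentence Length (ASL),
    lexical density, and top keywords (illustrative only)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    asl = len(words) / len(sentences) if sentences else 0.0
    # Lexical density proxy: unique words / total words.
    density = len(set(words)) / len(words) if words else 0.0
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "we"}
    top = Counter(w for w in words if w not in stopwords).most_common(top_n)
    return {"asl": round(asl, 2), "lexical_density": round(density, 2), "top_keywords": top}

metrics = analyze_corpus("We build tools. We ship tools fast. Tools matter to us.")
print(metrics["top_keywords"][0])  # → ('tools', 3)
```

Cadence variance could be computed the same way, e.g. as the standard deviation of per-sentence word counts.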

Phase II: Architectural Design — /synthesize [pillars]

Build the voice matrix:

  1. Pillar Definition — Establish 3 core attributes (e.g., Authoritative, Wit-driven, Technical)
  2. The Spectrum — Define "This, Not That" logic gates for each pillar
  3. Persona Encoding — Translate pillars into LLM system-level instructions

→ Use scripts/prompt_synthesizer.py to generate deployable system prompts.
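The three design steps above can be sketched end to end. A minimal illustration assuming a BrandConfig schema; the real scripts/prompt_synthesizer.py may use different field names and prompt wording:

```python
from dataclasses import dataclass, field

@dataclass
class BrandConfig:
    # Field names are illustrative assumptions, not the script's actual schema.
    name: str
    pillars: list          # e.g. ["Authoritative", "Wit-driven", "Technical"]
    this_not_that: dict    # pillar -> (preferred phrasing, avoided phrasing)
    prohibited: dict = field(default_factory=dict)  # prohibited word -> preferred equivalent

def synthesize_prompt(cfg: BrandConfig) -> str:
    """Encode pillars and This/Not-That gates as system-level instructions."""
    lines = [f"You write as the voice of {cfg.name}."]
    for pillar in cfg.pillars:
        this, not_that = cfg.this_not_that[pillar]
        lines.append(f"- {pillar}: write like '{this}', never like '{not_that}'.")
    for bad, good in cfg.prohibited.items():
        lines.append(f"- Replace '{bad}' with '{good}'.")
    return "\n".join(lines)

cfg = BrandConfig(
    name="Acme",
    pillars=["Authoritative"],
    this_not_that={"Authoritative": ("We recommend X", "Maybe try X?")},
    prohibited={"utilize": "use"},
)
print(synthesize_prompt(cfg))
```

Note that the prohibited-word lines are plain instructions in the prompt body, which is why enforcement stays at the prompt level (see the note in Phase III).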

Phase III: Delivery

  1. Artifact Generation — Produce voice guide docs, style reference cards, prompt templates
  2. Manual Review — /review [output] provides a qualitative checklist to assess whether output aligns with the established voice pillars (Claude-assisted, not script-automated)
  3. Platform Pivot — /pivot [context] adapts voice for specific channels while preserving DNA, using generate_platform_pivot() from prompt_synthesizer.py

Note on prohibited words: The generated system prompt instructs the LLM to replace prohibited words with preferred equivalents. This is a prompt-level instruction — enforcement depends on the model following the system prompt, not on automated script-level filtering.


The 4-Pillar Framework

Map every brand voice across four axes to define its Safe Operating Area:

  • Character — Friendly ←→ Authoritative
  • Tone — Humorous ←→ Serious
  • Language — Simple ←→ Complex
  • Purpose — Helpful ←→ Entertaining
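One way to make the Safe Operating Area checkable is to place each axis on a 0.0–1.0 slider (left pole = 0.0, right pole = 1.0) and bound it per axis. The numeric scale and the example bounds below are illustrative assumptions, not part of the methodology file:

```python
AXES = ("character", "tone", "language", "purpose")  # the four axes above

def in_safe_operating_area(position: dict, soa: dict) -> bool:
    """Check a voice position (0.0 = left pole, 1.0 = right pole)
    against per-axis (low, high) bounds."""
    return all(soa[axis][0] <= position[axis] <= soa[axis][1] for axis in AXES)

# Hypothetical brand: leans authoritative and serious, mid-complexity, helpful.
soa = {"character": (0.6, 0.9), "tone": (0.5, 0.8),
       "language": (0.3, 0.6), "purpose": (0.0, 0.4)}
pos = {"character": 0.7, "tone": 0.6, "language": 0.5, "purpose": 0.2}
print(in_safe_operating_area(pos, soa))  # → True
```

A draft whose measured position drifts outside any axis bound would fail the check and warrant a /review pass.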

See references/methodology.md for full framework details including Cadence Analysis and Semantic Salience scoring.


Mandatory Output Components

Every Brand Voice engagement must produce:

  1. Metrics Report — Lexical density %, ASL, top keywords, cadence variance
  2. Voice Matrix — 3 pillars × "This/Not That" for each
  3. System Prompt — Ready-to-deploy LLM persona encoding
  4. Platform Pivots — At minimum: formal/informal, long-form/short-form variants
  5. Prohibited/Preferred Lexicon — Concrete word lists

Quick Reference Commands

  • /analyze [corpus] — Linguistic audit on provided text (scripts/voice_analyzer.py)
  • /synthesize [pillars] — Generate LLM system prompt from pillars (scripts/prompt_synthesizer.py)
  • /review [output] — Qualitative checklist review against voice pillars (Claude-assisted, no script)
  • /pivot [context] — Adapt voice for target platform/audience (generate_platform_pivot() in prompt_synthesizer.py)

Scripts

  • scripts/voice_analyzer.py — Computes lexical density, ASL, cadence variance, sentiment temperature, and top keywords from a corpus
  • scripts/prompt_synthesizer.py — Generates deployable LLM system prompts from a BrandConfig object; includes generate_platform_pivot() for channel-specific adaptations
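As a rough sketch of what generate_platform_pivot() does conceptually — the rule table, signature, and wording here are assumptions, not the script's actual API — a pivot layers channel constraints on top of a base prompt while keeping the voice pillars intact:

```python
# Channel constraints are illustrative assumptions, not the script's real table.
PLATFORM_RULES = {
    "linkedin": "Max 3 short paragraphs; open with a hook; no hashtag spam.",
    "docs": "Long-form allowed; prefer imperative headings and numbered steps.",
    "social": "One idea per post; under 280 characters; plain language.",
}

def generate_platform_pivot(base_prompt: str, platform: str) -> str:
    """Append channel-specific constraints without altering the voice DNA."""
    rule = PLATFORM_RULES[platform]
    return (f"{base_prompt}\n"
            f"Platform pivot ({platform}): {rule}\n"
            f"Preserve all voice pillars above.")

pivot = generate_platform_pivot("You write as the voice of Acme.", "social")
```

Because the pivot only appends constraints, the pillar instructions in the base prompt remain the binding voice definition.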

References

  • references/methodology.md — Full technical methodology: 4-Pillar Framework, Cadence Analysis, Semantic Salience, Human-AI Collaborative Loop

