# Generate Initiatives
Solution Track — Step 1 of 4. Brainstorms targeted solutions for the identified root cause.
## Inputs Required
- Root cause from `.agents/mkt/root-cause.md`

## Output
- `.agents/mkt/initiatives.md`
## Quality Gate
Before delivering, verify each initiative:
- Hypothesis explicitly names the root cause (not a generic growth lever)
- Mechanic describes specific actions, not categories ("add interactive tour to first login" not "improve onboarding")
- Anti-generic test: Delete the root cause reference — does the idea still make sense for ANY company? If yes, it's generic. Rewrite.
- Mix includes ≥2 small, ≥2 medium, ≥1 large effort
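The gate checks above can be mechanized as a quick script run over the drafted initiatives. A minimal sketch, assuming each initiative is held in a small dataclass; the `Initiative` fields and the `gate` helper are illustrative, not part of this skill:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Initiative:
    name: str
    hypothesis: str
    effort: str  # "S", "M", or "L"


def gate(initiatives, root_cause_terms):
    """Return a list of gate failures (empty list = pass)."""
    failures = []
    for ini in initiatives:
        # Hypothesis must explicitly name the root cause.
        if not any(t.lower() in ini.hypothesis.lower() for t in root_cause_terms):
            failures.append(f"{ini.name}: hypothesis never names the root cause")
    # Effort mix: >=2 Small, >=2 Medium, >=1 Large.
    mix = Counter(i.effort for i in initiatives)
    if mix["S"] < 2 or mix["M"] < 2 or mix["L"] < 1:
        failures.append(f"effort mix {dict(mix)} misses >=2 S, >=2 M, >=1 L")
    return failures


ideas = [
    Initiative("Lookalike revert", "If we revert ad targeting ...", "S"),
    Initiative("Trust signals", "If we restore homepage logos ...", "S"),
]
print(gate(ideas, ["ad targeting", "homepage"]))
```

The anti-generic test stays a human judgment call; the script only catches the mechanical failures (missing anchor, bad effort mix).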
## Chain Position
Previous: `mkt-root-cause` | Next: `mkt-initiative-prioritize`
## Before Starting

### Step 0: Product Context
Check for `.agents/mkt/product-context.md`. If missing: INTERVIEW. Ask the user 8 product questions (what, who, problem, differentiator, proof points, pricing, objections, voice) and save to `.agents/mkt/product-context.md`. Or recommend running `mkt-copywriting` to bootstrap it.
### Required Artifacts

| Artifact | Source | If Missing |
|---|---|---|
| root-cause.md | mkt-root-cause | STOP. "Run the Problem track (mkt-diagnosis → mkt-hypothesis → mkt-root-cause) first." |
### Optional Artifacts

| Artifact | Source | Benefit |
|---|---|---|
| product-context.md | mkt-copywriting | Constraints and capabilities context |
Read `.agents/mkt/root-cause.md`. Quote the root cause statement(s) and gap percentages. Do NOT brainstorm generic growth ideas without a root cause anchor.
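The artifact gating above can be sketched as a pre-flight check, assuming the `.agents/mkt` layout described in the tables; `check_artifacts` is an illustrative name, not part of the skill:

```python
from pathlib import Path

BASE = Path(".agents/mkt")


def check_artifacts() -> str:
    """Enforce the required/optional artifact rules, then return the
    root cause text the brainstorm must anchor to."""
    if not (BASE / "root-cause.md").exists():
        # Hard stop: the Solution track needs a root cause anchor.
        raise SystemExit(
            "Run the Problem track (mkt-diagnosis -> mkt-hypothesis -> "
            "mkt-root-cause) first."
        )
    if not (BASE / "product-context.md").exists():
        # Optional artifact: interview the user or bootstrap via mkt-copywriting.
        print("product-context.md missing: interview or run mkt-copywriting")
    return (BASE / "root-cause.md").read_text()
```

The required artifact is a hard stop; the optional one only degrades context, so it warns instead of exiting.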
## Step 1: Understand Constraints
Ask:
- Root cause context: Confirm the root cause and gap size
- Constraints: Budget ceiling? Team size? Hard deadlines?
- What you've tried: What's been attempted? What worked/didn't?
## Step 2: Research + Generate
WebSearch directive: search for how other companies solved this specific root cause:
- "[root cause topic] case study" (e.g., "ad targeting quality case study")
- "how [company type] improved [metric]" (e.g., "how SaaS improved trial conversion")
- "[root cause] solution [industry]" (e.g., "email deliverability fix SaaS")
- site:reddit.com "[root cause topic]" what worked
Use real examples to inspire specific, targeted ideas — not generic playbooks.
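The query patterns above can be expanded mechanically from the root cause. A sketch; the `build_queries` helper and its parameters are assumptions for illustration:

```python
def build_queries(topic: str, company_type: str, metric: str, industry: str):
    """Expand a root cause into the four search patterns above."""
    return [
        f'"{topic} case study"',
        f'"how {company_type} improved {metric}"',
        f'"{topic} solution {industry}"',
        f'site:reddit.com "{topic}" what worked',
    ]


# Example expansion for the ad-targeting root cause.
print(build_queries("ad targeting quality", "SaaS", "trial conversion", "SaaS"))
```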
For each idea:
| # | Name | Hypothesis | Mechanic | Target Metric | Effort |
|---|---|---|---|---|---|
| 1 | [Name] | If [action], then [outcome], because [root cause connection] | [2-3 sentences: specific steps, not categories] | [Metric] | S/M/L |
### Rules
- Root cause anchor: Every hypothesis MUST reference the root cause
- Anti-generic test: For each idea, ask: "Would this help ANY company, or only one with OUR root cause?" Generic → rewrite or cut.
- Specific mechanic: "Improve onboarding" ≠ mechanic. "Add 3-step interactive tour triggered on first login, guiding to first report export" = mechanic.
- Effort mix: ≥2 Small (days, 1 person), ≥2 Medium (1-2 weeks, small team), ≥1 Large (month+, cross-team)
- Distinct approaches: Each idea is a genuinely different approach
## Step 3: Refine
Ask: "Which resonate? Any you'd cut immediately? Any that spark a different idea?"
## Artifact Template
On re-run: rename the existing artifact to `initiatives.v[N].md` and create a new one with an incremented version.
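The re-run rule can be sketched as follows, assuming prior versions follow the `initiatives.v[N].md` naming; `rotate` is an illustrative helper:

```python
import re
from pathlib import Path


def rotate(dirpath: Path, name: str = "initiatives") -> Path:
    """Rename an existing artifact to its next .v[N] slot, then
    return the path where the fresh artifact should be written."""
    current = dirpath / f"{name}.md"
    if current.exists():
        # Find the highest existing version number and increment it.
        versions = [
            int(m.group(1))
            for p in dirpath.glob(f"{name}.v*.md")
            if (m := re.fullmatch(rf"{name}\.v(\d+)\.md", p.name))
        ]
        current.rename(dirpath / f"{name}.v{max(versions, default=1) + 1}.md")
    return current
```

The first re-run produces `initiatives.v2.md`, treating the original as implicit v1.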
---
skill: mkt-initiative-generate
version: 1
date: {{today}}
status: draft
---
# Initiatives
**Root Cause:** [from root-cause.md]
## Initiatives
### 1. [Name] — Effort: S
**Hypothesis:** If [action], then [outcome], because [root cause connection].
**Mechanic:** [Specific steps — what you'd actually do, in order]
**Target Metric:** [metric]
**Anti-generic check:** [Why this only works for our specific root cause]
### 2. [Name] — Effort: S
[Same format]
### 3. [Name] — Effort: M
[Same format]
[...5-10 total]
## Next Step
Run `mkt-initiative-prioritize` to score and rank these.
## Worked Example
Root Cause: Signups declined because (1) ad targeting brought low-intent visitors (~55%) and (2) homepage redesign lost trust signals (~35%).
# Initiatives
**Date:** 2026-03-13
**Skill:** mkt-initiative-generate
**Root Cause:** (1) Ad targeting quality (~55%), (2) Homepage trust/clarity (~35%)
## Initiatives
### 1. Restore + Refine Paid Targeting — Effort: S
**Hypothesis:** If we revert to original targeting and add a converters lookalike, then paid conversion returns to ≥3%, because the current broad targeting is bringing visitors outside our ICP.
**Mechanic:** Pull last 90d converters list → create 1% lookalike in Meta → A/B test current broad vs. lookalike targeting → 7-day test at $50/day per variant.
**Target Metric:** Paid visitor-to-signup rate
**Anti-generic check:** Only relevant because our specific targeting change caused the quality drop.
### 2. Restore Homepage Social Proof — Effort: S
**Hypothesis:** If we add back customer logos + testimonial above the fold, then homepage bounce rate drops to ≤40%, because the redesign removed the trust signals that were on the old page.
**Mechanic:** Screenshot old homepage via Wayback Machine → identify removed trust elements → add 3 customer logos + 1 short testimonial quote to new hero section → deploy.
**Target Metric:** Homepage bounce rate
**Anti-generic check:** Only relevant because our specific redesign removed these elements.
### 3. Landing Page per Ad Group — Effort: M
**Hypothesis:** If paid visitors land on a page matching their ad's promise, then paid conversion improves by 30%+, because the current generic homepage doesn't match specific ad copy.
**Mechanic:** Identify top 3 ad groups by spend → create 1 landing page per group with message-matched headline → redirect ad traffic → measure conversion per LP vs. homepage.
**Target Metric:** Ad-to-signup conversion by ad group
**Anti-generic check:** Addresses both root causes — targeting quality AND page clarity for specific audiences.
### 4. Homepage A/B: Old vs. New — Effort: S
**Hypothesis:** If the old homepage converts better, then the redesign itself is the problem, because we'll have direct evidence comparing the two.
**Mechanic:** Restore old homepage on a subdomain or via feature flag → split traffic 50/50 → run for 14 days → measure signup conversion.
**Target Metric:** Homepage-to-signup rate
**Anti-generic check:** Directly tests whether Root Cause 2 is the homepage design itself.
### 5. Intent-Qualifying Ad Creative — Effort: M
**Hypothesis:** If ad creative pre-qualifies intent (mentions price, mentions specific use case), then paid visitor quality improves even with broad targeting, because self-selection filters out low-intent clicks.
**Mechanic:** Write 3 new ad variants that mention pricing tier or specific use case → run alongside current ads → compare conversion rate of clickers.
**Target Metric:** Paid ad-to-signup conversion rate
**Anti-generic check:** Only relevant because broad targeting is bringing wrong audience.
### 6. Dedicated Onboarding for Paid Visitors — Effort: L
**Hypothesis:** If paid visitors get a tailored first-visit experience emphasizing the value prop they saw in the ad, then their conversion improves, because the gap between ad promise and site experience is what loses them.
**Mechanic:** Detect UTM source on landing → show contextual welcome message matching ad group → customize hero copy dynamically → build for top 3 ad groups.
**Target Metric:** Paid visitor activation + signup rate
**Anti-generic check:** Only needed because paid visitors have different expectations than organic.
## Next Step
Run `mkt-initiative-prioritize` to score and rank these.
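Reading a split test like Initiative 4's 50/50 homepage run calls for a significance check before declaring a winner. A hedged sketch using a standard two-proportion z-test, with made-up counts; this skill does not prescribe a particular test:

```python
from math import erf, sqrt


def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))


# Hypothetical 14-day readout: old homepage 180/3000 signups vs. new 120/3000.
p = two_proportion_z(180, 3000, 120, 3000)
print(round(p, 4))
```

A small p-value (conventionally < 0.05) would support Root Cause 2, i.e. that the redesign itself hurt conversion.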
## References
- `references/hypothesis-framework.md` — If/Then/Because templates
- `references/initiative-types.md` — Hero vs. Support classification