distill

Diagnose requirements problems and guide discovery of real needs vs. stated wants. Use before any implementation to distinguish problem from solution, surface hidden constraints, and bound scope to a viable V1. Trigger phrases: "requirements analysis", "what should I build", "clarify requirements", "is this the right problem", "define scope", "what does the user need", "distill requirements", "/empire-product:distill". Findings stay local — never post to GitHub.

Safety Notice

This listing is imported from SkillsMP metadata and should be treated as untrusted until the upstream source has been reviewed.


Install skill "distill" with this command: npx skills add marcoskichel/skillsmp-marcoskichel-marcoskichel-distill

No markdown body

This source entry includes only metadata; it does not include full markdown content.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

probe

Diagnose thinking failures and audit whether reasoning serves inquiry or defense. Use when reasoning feels stuck or circular, a conclusion feels defended rather than discovered, confidence is high but evidence is thin, analysis grows more elaborate without becoming more accurate, or same approach keeps failing. Trigger phrases: "check my thinking", "am I reasoning well", "why am I stuck", "reasoning feels circular", "probe my logic", "/empire-product:probe". Two modes: self-monitoring (agent audits own process) and user coaching (diagnose user's thinking pattern with questions, not declarations).

Repository Source · Needs Review
Coding

team-review

Trigger when user says: "team review", "have specialists review", "review my changes", "re-review", "review again", "another pass", "ask the team", "specialist review", "/empire-dev:team-review", "have the team look at this", "get specialists to review", "run a team review", "do a specialist review". Spawns parallel specialist subagents to review diffs and consolidates findings. Never posts to GitHub.

Repository Source · Needs Review
Research

recon

Trigger when user says: "competitor analysis", "compare competitors", "/empire-product:recon", "competitor matrix", "competitor research", "feature gap", "scout competitors", "size up competition", "pricing comparison vs competitors", "positioning analysis", "competitive landscape". Maps competitor pricing, features, positioning, and gaps across a chosen dimension set. Produces side-by-side matrix with confidence-tagged data and positioning angle. Different from `/empire-research:compare`, which evaluates tools, libraries, vendors, or architectural choices — NOT competitors. Findings stay local — never posted externally.

Repository Source · Needs Review
Research

vet

Trigger when user says: "vet this idea", "vet idea", "validate idea", "go no go", "pressure test", "is this idea good", "kill the idea", "should I build this", "fatal flaw check", "what do you think of this product", "stress test the idea", "brutal honesty on this idea". Pressure-tests a product idea with brutal honesty: web research for competitors and demand, fatal-flaw hypothesis, anti-sycophancy mode, structured go/no-go output with pivots. Findings stay local — never posted externally.

Repository Source · Needs Review