Recommend

Context-aware recommendations. Learns preferences, researches options, anticipates expectations.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill:

Install skill "Recommend" with this command: npx skills add ivangdavila/recommend

Core Loop

Context → Preferences → Research → Match → Recommend

Every recommendation requires: knowing the user + knowing the options.

Check sources.md for where to find user context. Check categories.md for domain-specific factors.


Step 1: Context Gathering

Before recommending, search user context. See sources.md for full source list.

Minimum output: 3-5 relevant user signals before proceeding. If insufficient, ask targeted questions.
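The gate above can be sketched in a few lines. This is a minimal illustration, assuming signals are plain strings; the function name and the 3-signal floor follow the text, everything else is an assumption.

```python
# Minimal sketch of the Step 1 gate: proceed only with enough user signals.
MIN_SIGNALS = 3  # lower bound of the 3-5 range from the text

def context_gate(signals: list[str]) -> str:
    """Return the next action given the gathered user signals."""
    if len(signals) >= MIN_SIGNALS:
        return "proceed"
    return "ask_questions"  # insufficient context: ask targeted questions

print(context_gate(["prefers quiet venues", "vegetarian", "budget ~$30"]))
```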


Step 2: Preference Extraction

From gathered context, extract:

| Dimension | Question |
| --- | --- |
| Values | What matters most? (Quality, price, speed, novelty, safety) |
| Constraints | Hard limits? (Budget, time, dietary, ethical) |
| History | What worked? What disappointed? |
| Mood | Adventurous or safe? Exploring or comfort? |

Output: 3-5 bullet preference profile for this request.
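One way to hold the four dimensions in code is a small record type. This is a hypothetical structure, not any real API; field names mirror the table above.

```python
# Hypothetical container for the extracted preference profile.
from dataclasses import dataclass, field

@dataclass
class PreferenceProfile:
    values: list[str] = field(default_factory=list)       # what matters most
    constraints: list[str] = field(default_factory=list)  # hard limits
    history: list[str] = field(default_factory=list)      # past wins and misses
    mood: str = "safe"                                    # adventurous or safe

profile = PreferenceProfile(
    values=["quality", "speed"],
    constraints=["budget <= 50", "vegetarian"],
    history=["loved the Thai place", "disliked loud bars"],
    mood="adventurous",
)
```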


Step 3: Research Options

Now—and only now—research candidates:

  • Breadth first: Don't anchor on first good option
  • Source quality: Prioritize reviews, ratings, expert opinions
  • Recency: Check if information is current
  • Availability: Confirm options are actually accessible

Output: Shortlist of 3-7 viable candidates with key attributes.


Step 4: Match & Rank

Score each candidate against the preference profile:

Candidate → Values alignment + Constraint fit + History match + Mood fit

Disqualify anything that violates hard constraints.

Rank by total alignment, not just one dimension.
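The scoring and disqualification steps above can be sketched as follows. The per-dimension fit fields and the candidate dicts are illustrative assumptions; only the sum-then-rank logic and the hard-constraint filter come from the text.

```python
# Sketch of Step 4: drop hard-constraint violators, sum the four fit
# dimensions, rank by total alignment (highest first).
def rank(candidates: list[dict], constraints: list[str]) -> list[str]:
    viable = []
    for c in candidates:
        if any(v in c.get("violates", []) for v in constraints):
            continue  # disqualify: violates a hard constraint
        score = (
            c.get("values_fit", 0)
            + c.get("constraint_fit", 0)
            + c.get("history_fit", 0)
            + c.get("mood_fit", 0)
        )
        viable.append((score, c["name"]))
    return [name for score, name in sorted(viable, reverse=True)]

candidates = [
    {"name": "Cafe Sol", "violates": [], "values_fit": 2, "constraint_fit": 2, "history_fit": 1, "mood_fit": 1},
    {"name": "Steak Bar", "violates": ["vegetarian"], "values_fit": 3, "constraint_fit": 0, "history_fit": 2, "mood_fit": 2},
    {"name": "Noodle Hut", "violates": [], "values_fit": 1, "constraint_fit": 2, "history_fit": 0, "mood_fit": 1},
]
# Cafe Sol (6) outranks Noodle Hut (4); Steak Bar is disqualified outright.
print(rank(candidates, ["vegetarian"]))
```

Note that a disqualifier is not just a low score: a candidate with the best totals still drops out if it violates a single hard constraint.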


Step 5: Recommend

Present 1-3 recommendations:

🎯 RECOMMENDATION: [Option]
📌 WHY: Matches [preference], avoids [constraint]
⚖️ TRADEOFF: Less [X] than [Alternative]
🔍 CONFIDENCE: [Level] — based on [data quality]
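The template above can be rendered mechanically. The labels and emoji are taken from the format; the function and parameter names are illustrative.

```python
# Render one recommendation in the Step 5 template.
def render(option: str, why: str, tradeoff: str, confidence: str, basis: str) -> str:
    return (
        f"🎯 RECOMMENDATION: {option}\n"
        f"📌 WHY: {why}\n"
        f"⚖️ TRADEOFF: {tradeoff}\n"
        f"🔍 CONFIDENCE: {confidence} — based on {basis}"
    )

print(render("Cafe Sol", "matches quality, avoids budget limit",
             "less novelty than Steak Bar", "High", "recent reviews"))
```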

Adaptive Learning

After each recommendation:

  • Track outcome: Accepted? Modified? Rejected?
  • Update preferences: Acceptance = reinforcement, rejection = adjustment
  • Note exceptions: "Normally X, but for Y context preferred Z"

Store learnings in memory for future recommendations.
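The reinforcement/adjustment rule can be sketched as a simple weight update. The 0.1 step size and the default weight of 0.5 are arbitrary assumptions; only the accepted-up, rejected-down direction comes from the text.

```python
# Sketch of the feedback loop: acceptance reinforces a preference weight,
# rejection adjusts it down, modification leaves it unchanged.
def update_preference(weights: dict, preference: str, outcome: str) -> dict:
    step = {"accepted": 0.1, "modified": 0.0, "rejected": -0.1}[outcome]
    weights[preference] = weights.get(preference, 0.5) + step
    return weights

w = update_preference({}, "quiet venues", "accepted")
w = update_preference(w, "loud bars", "rejected")
print(w)
```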


Traps

  • Projecting — Your taste ≠ their taste
  • Recency bias — Last choice isn't always preference
  • Ignoring context — Tuesday lunch ≠ anniversary dinner
  • Over-filtering — Too many constraints = nothing fits
  • Stale data — Preferences evolve, verify periodically

Recommendations are predictions. More context = better predictions.

