adi-decision-engine

Structured multi-criteria decision analysis for ranking options with weights, constraints, confidence, tradeoff reasoning, sensitivity analysis, and explainable recommendations. Use when the user asks for decision support, MCDA, weighted scoring, prioritization, vendor selection, route planning, hiring shortlist ranking, tool comparison, procurement decisions, or auditable agent decision logic.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "adi-decision-engine" with this command: npx skills add dimgouso/adi-decision-engine

ADI Decision Engine

Core promise

Turn a messy tradeoff problem into a structured, auditable multi-criteria decision and return a ranked recommendation with confidence and explanation.

When to use this skill

Use this skill when the user needs structured decision support rather than open-ended brainstorming. Typical triggers include:

  • multi-criteria decision analysis
  • weighted scoring or option ranking
  • vendor selection or procurement
  • route planning with explicit tradeoffs
  • hiring shortlist ranking
  • tool or platform comparison
  • policy-driven or auditable agent decisions

Input modes

This skill supports exactly two input modes.

1. Structured mode

The user already has a decision request with:

  • options
  • criteria
  • optional constraints
  • optional policy_name
  • optional evidence, confidence, or context

Use scripts/validate_request.py first if request quality is uncertain, then run scripts/run_adi.py to execute the request.
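
As a sketch, a structured request might look like the following. Only options, criteria, constraints, and policy_name come from this document; the per-criterion fields (weight, direction) and the constraint shape are illustrative assumptions, not a documented schema.

```python
# Hypothetical structured request for scripts/validate_request.py and
# scripts/run_adi.py. Inner field names are assumptions for illustration.
import json

request = {
    "options": ["vendor_a", "vendor_b", "vendor_c"],
    "criteria": [
        {"name": "cost", "weight": 0.4, "direction": "minimize"},
        {"name": "reliability", "weight": 0.6, "direction": "maximize"},
    ],
    "constraints": [{"criterion": "cost", "max": 100000}],  # optional
    "policy_name": "balanced",  # optional: balanced, risk_averse, exploratory
}

print(json.dumps(request, indent=2))
```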

2. Freeform mode

The user provides a natural-language tradeoff problem.

First use scripts/normalize_problem.py to produce a request skeleton. Do not pretend the request is complete if important fields are missing. If the skeleton is not ready, ask for the missing inputs instead of inventing scores or constraints.
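
For illustration, a completeness check on a skeleton might look like this. The null markers and field names are assumptions about what scripts/normalize_problem.py emits, not its actual output format.

```python
# Illustrative request skeleton with a gap left by normalization.
skeleton = {
    "options": ["train", "drive"],
    "criteria": [{"name": "travel_time", "weight": None, "direction": "minimize"}],
    "constraints": [],
}

def missing_inputs(req):
    """List the fields that still need user input before ADI can run."""
    gaps = []
    if not req.get("options"):
        gaps.append("options")
    if not req.get("criteria"):
        gaps.append("criteria")
    for i, c in enumerate(req.get("criteria", [])):
        if c.get("weight") is None:
            gaps.append(f"criteria[{i}].weight")
    return gaps

print(missing_inputs(skeleton))
```

If missing_inputs returns anything, ask the user for those values instead of inventing them.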

Output contract

If ADI runs successfully, the final answer must contain:

  • best_option
  • a short rationale for why it won
  • top-ranked alternatives
  • confidence summary
  • constraint impact summary
  • sensitivity or stability summary when available
  • explicit assumptions

If the request is not complete enough to run, return a request-completion prompt rather than a fabricated ranking.

Workflow

  1. Determine whether the user input is structured or freeform.
  2. For freeform input, normalize it into a request skeleton using scripts/normalize_problem.py.
  3. Validate candidate requests with scripts/validate_request.py.
  4. Run complete requests with scripts/run_adi.py.
  5. Present the ADI result in clear decision-support language:
    • recommendation first
    • strongest tradeoff second
    • caveats and sensitivity after that
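
The steps above can be sketched as a pure planning function that only decides which scripts to run, in order; actually executing them is left out.

```python
# Plan which scripts to run for a given request (steps 1-4 above).
def plan_pipeline(is_structured, quality_uncertain=True):
    steps = []
    if not is_structured:
        steps.append("scripts/normalize_problem.py")  # step 2: freeform only
    if quality_uncertain:
        steps.append("scripts/validate_request.py")   # step 3
    steps.append("scripts/run_adi.py")                # step 4
    return steps

print(plan_pipeline(is_structured=False))
```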

Decision hygiene rules

  • Never rank options without explicit criteria.
  • Never silently invent hard constraints.
  • If criterion direction is ambiguous, stop and clarify.
  • Normalize vague goals into named criteria before scoring.
  • Prefer a small, explicit criteria set over many overlapping criteria.
  • Keep the policy choice visible: balanced, risk_averse, or exploratory.
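
These rules presuppose a weighted-scoring core. As a minimal sketch (not ADI's actual math, which this document does not show), a weighted sum over explicit criteria illustrates why criterion direction must be unambiguous before scoring:

```python
# Minimal weighted-sum scoring with explicit criteria, weights, and directions.
# Not ADI's real model; it only illustrates the shape of the idea.
def weighted_score(option_scores, criteria):
    """option_scores: {criterion: raw score in [0, 1]}.
    criteria: dicts with name, weight (summing to 1), and direction."""
    total = 0.0
    for c in criteria:
        s = option_scores[c["name"]]
        if c["direction"] == "minimize":
            s = 1.0 - s  # flip so that lower raw values score higher
        total += c["weight"] * s
    return total

criteria = [
    {"name": "cost", "weight": 0.4, "direction": "minimize"},
    {"name": "reliability", "weight": 0.6, "direction": "maximize"},
]
print(round(weighted_score({"cost": 0.2, "reliability": 0.9}, criteria), 2))
```

Flipping the direction of a single criterion can reverse the ranking, which is why an ambiguous direction is a stop-and-clarify condition rather than a guess.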

Output quality rules

  • Show the top recommendation first.
  • Explain why it won.
  • Mention the strongest tradeoff.
  • Call out eliminated or constraint-violating options.
  • Include confidence caveats when evidence is weak.
  • Use a compact comparison table or structured bullet list when comparing several options.
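
A compact comparison table can be rendered mechanically. This plain-text formatter is an illustration; the column names are assumptions, not a fixed layout.

```python
# Render a compact plain-text comparison table for several options.
def comparison_table(rows, columns):
    """rows: list of dicts keyed by column name; columns: ordered names."""
    lines = [" | ".join(columns)]
    lines.append("-+-".join("-" * len(c) for c in columns))
    for r in rows:
        lines.append(" | ".join(str(r.get(c, "")) for c in columns))
    return "\n".join(lines)

table = comparison_table(
    [
        {"option": "vendor_b", "score": 0.86, "note": "recommended"},
        {"option": "vendor_a", "score": 0.71, "note": "cheaper, less reliable"},
    ],
    ["option", "score", "note"],
)
print(table)
```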

Safety and honesty rules

  • No hidden math.
  • No fake scores.
  • No fabricated evidence.
  • Do not claim ADI ran if the runtime dependency is missing.
  • Do not request API keys.
  • Do not require network access for the core workflow.
  • Do not tell the user to trust the ranking if the request is under-specified.

Runtime requirements

  • python3
  • either an importable adi-decision package or the adi CLI on PATH

If the ADI runtime is unavailable, stop with a clear error and explain that the dependency must be installed locally.
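
The availability check can be sketched with the standard library. The Python module name "adi" and the CLI name "adi" are assumptions based on the requirement above; substitute the real names of the installed runtime.

```python
# Check for either an importable ADI package or an `adi` CLI on PATH.
import importlib.util
import shutil

def adi_runtime_available(module_name="adi", cli_name="adi"):
    """True if the Python package is importable or the CLI is on PATH."""
    if importlib.util.find_spec(module_name) is not None:
        return True
    return shutil.which(cli_name) is not None

if not adi_runtime_available():
    print("ADI runtime missing: install the package or the adi CLI locally.")
```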

References

Examples

