evaluate-agent-native

Evaluate whether a service qualifies as "agent-native" using the five hard criteria from the awesome-agent-native-services standard, and check for the bonus "URL Onboarding" signal — the highest form of agent-nativeness, where an agent joins the service by reading one URL. Use when asked "is X agent-native?" or "should I add X to the list?"

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install the skill with: npx skills add org/repo

Skill: evaluate-agent-native

Use this skill to rigorously evaluate whether a service qualifies as "agent-native" according to the five-criterion standard. Also check for the URL Onboarding signal — the strongest indicator that a service was truly built for agents from day one.

The gold standard: URL Onboarding

Before applying the five criteria, ask the highest-level question:

Can an agent join and start using this service by reading a single URL?

Services that answer YES exhibit the strongest possible form of agent-nativeness. They have internalized the agent as a first-class user so deeply that the onboarding flow itself is machine-readable:

# The full agent onboarding in one instruction:
Read <url> and follow the instructions.

Examples:

  • Moltbook: Read https://www.moltbook.com/skill.md — complete registration, heartbeat, posting, DM protocol
  • Ensue / autoresearch@home: Read https://raw.githubusercontent.com/mutable-state-inc/autoresearch-at-home/master/collab.md — complete swarm joining, claiming, publishing protocol

This is qualitatively different from:

  • An SDK that a human developer installs (requires human coding time)
  • An MCP server that a human adds to a config file (requires human config edit)
  • A REST API that requires API key setup (requires human account creation)

URL Onboarding means the agent itself handles all of this — reading, understanding, and executing the join sequence autonomously.

Mark URL Onboarding as a strong bonus signal and highlight it prominently in the evaluation report.
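From the agent's side, the entire onboarding flow above reduces to fetching one document and following it. A minimal sketch of that first step, assuming the service serves its protocol file as plain text or markdown (the `fetch_onboarding_doc` and `looks_machine_readable` names are illustrative, not part of any standard):

```python
import urllib.request


def looks_machine_readable(content_type: str) -> bool:
    """Heuristic: plain-text or markdown responses suggest the document is
    meant for agents; an HTML response is probably a human landing page."""
    ct = content_type.lower()
    return any(t in ct for t in ("text/plain", "text/markdown")) or ct == ""


def fetch_onboarding_doc(url: str, timeout: float = 10.0) -> str:
    """Fetch a machine-readable onboarding document (e.g. a hosted skill.md).

    Everything after this point — registration, heartbeat, posting — is
    defined by the protocol the document itself describes.
    """
    req = urllib.request.Request(url, headers={"User-Agent": "onboarding-agent/0.1"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        content_type = resp.headers.get("Content-Type", "")
        body = resp.read().decode("utf-8", errors="replace")
    if not looks_machine_readable(content_type):
        raise ValueError(f"{url} did not return a machine-readable document")
    return body
```

The content-type check is the cheap litmus test: if "Read this URL" lands on an HTML marketing page, the service has not actually shipped URL Onboarding.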


When to activate

Activate when the user asks:

  • "Is [service] agent-native?"
  • "Does [service] qualify for the awesome list?"
  • "I want to add [service] — does it meet the criteria?"
  • "What's the difference between agent-native and agent-adapted?"
  • "Why isn't [service] on the list?"
  • "Does [service] have URL Onboarding?"

The five hard criteria

A service must pass all five to qualify as agent-native. Evaluate each one explicitly.

Criterion 1 — Agent-First Positioning

Test: Does the official homepage or documentation explicitly identify AI agents as the primary consumer?

Evidence to look for:

  • Homepage headline naming AI agents
  • Documentation framing agents as the core user
  • Product name or tagline that only makes sense for agents

Red flags:

  • "Now with AI agent support" (agents are an add-on)
  • "Build apps, workflows, and agents" (agents are one of many outputs)

Criterion 2 — Agent-Specific Primitives

Test: Does the API expose at least one primitive with no meaningful human-facing equivalent?

Questions to ask:

  • What is the core API object? Agent inbox? KYA token? Claim? Heartbeat? Or generic inbox/token/task?
  • Would this primitive exist if agents didn't exist?
  • Is the output format optimized for LLM consumption or human reading?

Pass examples: agent inbox, KYA identity token, approval gate with context-window injection, claim_experiment(), heartbeat protocol, publish_hypothesis().

Fail examples: a REST API that sends emails (humans use it too), a webhook any server can receive.
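To make the pass/fail distinction concrete, here is a minimal sketch of one passing primitive — a heartbeat protocol — under assumed semantics (the `HeartbeatClient` class, field names, and grace-period rule are all illustrative, not any real service's API). A primitive like this only makes sense for a long-running autonomous process; a human user has no equivalent of it:

```python
import time
import uuid


class HeartbeatClient:
    """Sketch of a heartbeat primitive: the agent periodically proves
    liveness so the service can reclaim its open claims if the agent dies."""

    def __init__(self, agent_id=None, interval_s=60):
        self.agent_id = agent_id or f"agent-{uuid.uuid4().hex[:8]}"
        self.interval_s = interval_s
        self.last_beat = None

    def beat(self, now=None):
        """Record a heartbeat and return the payload that would be sent."""
        self.last_beat = time.time() if now is None else now
        return {"agent_id": self.agent_id, "ts": self.last_beat}

    def is_alive(self, now, grace=2):
        """Service-side view: alive if the agent has beaten within
        `grace` heartbeat intervals."""
        return (
            self.last_beat is not None
            and (now - self.last_beat) <= grace * self.interval_s
        )
```

Ask the same question of any candidate primitive: would `is_alive` ever be called about a human?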

Criterion 3 — Autonomy-Compatible Control Plane

Test: Can an agent complete a full task loop without a human clicking anything?

Questions to ask:

  • Can the agent provision its own credentials?
  • Can the agent initiate, execute, and complete the action without a human redirect?
  • Does the service provide agent-appropriate constraint mechanisms?

Criterion 4 — Machine-to-Machine Integration Surface

Test: Is the primary interface an SDK, REST API, MCP server, webhook, or machine-readable URL?

Questions to ask:

  • Can an agent use this service without a human ever opening a browser?
  • Is there a URL, SDK, REST API, or MCP server documented as the primary integration path?

Note: A service that exposes a machine-readable skill.md or protocol URL (URL Onboarding) passes this criterion with exceptional strength.

Criterion 5 — Agent Identity / Delegation Semantics

Test: Does the service distinguish between (a) the agent's own identity, (b) delegated user permissions, and (c) an audit trail?

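A quick way to apply this test is to look at the credential the service issues and ask whether all three concerns appear as separate fields. A hedged sketch, where every field name (`agent_id`, `on_behalf_of`, `scopes`, `trace_id`) is a hypothetical example rather than any real token format:

```python
def check_delegation_semantics(token: dict) -> dict:
    """Check that a (hypothetical) credential separates the three concerns
    Criterion 5 asks about. Field names are illustrative, not a spec."""
    return {
        "agent_identity": "agent_id" in token,          # (a) who the agent is
        "delegated_permissions": "on_behalf_of" in token and "scopes" in token,  # (b)
        "audit_trail": "trace_id" in token,             # (c) actions traceable
    }


example_token = {
    "agent_id": "agent-7f3a",          # the agent's own identity
    "on_behalf_of": "user-42",         # which human delegated to it
    "scopes": ["inbox:read", "send"],  # the subset of the user's power it holds
    "trace_id": "run-2024-0001",       # links every action to an audit record
}
```

A service that issues the agent the user's own API key, with no separate agent identity or scoping, fails this criterion regardless of how the rest of the product is framed.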

Bonus signals (check all that apply)

| Signal | Weight | Evidence to look for |
| --- | --- | --- |
| URL Onboarding ⭐⭐⭐ | Highest | Service hosts a machine-readable skill.md / protocol doc an agent reads and follows to self-register |
| Dedicated agent identity model | High | Agent gets its own credential/wallet/token |
| MCP server published | Medium | Official MCP server with documented tools |
| Agent Skills (SKILL.md) published | Medium | npx skills add org/repo works |
| Per-agent state / memory / session | Medium | State isolated by agent instance |
| Audit / trajectory artifacts | Medium | Machine-readable evidence of agent actions |

How to test for URL Onboarding:

  1. Look for a skill.md, SKILL.md, collab.md, or similar machine-readable protocol file hosted at the service's domain or GitHub.
  2. Ask: could an agent read that URL and complete the full registration/onboarding sequence autonomously?
  3. Try the instruction: Read <url> and follow the instructions — does it work?
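Step 1 of this check can be sketched as a pure function that enumerates likely protocol-file locations for a service. The filename list mirrors the conventions above; the `.well-known/` path is a plausible extra guess, not part of any standard:

```python
from urllib.parse import urlparse

PROTOCOL_FILES = ("skill.md", "SKILL.md", "collab.md", ".well-known/skill.md")


def candidate_onboarding_urls(service_url: str) -> list:
    """Enumerate likely locations of a machine-readable protocol file
    on the service's own domain (GitHub-hosted docs need a separate look)."""
    parsed = urlparse(service_url)
    base = f"{parsed.scheme}://{parsed.netloc}"
    return [f"{base}/{name}" for name in PROTOCOL_FILES]
```

Fetch each candidate and then apply steps 2 and 3 manually: the judgment call about whether an agent could complete the sequence autonomously cannot be automated away.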

Classification decision tree

Does the service pass all 5 criteria?
├── YES → agent-native ✅
│   └── Does it also have URL Onboarding?
│       ├── YES → agent-native ⭐ (URL Onboarding — highest tier)
│       └── NO  → agent-native (standard)
└── NO
    ├── Originally built for humans, agent interfaces added later?
    │   └── YES → agent-adapted ⚠️
    └── Helps humans BUILD agents?
        └── YES → agent-builder ❌
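The decision tree transcribes directly into code. A minimal sketch (the function name and the "unclassified" fallback for services matching no branch are my additions):

```python
def classify(passes_all_five: bool, url_onboarding: bool,
             built_for_humans_first: bool, helps_build_agents: bool) -> str:
    """Direct transcription of the classification decision tree."""
    if passes_all_five:
        # URL Onboarding upgrades the verdict within agent-native
        return "agent-native (URL Onboarding)" if url_onboarding else "agent-native"
    if built_for_humans_first:
        return "agent-adapted"
    if helps_build_agents:
        return "agent-builder"
    return "unclassified"  # falls outside the tree; needs manual review
```

Note the ordering: URL Onboarding is only consulted after all five criteria pass, matching the rule that it amplifies a verdict but never substitutes for one.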

Evaluation output format

## Evaluation: {Service Name}
**Website:** {url}

### URL Onboarding Check ⭐
**Has URL Onboarding:** YES / NO
**Onboarding instruction (if YES):** Read {url} and follow the instructions to {join/register/participate}
**Notes:** {what the agent gets by reading that URL}

---

### Criterion 1 — Agent-First Positioning
**Result:** PASS / FAIL / PARTIAL
**Evidence:** "{exact quote}" — {source URL}

### Criterion 2 — Agent-Specific Primitives
**Result:** PASS / FAIL / PARTIAL
**Evidence:** {primitive name and description}
**No human equivalent because:** {explanation}

### Criterion 3 — Autonomy-Compatible Control Plane
**Result:** PASS / FAIL / PARTIAL
**Evidence:** {how agents operate without human confirmation}

### Criterion 4 — Machine-to-Machine Integration Surface
**Result:** PASS / FAIL / PARTIAL
**Evidence:** {URL, SDK, API, MCP details}

### Criterion 5 — Agent Identity / Delegation Semantics
**Result:** PASS / FAIL / PARTIAL / N/A
**Evidence:** {identity model details}

---

### Bonus signals
- [ ] URL Onboarding ⭐⭐⭐ — agent joins by reading one URL
- [ ] Dedicated agent identity model
- [ ] MCP server published
- [ ] Agent Skills (SKILL.md) published
- [ ] Per-agent state/memory/session
- [ ] Audit/trajectory/replay artifacts

---

### Overall verdict
**Classification:** agent-native ⭐ (URL Onboarding) / agent-native / agent-adapted / agent-builder
**Recommendation:** Add to main list / Add to Excluded section / Do not add
**Confidence:** High / Medium / Low
**Reasoning:** {one paragraph summary}

### Next steps
{If agent-native with URL Onboarding: highlight this in the issue and service file prominently}
{If agent-native without: link to issue template}
{If agent-adapted: explain what would need to change}

Common borderline cases

"The product added an MCP server — does that make it agent-native?"

No. MCP support is a bonus signal, not a criterion. The core question is whether the service was designed from inception for agents. A human email provider that adds an MCP server is still agent-adapted.

"The service has URL Onboarding but other criteria are weak."

URL Onboarding is the strongest bonus signal but cannot substitute for the five hard criteria. Evaluate all five independently; URL Onboarding is an amplifier, not a replacement.

"The service says 'for AI agents' in marketing."

Check the actual primitives. URL Onboarding is a reliable signal because it requires genuine design effort — you can't fake it with a marketing blog post.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • Openclaw Skills Smart Agent Memory (Automation) — Cross-platform long-term memory system for agents: layered context supply, a temperature model, skill-experience memory, structured storage, and automatic archiving. Three storage tiers: Markdown (human-readable, QMD-searchable), JSON (structured), and SQLite/FTS5 (high-performance full-text search). Pure native Node.js modules with zero external dependencies.
  • Evolver (Automation) — A self-evolution engine for AI agents. Analyzes runtime history to identify improvements and applies protocol-constrained evolution.
  • Workflow Builder (Automation) — Workflow design and optimization tool with design, automation, optimization, documentation, approval-flow, and system-integration capabilities. Use when you need...
  • Baidu Yijian Vision (Automation) — Professional-grade vision AI agent from Baidu supporting image, video, and live video-stream analysis. Compared with general-purpose base models, it maintains 95%+ professional accuracy while cutting inference cost by more than 50%, making it a first choice for visual inspection and monitoring-analysis tasks. Focus areas: safety management, SOP compliance, industrial quality inspection, commercial operations, and materials inventory. Covers SOP compliance with key-step completeness verification; industrial quality inspection and precision surface-defect...