moltblock - Trust Layer for AI Agents

Verification gating for AI-generated artifacts. Policy checks to catch dangerous patterns before execution.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install this skill with: npx skills add meshailabs/moltblock


Description

Moltblock provides verification gating for AI-generated artifacts. It runs policy checks to catch dangerous patterns before they reach production.

What moltblock does:

  • Generates artifacts via LLM API calls, then runs policy checks against the output
  • Returns a structured verification result (pass/fail with evidence)
  • Reads its own config files (moltblock.json, ~/.moltblock/moltblock.json) if present
  • API keys are read from environment variables at runtime and sent only to the configured LLM provider endpoint
  • No code execution occurs — moltblock only performs policy checks on generated artifacts

When to Use

Use moltblock when the task involves:

  • High risk (always verify): Destructive operations (rm -rf, DROP TABLE), privilege escalation (sudo, chmod 777), credential/key access (.ssh/, id_rsa, .env), system modification, raw disk writes
  • Medium risk (verify when uncertain): Network requests (curl, wget, fetch), file writes, database modifications, subprocess spawning, dynamic code evaluation
  • Low risk (skip verification): Simple text responses, math, reading public information, code that doesn't touch the filesystem or network
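The risk tiers above can be wired into a pre-execution wrapper. The sketch below is illustrative only: the pattern lists approximate the tiers described here and are not moltblock's actual policy rules.

```shell
# Hypothetical triage wrapper: decide whether a task needs verification
# before (in real use) handing it to moltblock.
needs_verification() {
  case "$1" in
    *"rm -rf"*|*"sudo"*|*"chmod 777"*|*"DROP TABLE"*|*".ssh/"*|*"id_rsa"*|*".env"*)
      return 0 ;;                  # high risk: always verify
    *"curl"*|*"wget"*|*"fetch"*|*"eval"*)
      return 0 ;;                  # medium risk: verify when uncertain
    *)
      return 1 ;;                  # low risk: skip verification
  esac
}

task="delete logs with rm -rf /var/log/app"
if needs_verification "$task"; then
  echo "verify"                    # real use: npx moltblock@0.11.4 "$task" --json
else
  echo "skip"
fi
```

A wrapper like this keeps low-risk tasks fast while routing destructive or network-touching tasks through verification.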

Tool: moltblock

Verify a task before execution.

Usage

npx moltblock@0.11.4 "<task description>" --provider <provider> --json

Parameters

Parameter     Required  Description
task          Yes       The task description to verify
--provider    No        LLM provider: openai, google, zai, local (auto-detected from env)
--model       No        Model override
--json        No        Output structured JSON result

Environment Variables

Moltblock auto-detects the LLM provider from whichever API key is set. If no key is set, it falls back to a local LLM at localhost:1234. Set one of these for a cloud provider:

  • OPENAI_API_KEY — OpenAI (primary)
  • ANTHROPIC_API_KEY — Anthropic/Claude (optional)
  • GOOGLE_API_KEY — Google/Gemini (optional)
  • ZAI_API_KEY — ZAI (optional)
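The auto-detection behaves roughly like the sketch below. The exact precedence order between keys is an assumption, not documented behavior; only the documented providers are shown.

```shell
# Illustrative sketch of provider auto-detection from environment variables.
detect_provider() {
  if   [ -n "$OPENAI_API_KEY" ]; then echo openai
  elif [ -n "$GOOGLE_API_KEY" ]; then echo google
  elif [ -n "$ZAI_API_KEY" ];   then echo zai
  else echo local                # fallback: local LLM at localhost:1234
  fi
}

# demo in a clean subshell with no cloud keys set
( unset OPENAI_API_KEY GOOGLE_API_KEY ZAI_API_KEY; detect_provider )   # prints "local"
```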

Example

# Verify a task
npx moltblock@0.11.4 "implement a function that validates email addresses" --json

Output (JSON mode)

{
  "verification_passed": true,
  "verification_evidence": "All policy rules passed.",
  "authoritative_artifact": "...",
  "draft": "...",
  "critique": "...",
  "final_candidate": "..."
}
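A caller can gate execution on the `verification_passed` field. A minimal sketch, assuming python3 is available for JSON parsing; the sample string mirrors the output shape above, and in real use it would come from the npx invocation shown in the comment.

```shell
# Gate a follow-up step on moltblock's JSON verification result.
result='{"verification_passed": true, "verification_evidence": "All policy rules passed."}'
# real use: result=$(npx moltblock@0.11.4 "your task" --json)

if echo "$result" | python3 -c 'import json, sys; sys.exit(0 if json.load(sys.stdin)["verification_passed"] else 1)'; then
  echo "verification passed"     # safe to proceed with the task
else
  echo "verification failed" >&2
  exit 1
fi
```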

Installation

Use directly with npx (recommended, no install needed):

npx moltblock@0.11.4 "your task" --json

Or install globally:

npm install -g moltblock@0.11.4

Configuration

No configuration file is required. Moltblock auto-detects your LLM provider from environment variables and falls back to sensible defaults.

Optionally, place moltblock.json in your project root or ~/.moltblock/moltblock.json to customize model bindings:

{
  "agent": {
    "bindings": {
      "generator": { "backend": "google", "model": "gemini-2.0-flash" },
      "critic": { "backend": "google", "model": "gemini-2.0-flash" },
      "judge": { "backend": "google", "model": "gemini-2.0-flash" }
    }
  }
}

See the full configuration docs for policy rules and advanced options.


Security

When used as a skill, moltblock performs policy checks only — no code is generated, written to disk, or executed. The tool analyzes task descriptions against configurable policy rules and returns a pass/fail verification result.

The CLI additionally supports a --test flag for direct user invocation that executes code verification via vitest. This flag is not exposed to agents through this skill and should only be used directly by developers in sandboxed environments. See the CLI documentation for details.

Disclaimer

Moltblock reduces risk but does not eliminate it. Verification is best-effort — policy rules and LLM-based checks can miss dangerous patterns. Always review generated artifacts before executing them. The authors and contributors are not responsible for any damage, data loss, or security incidents resulting from the use of this tool. Use at your own risk.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • Ai Competitor Analyzer (General): AI-driven competitor analysis with automated batch processing, improving analysis efficiency and quality for enterprises and professional teams.

  • Ai Data Visualization (General): Automated AI analysis with multi-format batch processing, significantly improving data-visualization efficiency and reducing cost for enterprise and individual users.

  • Ai Cost Optimizer (General): AI model cost optimization based on budget and task requirements; calculates savings and guides OpenClaw configuration and model-switching strategy.