# moltblock — Trust Layer for AI Agents
## Description
Moltblock provides verification gating for AI-generated artifacts. It runs policy checks to catch dangerous patterns before they reach production.
What moltblock does:
- Generates artifacts via LLM API calls, then runs policy checks against the output
- Returns a structured verification result (pass/fail with evidence)
- Reads its own config files (`moltblock.json`, `~/.moltblock/moltblock.json`) if present
- API keys are read from environment variables at runtime and sent only to the configured LLM provider endpoint
- No code execution occurs — moltblock only performs policy checks on generated artifacts
## When to Use
Use moltblock when the task involves:
- **High risk (always verify):** Destructive operations (`rm -rf`, `DROP TABLE`), privilege escalation (`sudo`, `chmod 777`), credential/key access (`.ssh/`, `id_rsa`, `.env`), system modification, raw disk writes
- **Medium risk (verify when uncertain):** Network requests (`curl`, `wget`, `fetch`), file writes, database modifications, subprocess spawning, dynamic code evaluation
- **Low risk (skip verification):** Simple text responses, math, reading public information, code that doesn't touch the filesystem or network
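The risk tiers above can be turned into a simple pre-filter that an agent uses to decide when to invoke moltblock. The sketch below is illustrative only: the keyword patterns mirror the tiers listed here, but the helper names (`classifyTaskRisk`, `shouldVerify`) and the pattern set are our own, not part of the moltblock API.

```typescript
// Illustrative keyword-based pre-filter for deciding when to call moltblock.
// Patterns mirror the risk tiers documented above; names are hypothetical.
type Risk = "high" | "medium" | "low";

const HIGH_RISK: RegExp[] = [
  /rm\s+-rf/, /DROP\s+TABLE/i, /sudo\b/, /chmod\s+777/,
  /\.ssh\//, /id_rsa/, /\.env\b/,
];
const MEDIUM_RISK: RegExp[] = [
  /curl\b/, /wget\b/, /fetch\(/, /subprocess/, /eval\(/,
];

function classifyTaskRisk(task: string): Risk {
  if (HIGH_RISK.some((p) => p.test(task))) return "high";
  if (MEDIUM_RISK.some((p) => p.test(task))) return "medium";
  return "low";
}

// High- and medium-risk tasks go through verification; low-risk tasks skip it.
function shouldVerify(task: string): boolean {
  return classifyTaskRisk(task) !== "low";
}
```

A real agent would likely treat this only as a first pass and still defer to moltblock's own policy rules for anything borderline.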
## Tool: moltblock
Verify a task before execution.
### Usage
```shell
npx moltblock@0.11.4 "<task description>" --provider <provider> --json
```
### Parameters
| Parameter | Required | Description |
|---|---|---|
| `task` | Yes | The task description to verify |
| `--provider` | No | LLM provider: `openai`, `google`, `zai`, `local` (auto-detected from env) |
| `--model` | No | Model override |
| `--json` | No | Output structured JSON result |
### Environment Variables
Moltblock auto-detects the LLM provider from whichever API key is set. If no key is set, it falls back to a local LLM at `localhost:1234`. Set one of these for a cloud provider:
- `OPENAI_API_KEY` — OpenAI (primary)
- `ANTHROPIC_API_KEY` — Anthropic/Claude (optional)
- `GOOGLE_API_KEY` — Google/Gemini (optional)
- `ZAI_API_KEY` — ZAI (optional)
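The auto-detection described above can be sketched as a simple precedence check over the environment. Note the hedge: the docs name `OPENAI_API_KEY` as primary and `localhost:1234` as the local fallback, but the exact precedence among the optional keys below is our assumption, not documented moltblock behavior.

```typescript
// Sketch of provider auto-detection from environment variables.
// OPENAI_API_KEY is documented as primary; the ordering of the optional
// keys is an assumption. With no key set, fall back to a local LLM
// (localhost:1234).
function detectProvider(env: Record<string, string | undefined>): string {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.GOOGLE_API_KEY) return "google";
  if (env.ZAI_API_KEY) return "zai";
  return "local";
}
```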
### Example
```shell
# Verify a task
npx moltblock@0.11.4 "implement a function that validates email addresses" --json
```
### Output (JSON mode)
```json
{
  "verification_passed": true,
  "verification_evidence": "All policy rules passed.",
  "authoritative_artifact": "...",
  "draft": "...",
  "critique": "...",
  "final_candidate": "..."
}
```
## Installation
Use directly with npx (recommended, no install needed):
```shell
npx moltblock@0.11.4 "your task" --json
```
Or install globally:
```shell
npm install -g moltblock@0.11.4
```
## Configuration
No configuration file is required. Moltblock auto-detects your LLM provider from environment variables and falls back to sensible defaults.
Optionally, place `moltblock.json` in your project root or `~/.moltblock/moltblock.json` to customize model bindings:
```json
{
  "agent": {
    "bindings": {
      "generator": { "backend": "google", "model": "gemini-2.0-flash" },
      "critic": { "backend": "google", "model": "gemini-2.0-flash" },
      "judge": { "backend": "google", "model": "gemini-2.0-flash" }
    }
  }
}
```
See the full configuration docs for policy rules and advanced options.
## Source
- Repository: github.com/moltblock/moltblock
- npm: npmjs.com/package/moltblock
- License: MIT
## Security
When used as a skill, moltblock performs policy checks only — no code is generated, written to disk, or executed. The tool analyzes task descriptions against configurable policy rules and returns a pass/fail verification result.
The CLI additionally supports a --test flag for direct user invocation that executes code verification via vitest. This flag is not exposed to agents through this skill and should only be used directly by developers in sandboxed environments. See the CLI documentation for details.
## Disclaimer
Moltblock reduces risk but does not eliminate it. Verification is best-effort — policy rules and LLM-based checks can miss dangerous patterns. Always review generated artifacts before executing them. The authors and contributors are not responsible for any damage, data loss, or security incidents resulting from the use of this tool. Use at your own risk.