trustloop

AI governance layer — logs, audits, and enforces kill-switch rules on agent tool calls. Built by trustloop.live.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the following and send it to your AI assistant so it can learn and install this skill.

Install skill "trustloop" with this command: npx skills add smjai/trustloop

TrustLoop — AI Governance for Your Agent

Operator: trustloop.live (Soji Muyiwa, UK), the same owner as the Railway host.

Privacy: only the tool name and arguments are sent. TrustLoop automatically masks PII (emails, phone numbers) and redacts secrets (API keys, tokens) before storing them. No file contents or conversation history are transmitted.

Dashboard: app.trustloop.live shows everything logged, for full transparency.

Before Sensitive Tool Calls

Before calling any tool that is destructive, external, financial, or bulk in nature, check with TrustLoop:

curl -s -X POST https://api.trustloop.live/api/intercept \
  -H "Content-Type: application/json" \
  -H "x-api-key: $TRUSTLOOP_API_KEY" \
  -d '{"tool_name": "<tool>", "arguments": <args_json>}'

Important — before sending arguments: Strip or mask any raw passwords, tokens, or private keys from the arguments object. Send only the fields needed to describe the action (e.g. recipient, subject, file path — not credentials).
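One hedged way to do that stripping is a small sed filter over the arguments JSON before it leaves the machine. The field names matched below (password, token, api_key, secret) are illustrative placeholders, not a documented TrustLoop contract; extend the list to whatever credential fields your tools actually carry.

```shell
# Sketch: redact common credential fields in an arguments JSON string
# before sending it to the intercept endpoint. Field names are examples.
mask_args() {
  printf '%s' "$1" | sed -E \
    's/"(password|token|api_key|secret)"[[:space:]]*:[[:space:]]*"[^"]*"/"\1": "[REDACTED]"/g'
}
```

For example, mask_args '{"recipient": "ops@example.com", "password": "hunter2"}' keeps the recipient field but replaces the password value with [REDACTED].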

If the response is {"allowed": false}, stop and inform the user; do not execute. If the response is {"allowed": true}, proceed.
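The check-then-act flow above can be wrapped in a small shell helper. This is a minimal sketch under the assumptions already stated in this document (the /api/intercept endpoint, the x-api-key header, and an {"allowed": true|false} response); the grep-based parsing is a simplification, not an official client, and a real integration should parse the JSON properly (e.g. with jq).

```shell
# Sketch of the intercept gate: returns 0 (proceed) only when the
# response body contains "allowed": true. Requires TRUSTLOOP_API_KEY.
trustloop_check() {
  local tool_name="$1" args_json="$2" response
  response=$(curl -s -X POST https://api.trustloop.live/api/intercept \
    -H "Content-Type: application/json" \
    -H "x-api-key: $TRUSTLOOP_API_KEY" \
    -d "{\"tool_name\": \"$tool_name\", \"arguments\": $args_json}")
  # Naive parse; a real client should use a JSON parser such as jq.
  printf '%s' "$response" | grep -Eq '"allowed"[[:space:]]*:[[:space:]]*true'
}

# Usage:
#   if trustloop_check "send_email" '{"recipient": "ops@example.com"}'; then
#     echo "allowed: proceeding"
#   else
#     echo "blocked: informing the user"
#   fi
```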

Which Tools Require a Check

  • Deleting, moving, or overwriting files
  • Sending emails, messages, or notifications
  • Any financial transaction or payment
  • Posting to social media or external services
  • Bulk operations (more than 5 records)
  • Running shell commands that modify the system
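That triage can be sketched as two helpers. The tool names below are hypothetical placeholders, since the real names depend on your agent's tool registry; the 5-record bulk threshold comes from the list above.

```shell
# Hypothetical triage helpers; map the cases to your agent's real tool names.
requires_check() {
  case "$1" in
    delete_file|move_file|overwrite_file) return 0 ;;    # destructive file ops
    send_email|send_message|send_notification) return 0 ;;
    make_payment|transfer_funds) return 0 ;;             # financial
    post_social|publish_external) return 0 ;;            # external posting
    run_shell) return 0 ;;                               # system-modifying shell
    *) return 1 ;;
  esac
}

# Bulk rule from the list above: more than 5 records means check first.
is_bulk() { [ "${1:-0}" -gt 5 ]; }
```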

Setup

  1. Sign up free at app.trustloop.live
  2. Copy your API key from the dashboard
  3. export TRUSTLOOP_API_KEY=tl_your_key_here

Data retention, privacy policy, and security docs: trustloop.live
Full API reference (rules, kill-switches, approvals): see api-reference.md

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

All four related skills are in the Security category, come from the registry source, and were recently updated:

  • Skill Auditor
    Audit core: a classification taxonomy and a severity scoring function, kept orthogonal. Operates on the whole skill bundle (SKILL.md plus any referenced scri...

  • ISNAD Security Kit
    The ultimate security baseline for autonomous AI agents. Installs the complete ISNAD protocol stack with zero configuration.

  • Openclaw Sec
    AI Agent Security Suite - Real-time protection against prompt injection, command injection, SSRF, path traversal, secrets exposure, and content policy violat...

  • CogDx Calibration Audit
    Run a calibration audit on an AI agent's outputs via Cerebratech CogDx API ($0.05 per call, credits accepted). Use when an agent's stated confidence doesn't...