# Agent Audit Trail Skill

## Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install the skill with:

```sh
npx skills add roosch269/agent-audit-trail
```


Tamper-evident, hash-chained audit logging for AI agents. EU AI Act compliant.

## Why

AI agents act on your behalf. From 2 August 2026, the EU AI Act requires automatic logging, tamper-evident records, and human oversight capability for AI systems. This skill provides all three with zero dependencies.

## Quick Start

1. Add to your agent's workspace:

```sh
cp scripts/auditlog.py /path/to/your/workspace/scripts/
chmod +x /path/to/your/workspace/scripts/auditlog.py
```

2. Log an action:

```sh
./scripts/auditlog.py append \
  --kind "file-write" \
  --summary "Created config.yaml" \
  --target "config.yaml" \
  --domain "personal"
```

3. Verify integrity:

```sh
./scripts/auditlog.py verify
# Output: OK (N entries verified)
```
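The same CLI can be driven from agent code. A minimal sketch, assuming the script path and flags shown in the steps above; the `log_action` helper, its keyword names, and the `"personal"` default are hypothetical conveniences, not part of the skill:

```python
import subprocess

SCRIPT = "./scripts/auditlog.py"  # path from the Quick Start above

def build_append_cmd(kind, summary, target, domain="personal"):
    # Assemble the flags shown in step 2; the "personal" default
    # here is an assumption, not a documented default of the tool.
    return [SCRIPT, "append",
            "--kind", kind,
            "--summary", summary,
            "--target", target,
            "--domain", domain]

def log_action(**kwargs):
    # check=True raises CalledProcessError on a non-zero exit,
    # so a failed audit write is never silently swallowed.
    subprocess.run(build_append_cmd(**kwargs), check=True)
```

Separating command construction from execution keeps the argument list testable without touching a real log file.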

## Compliance Mapping

| EU AI Act Article | Requirement | How This Skill Helps |
| --- | --- | --- |
| Art. 12 Record-Keeping | Automatic event logging | Every action logged with timestamp, actor, domain, target |
| Art. 12 Integrity | Tamper-evident records | SHA-256 hash chaining; any modification breaks the chain |
| Art. 14 Human Oversight | Human approval linkage | `--gate` flag links actions to human approval references |
| Art. 50 Transparency | Auditable records | Human-readable NDJSON, one-command verification |
| Art. 12 Traceability | Chronological ordering | Monotonic `ord` tokens |

## Event Kinds

Use these standardised event types for consistent audit trails:

| Kind | When to Use |
| --- | --- |
| `file-write` | Agent creates or modifies files |
| `exec` | Agent runs a command |
| `api-call` | External API interaction |
| `decision` | AI makes or recommends a decision |
| `credential-access` | Secrets or credentials accessed |
| `external-write` | Agent writes to external systems |
| `human-override` | Human overrides an AI decision |
| `disclosure` | AI identity disclosed to user |

## Full Documentation

See README.md for complete usage, integration examples, security model, and EU AI Act compliance guide.

## Log Format

```json
{
  "ts": "2026-02-24T07:15:00+00:00",
  "kind": "exec",
  "actor": "atlas",
  "domain": "ops",
  "plane": "action",
  "target": "pg_dump production",
  "summary": "Ran database backup",
  "gate": "approval-123",
  "ord": 42,
  "chain": {"prev": "abc...", "hash": "def...", "algo": "sha256(prev\\nline_c14n)"}
}
```
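The `chain` field is what makes the log tamper-evident: each entry's hash covers the previous entry's hash plus a canonical form of the entry itself. A minimal sketch of that check, assuming a hypothetical canonicalization (compact sorted-key JSON with the entry's own `chain.hash` excluded); the real `line_c14n` used by `auditlog.py` may differ:

```python
import hashlib
import json

def c14n(entry: dict) -> str:
    # Hypothetical canonical form: compact JSON, sorted keys,
    # with the entry's own chain.hash removed before hashing.
    e = dict(entry)
    e["chain"] = {k: v for k, v in e.get("chain", {}).items() if k != "hash"}
    return json.dumps(e, sort_keys=True, separators=(",", ":"))

def link_hash(prev: str, entry: dict) -> str:
    # Mirrors the declared algo string: sha256(prev + "\n" + line_c14n)
    return hashlib.sha256((prev + "\n" + c14n(entry)).encode()).hexdigest()

def verify_chain(entries):
    # Walk the log in order; any edited, dropped, or reordered
    # entry changes a hash and breaks every later link.
    prev = ""
    for i, entry in enumerate(entries):
        if entry["chain"]["prev"] != prev or entry["chain"]["hash"] != link_hash(prev, entry):
            return False, i  # first broken link
        prev = entry["chain"]["hash"]
    return True, len(entries)
```

Because each hash folds in the previous one, rewriting entry N silently is impossible without recomputing every hash from N onward, and a single forward pass like `verify_chain` detects the first break.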

## OpenClaw Integration

Add to HEARTBEAT.md:

```markdown
## Audit integrity check
- Run: `./scripts/auditlog.py verify`
  - If fails: alert with line number + hash mismatch
  - If OK: silent
```

## Requirements

- Python 3.9+ (zero external dependencies)
- MIT License

Built with 🔐 by Roosch and Atlas

## Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

## Related Skills

Related by shared tags or category signals.

**Agentshield Audit** (Security)

Trust Infrastructure for AI Agents - Like SSL/TLS for agent-to-agent communication. 77 security tests, cryptographic certificates, and Trust Handshake Protoc...
**AgentMesh Governance** (Security)

AI agent governance, trust scoring, and policy enforcement powered by AgentMesh. Activate when: (1) user wants to enforce token limits, tool restrictions, or...

**CrawSecure** (Security)

Offline security scanner that detects unsafe code patterns in ClawHub skills before installation to help users assess potential risks locally.
