lumin

Observe and debug your OpenClaw agent with Lumin — local-first AI observability. Records every LLM call, tool call, and decision; view at localhost:3000. Use when the user asks: "show my agent traces", "why did my agent fail", "how much did my last run cost", "open lumin", "debug my agent", or "show recent traces".

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install

Run: npx skills add amitbidlan/lumin

Lumin — Local Agent Observability

Lumin records every LLM call, tool call, and decision your OpenClaw agent makes. View them at http://localhost:3000. Data never leaves the machine — everything runs in a single Docker container.

This skill provides three commands that hit Lumin's local HTTP API so you can check status and pull trace data without leaving the chat.

Check if Lumin is running

Run: node $SKILL_DIR/lumin.mjs status

Returns one of:

  • Lumin is running on http://localhost:8000
  • Lumin is not running on …. Start with: docker run -p 3000:3000 -p 8000:8000 zistica/lumin
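The status check can be approximated with a plain TCP probe of the API port. This sketch assumes only what the messages above state (the API listens on localhost:8000); it is not Lumin's actual implementation.

```javascript
// Probe a local port to see whether Lumin's API is reachable.
// Only the port number (8000) comes from the skill's own messages;
// the probe itself is a generic TCP connect, not a Lumin API call.
import net from "node:net";

function checkPort(host, port, timeoutMs = 1000) {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host, port });
    const done = (up) => { socket.destroy(); resolve(up); };
    socket.setTimeout(timeoutMs, () => done(false));
    socket.once("connect", () => done(true));
    socket.once("error", () => done(false));
  });
}

const up = await checkPort("127.0.0.1", 8000);
console.log(up
  ? "Lumin is running on http://localhost:8000"
  : "Lumin is not running. Start with: docker run -p 3000:3000 -p 8000:8000 zistica/lumin");
```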

Show recent traces

Run: node $SKILL_DIR/lumin.mjs traces

Returns the last 5 agent traces formatted like:

Recent agent traces:
  1. openclaw_session  3.2s  $0.0023  OK
  2. openclaw_session  1.8s  $0.0011  OK
  3. openclaw_session  5.1s  $0.0089  ERROR
  …
Open http://localhost:3000/traces for the full list.
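A listing like the one above can be produced by a small formatter. In this sketch the trace field names (name, durationMs, costUsd, status) are assumptions for illustration; Lumin's real response shape may differ.

```javascript
// Format trace records into the numbered listing shown above.
// Field names (name, durationMs, costUsd, status) are assumed
// for illustration, not taken from Lumin's documented schema.
function formatTraces(traces) {
  const lines = traces.slice(0, 5).map((t, i) =>
    `  ${i + 1}. ${t.name}  ${(t.durationMs / 1000).toFixed(1)}s  $${t.costUsd.toFixed(4)}  ${t.status}`
  );
  return ["Recent agent traces:", ...lines].join("\n");
}

console.log(formatTraces([
  { name: "openclaw_session", durationMs: 3200, costUsd: 0.0023, status: "OK" },
  { name: "openclaw_session", durationMs: 1800, costUsd: 0.0011, status: "OK" },
]));
```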

Show one trace's spans

Run: node $SKILL_DIR/lumin.mjs trace <id>

Returns the trace metadata plus every span with type, model, tokens, cost, and any error message.
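Summing a trace's spans gives its total tokens, total cost, and error count. Again a sketch with assumed field names (tokens, costUsd, error), not Lumin's documented schema:

```javascript
// Aggregate token, cost, and error totals across a trace's spans.
// The span field names here are illustrative assumptions.
function summarizeSpans(spans) {
  return spans.reduce(
    (acc, s) => ({
      tokens: acc.tokens + (s.tokens ?? 0),
      costUsd: acc.costUsd + (s.costUsd ?? 0),
      errors: acc.errors + (s.error ? 1 : 0),
    }),
    { tokens: 0, costUsd: 0, errors: 0 }
  );
}

const totals = summarizeSpans([
  { type: "llm", tokens: 512, costUsd: 0.0015 },
  { type: "tool", tokens: 0, costUsd: 0, error: "timeout" },
]);
console.log(totals);
```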

Wire OpenClaw to Lumin

This skill only reads from Lumin. To get your OpenClaw agent's traces into Lumin, install the companion exporter:

npm install @lumin-io/openclaw

…and pass luminProcessor() into your OpenClaw OTel config. See https://github.com/amitbidlan/zistica-lumin/tree/main/packages/integrations/openclaw.
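Wiring might look roughly like the fragment below. Only the package name and luminProcessor() come from this skill; the surrounding OpenClaw OTel config shape is an assumption, so check the linked repo for the real API.

```javascript
// Hypothetical wiring sketch — luminProcessor() is named by this skill,
// but the config shape around it is an assumption, not OpenClaw's API.
import { luminProcessor } from "@lumin-io/openclaw";

export default {
  otel: {
    // Export each span to the local Lumin instance.
    processors: [luminProcessor()],
  },
};
```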

Start Lumin (if not running)

docker run -p 3000:3000 -p 8000:8000 zistica/lumin

Then open http://localhost:3000.

