# Hookaido

## Overview
Implement and troubleshoot Hookaido with a config-first workflow: edit the `Hookaidofile`, validate, run, exercise ingress/pull/exec flows, then diagnose queue health and DLQ behavior.
Treat Hookaido v2.6.0's modular architecture as additive in this skill: keep the existing workflow intact by default, and opt into modules such as `postgres`, gRPC workers, subprocess delivery (`deliver exec`), or release verification only when they materially help the task.
Use conservative, reversible changes and validate before runtime operations.
## Workflow
- Confirm target topology: inbound+pull (HTTP or gRPC), push outbound, subprocess exec, or internal queue, plus the queue backend (`sqlite`, `memory`, or `postgres`).
- Choose runtime mode and ensure `hookaido` exists where tools execute.
  - Host-binary mode: use the install action from `metadata.openclaw.install`.
  - Host fallback: run `bash {baseDir}/scripts/install_hookaido.sh` (pinned `v2.6.0`, SHA256-verified).
  - Public repo/source mode: use the public upstream repo `github.com/nuetzliches/hookaido` via `go install github.com/nuetzliches/hookaido/cmd/hookaido@v2.6.0` when a source-based install is preferred.
  - Docker-sandbox mode: use a sandbox image that already includes `hookaido` (preferred), or install inside the sandbox via `agents.defaults.sandbox.docker.setupCommand`.
  - Keep host install actions available as a fallback and to satisfy `metadata.openclaw.requires.bins`.
- Inspect and update the `Hookaidofile` minimally.
- Run format and validation before starting or reloading:
  - `hookaido config fmt --config ./Hookaidofile`
  - `hookaido config validate --config ./Hookaidofile`
  - `hookaido config validate --config ./Hookaidofile --strict-secrets` when secret refs or Vault-backed config are involved.
- Start runtime and verify health:
  - `hookaido run --config ./Hookaidofile --db ./.data/hookaido.db`
  - `hookaido run --config ./Hookaidofile --postgres-dsn "$HOOKAIDO_POSTGRES_DSN"` when `queue postgres` is selected.
  - `curl http://127.0.0.1:2019/healthz?details=1`
- Validate end-to-end behavior:
  - ingress request accepted and queued
  - consumer `dequeue`/`ack`/`nack`/`extend` path works (HTTP pull, batch `ack`/`nack`, plus gRPC pull when enabled)
- For incidents, inspect backlog and DLQ first, then mutate.
## Task Playbooks

### Configure Ingress and Pull Consumption
- Define a route with explicit auth and pull path (HTTP pull, optional gRPC pull worker listener).
- Keep secrets in env/file refs, never inline.
- Verify route and global pull auth are consistent.
- Test with a real webhook payload and a dequeue/ack cycle, using batch `ack`/`nack` when worker throughput matters.
Prefer this baseline:

```
ingress {
  listen :8080
}

pull_api {
  listen :9443
  grpc_listen :9943  # optional gRPC pull-worker listener
  auth token env:HOOKAIDO_PULL_TOKEN
}

/webhooks/github {
  auth hmac env:HOOKAIDO_INGRESS_SECRET
  pull { path /pull/github }
}
```
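A dequeue/ack/nack cycle against this baseline can be sketched as below. The endpoint paths derive from `pull { path /pull/github }`; the ack/nack payload shape follows the `lease_id` example in the SSE section, and the `max` batch field is an assumption to verify against your Hookaido version.

```shell
# One pull-consumer cycle against the baseline route above.
# DRY_RUN=1 (the default here) prints each request instead of sending it;
# unset it to actually hit the pull API.
BASE="${BASE:-http://localhost:9443/pull/github}"
DRY_RUN="${DRY_RUN:-1}"

req() {
  if [ "$DRY_RUN" = "1" ]; then echo "curl $*"; else curl -sS "$@"; fi
}

# 1. Dequeue a batch of events ("max" as a batch-size field is an assumption).
req -X POST "$BASE/dequeue" \
  -H "Authorization: Bearer $HOOKAIDO_PULL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"max":10}'

# 2. Ack one processed event by lease id (payload shape as in the SSE ack example).
req -X POST "$BASE/ack" \
  -H "Authorization: Bearer $HOOKAIDO_PULL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"lease_id":"lease_xyz"}'

# 3. Nack a failed event so it is redelivered.
req -X POST "$BASE/nack" \
  -H "Authorization: Bearer $HOOKAIDO_PULL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"lease_id":"lease_xyz"}'
```

Running it dry first lets an operator review the exact requests before granting the script a real token.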
### Configure Push Delivery
- Use push delivery only when inbound connectivity to the service is acceptable.
- Set timeout and retry policy explicitly.
- Validate downstream idempotency since delivery is at-least-once.
```
/webhooks/stripe {
  auth hmac env:STRIPE_SIGNING_SECRET
  deliver "https://billing.internal/stripe" {
    retry exponential max 8 base 2s cap 2m jitter 0.2
    timeout 10s
  }
}
```
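The retry directive above compresses several knobs into one line. As an illustrative reading (not taken from Hookaido's documentation), exponential backoff with `base 2s`, `cap 2m`, and `max 8` would produce the schedule below; confirm the exact formula and how jitter is applied against the reference docs.

```shell
# Plausible reading of "retry exponential max 8 base 2s cap 2m jitter 0.2":
# delay_n = min(base * 2^(n-1), cap), with up to +/-20% random jitter applied
# at runtime. The exact formula is an assumption, not quoted from the docs.
base=2 cap=120 max=8
n=1
while [ "$n" -le "$max" ]; do
  d=$(( base << (n - 1) ))          # base * 2^(n-1) seconds
  [ "$d" -gt "$cap" ] && d=$cap     # clamp to the 2m cap
  echo "attempt $n: ${d}s (+/- 20% jitter)"
  n=$(( n + 1 ))
done
```

Under this reading the delays grow 2s, 4s, 8s, ... and the last two attempts sit at the 120s cap, which is useful when estimating how long a failing endpoint has before an event reaches the DLQ.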
### Configure Subprocess Delivery (`deliver exec`)
- Use exec delivery when the target is a local script or binary, not an HTTP service.
- Payload is piped to stdin; metadata arrives as env vars (`HOOKAIDO_ROUTE`, `HOOKAIDO_EVENT_ID`, `HOOKAIDO_ATTEMPT`, etc.).
- Exit code determines retry behavior: `0` = ack, `1`-`125` = retry, `126`/`127` = immediate DLQ.
- `sign` directives are not supported with exec (compile error).
```
/webhooks/github {
  auth hmac {
    provider github
    secret env:GITHUB_WEBHOOK_SECRET
  }
  deliver exec "/opt/hooks/deploy.sh" {
    timeout 30s
    retry exponential max 3 base 1s cap 30s jitter 0.2
    env DEPLOY_ENV production
    env NOTIFY_URL {env.SLACK_WEBHOOK_URL}
  }
}
```
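A minimal handler honoring this contract might look like the sketch below; the checks inside are illustrative, and only the contract stated above is assumed (payload on stdin, `HOOKAIDO_*` env vars, exit codes `0` / `1`-`125` / `126`-`127`).

```shell
# Write a minimal exec handler, then simulate one delivery the way
# Hookaido would invoke it (payload on stdin, metadata in env vars).
cat > ./deploy.sh <<'EOF'
#!/bin/sh
set -u
payload=$(cat)                    # full webhook body from stdin
[ -n "$payload" ] || exit 127     # empty/garbled input: straight to DLQ
echo "route=${HOOKAIDO_ROUTE:-?} event=${HOOKAIDO_EVENT_ID:-?} attempt=${HOOKAIDO_ATTEMPT:-1}" >&2
# Do the real work; a transient failure should exit 1-125 to request a retry.
printf '%s\n' "$payload" > ./last-payload.json || exit 1
exit 0                            # success: ack, event is removed from the queue
EOF
chmod +x ./deploy.sh

# Simulate one delivery (env var values are illustrative).
printf '{"ref":"refs/heads/main"}' \
  | HOOKAIDO_ROUTE=/webhooks/github HOOKAIDO_EVENT_ID=evt_1 HOOKAIDO_ATTEMPT=1 \
    ./deploy.sh
echo "exit=$?"
```

Keeping the permanent-failure path (`exit 127`) separate from the transient path (`exit 1`) is what prevents poison payloads from burning through the retry budget.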
### Configure Provider-Compatible HMAC
- Use `provider github`, `provider gitea`, `provider stripe`, or `provider cituro` for webhook providers with their own signature format.
- Provider mode disables timestamp/nonce replay protection (providers do not send those headers).
- `signature_header`, `timestamp_header`, `nonce_header`, and `tolerance` are forbidden in provider mode (compile error).
```
/webhooks/github {
  auth hmac {
    provider github
    secret env:GITHUB_WEBHOOK_SECRET
  }
  pull { path /pull/github }
}

/webhooks/gitea {
  auth hmac {
    provider gitea
    secret env:GITEA_WEBHOOK_SECRET
  }
  pull { path /pull/gitea }
}

/webhooks/stripe {
  auth hmac {
    provider stripe
    secret env:STRIPE_SIGNING_SECRET
  }
  pull { path /pull/stripe }
}

/webhooks/cituro {
  auth hmac {
    provider cituro
    secret env:CITURO_WEBHOOK_SECRET
  }
  pull { path /pull/cituro }
}
```
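To exercise the `provider github` route, a test request must be signed the way GitHub signs webhooks: HMAC-SHA256 over the raw body, sent as `X-Hub-Signature-256: sha256=<hex>`. The header name follows GitHub's documented scheme, which provider mode is expected to match; the port and body below are illustrative.

```shell
# Compose a GitHub-style signed test request for the /webhooks/github route.
SECRET="${GITHUB_WEBHOOK_SECRET:-testsecret}"   # use the real secret in practice
BODY='{"ref":"refs/heads/main"}'

# HMAC-SHA256 over the exact raw body; awk grabs the hex digest regardless
# of the openssl output prefix ("(stdin)= ..." vs "SHA2-256(stdin)= ...").
SIG="sha256=$(printf '%s' "$BODY" \
  | openssl dgst -sha256 -hmac "$SECRET" \
  | awk '{print $NF}')"

echo "$SIG"    # "sha256=" prefix plus 64 hex characters

# Send it (commented out so this snippet is side-effect free):
# curl -sS -X POST http://localhost:8080/webhooks/github \
#   -H "X-Hub-Signature-256: $SIG" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

Any re-serialization of the body (pretty-printing, trailing newline) changes the digest, so sign exactly the bytes you send.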
### Use SSE Streaming (v2.5.3+)
- SSE replaces polling for real-time webhook delivery: use `GET {pull.path}/stream` instead of repeated `POST .../dequeue`.
- ACK/NACK operations use the same existing POST endpoints; no protocol change.
- Multiple concurrent SSE connections act as competing consumers.
- Configure the keepalive interval (`keepalive`) and max connection duration (`max_duration`) in the route's `pull` block.
```
# Connect SSE stream (persistent, server pushes events)
curl -sS -N "http://localhost:9443/pull/github/stream" \
  -H "Authorization: Bearer $HOOKAIDO_PULL_TOKEN"

# ACK a received event
curl -sS -X POST "http://localhost:9443/pull/github/ack" \
  -H "Authorization: Bearer $HOOKAIDO_PULL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"lease_id":"lease_xyz"}'
```
### Configure Queue Backends
- Default to `sqlite` unless the task explicitly needs ephemeral dev mode or shared Postgres storage.
- Treat `memory` and `postgres` as additive v2 modules, not replacements for existing sqlite workflows.
- When using `postgres`, document the DSN source and validate health plus backlog endpoints after startup.
Prefer these patterns:

```
queue sqlite
queue memory
queue postgres
```
### Operate Queue and DLQ
- Start with health details and backlog endpoints.
- Inspect DLQ before requeue or delete.
- If requeueing many items, explain expected impact and rollback path.
- Require clear operator reason strings for mutating admin calls.
Use:

- `GET /healthz?details=1`
- `GET /backlog/trends`
- `GET /dlq`
- `POST /dlq/requeue`
- `POST /dlq/delete`
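The read-first discipline above can be sketched as a small triage script. The admin base URL reuses the `healthz` address from the workflow section, and the requeue body fields (`ids`, `reason`) are assumptions to verify against the API reference.

```shell
# DLQ triage sketch: read-only diagnostics first, mutation last and guarded.
# DRY_RUN=1 (the default here) prints each request instead of sending it.
ADMIN="${ADMIN:-http://127.0.0.1:2019}"   # assumed to match the healthz address
DRY_RUN="${DRY_RUN:-1}"

req() {
  if [ "$DRY_RUN" = "1" ]; then echo "curl $*"; else curl -sS "$@"; fi
}

# 1. Health and backlog before touching anything.
req "$ADMIN/healthz?details=1"
req "$ADMIN/backlog/trends"

# 2. Inspect DLQ contents and decide scope.
req "$ADMIN/dlq"

# 3. Requeue only with an explicit, specific operator reason.
#    The "ids"/"reason" body fields are assumptions; check the API reference.
req -X POST "$ADMIN/dlq/requeue" \
  -H "Content-Type: application/json" \
  -d '{"ids":["evt_123"],"reason":"downstream outage resolved"}'
```

The same ordering applies to `POST /dlq/delete`: never issue it before the read-only steps have bounded what will be lost.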
### Use MCP Mode for AI Operations
- Default to `--role read` for diagnostics.
- Enable mutations only with explicit operator intent: `--enable-mutations --role operate --principal <identity>`.
- Enable runtime control only for admin workflows: `--enable-runtime-control --role admin --pid-file <path>`.
- Include `reason` for mutation calls and keep it specific.
### Register as Claude Code MCP Plugin

Add to `.claude/settings.json` (or `~/.claude/settings.json` for global use):
```json
{
  "mcpServers": {
    "hookaido": {
      "command": "hookaido",
      "args": [
        "mcp", "serve",
        "--config", "./Hookaidofile",
        "--db", "./.data/hookaido.db",
        "--role", "read"
      ]
    }
  }
}
```
For operate role (queue mutations):
```json
{
  "mcpServers": {
    "hookaido": {
      "command": "hookaido",
      "args": [
        "mcp", "serve",
        "--config", "./Hookaidofile",
        "--db", "./.data/hookaido.db",
        "--enable-mutations",
        "--role", "operate",
        "--principal", "claude"
      ]
    }
  }
}
```
The MCP server exposes structured tools directly — no shell output parsing. Claude Code discovers available tools at startup and uses them with typed parameters.
### Verify Public Releases
- Prefer official release assets from the public Hookaido repo.
- When supply-chain assurance matters, validate checksums, signature material, and provenance before rollout.
- Keep verification optional by default so existing skill flows do not become heavier unless the task requires it.
Use:

```
hookaido verify-release --checksums ./hookaido_v2.6.0_checksums.txt --require-provenance
```
## Validation Checklist

- `hookaido config validate` returns success before runtime start/reload.
- `hookaido config validate --strict-secrets` is used when secret refs, Vault, or public-release rollout validation matters.
- Health endpoint is reachable and reports expected queue/backend state.
- Pull consumer can `dequeue`, `ack`, `nack`, and `extend` with a valid token (HTTP, SSE, and optional gRPC transport), including batch `ack`/`nack` when enabled.
- For push mode, retry/timeout behavior is explicitly configured.
- For exec mode, the handler script is executable, reads stdin, and uses exit codes correctly (`0` = ack, non-zero = retry, `126`/`127` = DLQ).
- For `queue postgres`, runtime is started with `--postgres-dsn` or `HOOKAIDO_POSTGRES_DSN`.
- Any DLQ mutation is scoped, justified, and logged.
## Safety Rules
- Do not disable auth to "make tests pass."
- Do not suggest direct mutations before read-only diagnostics.
- Treat queue operations as at-least-once; require idempotent handlers.
- Keep secrets in `env:` or `file:` refs.
## References

- Read `references/operations.md` for command snippets and API payload templates.