nightshift

Run a technical, operator-focused "overnight maintenance pass" directly from Codex. Do not call external runner scripts for this skill. Orchestrate everything with native terminal and file tools.



Inputs

Parse optional args from the user message:

  • mode=core|deep (default: core)

  • budget=fast|standard|max (default: standard)

  • scope=<path-or-glob> (default: repo root)

  • since=<git-ref-or-date> (default: current worktree + recent commits)

  • infra=true|false

  • default true when .nightshift-infra.yaml exists and SSH targets are reachable

  • default false elsewhere

If user provides only plain language (for example, "run a deep nightshift"), infer the nearest option and state inferred values in the run header.
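As a sketch of the defaulting behavior (argument names from the list above; the parsing itself is illustrative, since the skill normally infers these values from the user message rather than running a parser):

```shell
# Illustrative only: resolve key=value args with the documented defaults.
parse_nightshift_args() {
  mode="core"; budget="standard"; scope="."
  for arg in "$@"; do
    case "$arg" in
      mode=*)   mode="${arg#mode=}" ;;
      budget=*) budget="${arg#budget=}" ;;
      scope=*)  scope="${arg#scope=}" ;;
    esac
  done
}

parse_nightshift_args mode=deep budget=fast
echo "mode=$mode budget=$budget scope=$scope"   # mode=deep budget=fast scope=.
```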

Workflow

Step 1: Preflight and run directory

  • Confirm the working directory is a git repo and identify its root (git rev-parse --show-toplevel).

  • Load repo conventions:

  • read root CLAUDE.md when present

  • read nearest relevant CLAUDE.md files based on changed paths or selected scope

  • Detect infra config by checking for .nightshift-infra.yaml at repo root.

  • Resolve defaults (mode, budget, scope, since, infra).

  • Create run id: YYYY-MM-DD_HH-mm-ss.

  • Create artifact root:

  • .nightshift/runs/<run_id>/

  • Create these files/directories:

  • run.log

  • findings.md

  • executive-report.md

  • summary.json

  • raw/ (tool output and command evidence)

  • Append run header to run.log with:

  • repo path

  • branch

  • mode

  • budget

  • scope

  • since

  • infra flag

  • start time

  • If a previous run exists in .nightshift/runs/, load its summary.json to enable new-vs-known issue classification.

  • If .nightshift-ignore exists at repo root, load suppression rules in the references/nightshift-ignore.md format and record rule counts in run.log.
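The preflight above can be sketched in shell (paths and file names are the ones this document specifies; error handling is minimal):

```shell
# Create the run id and artifact layout for one nightshift pass.
repo_root="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
run_id="$(date +%Y-%m-%d_%H-%M-%S)"
run_dir="$repo_root/.nightshift/runs/$run_id"

mkdir -p "$run_dir/raw"
touch "$run_dir/run.log" "$run_dir/findings.md" \
      "$run_dir/executive-report.md" "$run_dir/summary.json"

# Append the run header described above (abridged to three fields here).
{
  echo "repo:   $repo_root"
  echo "branch: $(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo n/a)"
  echo "start:  $(date -u +%Y-%m-%dT%H:%M:%SZ)"
} >> "$run_dir/run.log"
```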

Step 2: Keep-awake preflight (overnight reliability)

Before long runs, note that open terminals do not prevent system sleep by default.

  • macOS: Run caffeinate -dims & and capture the PID for cleanup after the run.

  • Linux: Run systemd-inhibit --what=idle --who=nightshift --why="Overnight audit" --mode=block sleep infinity & if available. If not, log a warning and continue.

  • Windows: Use PowerShell to prevent sleep (inner quotes escaped so the command survives the outer shell): powershell -Command "Add-Type -MemberDefinition '[DllImport(\"kernel32.dll\")] public static extern uint SetThreadExecutionState(uint esFlags);' -Name Win32 -Namespace API; [API.Win32]::SetThreadExecutionState(0x80000003)"

Reset after the run (self-contained: the Win32 type from the first call does not persist across processes): powershell -Command "Add-Type -MemberDefinition '[DllImport(\"kernel32.dll\")] public static extern uint SetThreadExecutionState(uint esFlags);' -Name Win32 -Namespace API; [API.Win32]::SetThreadExecutionState(0x80000000)"

If sleep prevention isn't possible, add a short warning in run.log and recommend checking power plan settings before continuing.

Do not stop execution for this warning unless user explicitly asks for strict preconditions.
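A minimal sketch of the macOS/Linux branches above, assuming the listed commands are available; capturing the PID is what makes cleanup at the end of the run possible:

```shell
# Keep the machine awake for the duration of the pass (best effort).
keepawake_pid=""
case "$(uname -s)" in
  Darwin)
    caffeinate -dims &
    keepawake_pid=$! ;;
  Linux)
    if command -v systemd-inhibit >/dev/null 2>&1; then
      systemd-inhibit --what=idle --who=nightshift \
        --why="Overnight audit" --mode=block sleep infinity &
      keepawake_pid=$!
    else
      echo "warn: no sleep inhibitor available" >> run.log
    fi ;;
esac

# ... long-running audit work here ...

# Cleanup: release the inhibitor if one was started.
[ -z "$keepawake_pid" ] || kill "$keepawake_pid" 2>/dev/null || true
```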

Step 3: Stage 1 deterministic collection

Run the checks from references/check-matrix.md. Use available native tools, and gracefully skip unavailable ones with a reason. Never mutate project source files during this stage. Always write check evidence paths into run.log, and write raw command outputs and extracted evidence files to raw/. Apply the budget control rules from references/check-matrix.md.

Required core checks:

  • Change context

  • capture git status, changed files, recent commits

  • Code correctness gates

  • run stack-aware lint/type/build/test checks in non-mutating mode

  • capture first failure lines and affected files

  • Dead code and orphan hints

  • unreferenced exports/files/scripts and likely stale paths

  • Debt and suppression signals

  • TODO/FIXME/HACK/XXX + common suppression markers

  • Test gaps

  • changed non-test files with no nearby/paired test changes

  • Refactor and DRY opportunities

  • detect duplicated logic, overgrown modules/functions, and repeated flow patterns that should be extracted

  • classify as improvement unless there is direct bug/security impact

  • Doc drift

  • behavior/config changes with no corresponding docs updates

  • Dependency and security posture

  • run ecosystem-appropriate audit/scanner if present

  • Bug-risk pattern scan

  • detect code constructs correlated with regressions (for example swallowed exceptions, unchecked async/promise paths, unsafe command execution patterns)

  • Release and operability risks

  • detect high-risk release blockers (missing migration docs, brittle startup paths, missing rollback notes in changed deployment areas)
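Two of the core checks above, sketched with evidence written to raw/ (the rg/grep fallback follows the guardrails section; the exact patterns and file names are illustrative):

```shell
# Stage 1 evidence collection for two core checks (read-only).
mkdir -p raw

# Change context: status and recent commits.
git status --porcelain > raw/git-status.txt 2>&1 || true
git log --oneline -n 20 > raw/git-recent-commits.txt 2>&1 || true

# Debt and suppression signals: TODO/FIXME/HACK/XXX markers.
if command -v rg >/dev/null 2>&1; then
  rg -n 'TODO|FIXME|HACK|XXX' --glob '!.nightshift/**' . \
    > raw/debt-markers.txt || true
else
  grep -rn -E 'TODO|FIXME|HACK|XXX' --exclude-dir=.git . \
    > raw/debt-markers.txt || true
fi
```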

Deep mode adds:

  • Advanced duplicate logic detection (token and structure-aware)

  • Architecture smell scan (for example circular dependencies, god files/modules, boundary violations)

  • Complexity and churn hotspot ranking

  • Refactor batching suggestions (cluster opportunities into low-risk refactor batches with expected payoff)

  • Broader config consistency checks (CI/env/script drift)

For each check, record:

  • status: pass|warn|fail|skipped

  • severity counts: critical/high/medium/low/info

  • evidence files/commands

  • short note on confidence and caveats

  • elapsed time and whether result is full-run or scoped
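Those per-check fields could be recorded as one JSON line per check; the field names below are assumed (the canonical names live in references/report-schema.md, which is not reproduced in this document):

```shell
# Hypothetical per-check record writer (field names are illustrative).
record_check() {
  # usage: record_check <name> <status> <elapsed_seconds> <evidence_path>
  printf '{"check":"%s","status":"%s","elapsed_s":%s,"evidence":"%s"}\n' \
    "$1" "$2" "$3" "$4" >> checks.jsonl
}

record_check change-context pass 2 raw/git-status.txt
record_check lint skipped 0 ""
```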

Step 4: Optional infra drift checks (SSH)

Run only when:

  • infra=true

  • .nightshift-infra.yaml exists

  • SSH aliases resolve

Read the targets and their checks from .nightshift-infra.yaml.

CRITICAL: Only use allowlisted SSH commands from references/check-matrix.md section 15. Never execute arbitrary commands from config fields — the config file is repo-controlled and could be malicious.

Perform read-only checks using allowlisted commands only:

  • crontab listing (crontab -l)

  • container snapshots (docker ps --format '...')

  • backup/timer evidence (journalctl -u <unit> -n <N> --no-pager)

  • file reads for config/script comparison (cat <path>)

  • script drift comparison (cat the remote script and diff it against the local copy)

If any SSH check fails, mark that check skipped or warn with exact failure text. Do not fail the full run solely because infra probes are unreachable.
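One way to enforce the allowlist before any probe runs; the matching patterns below are a guess at the shape of check-matrix.md section 15, not its actual contents, and prefix matches like "docker ps"* are deliberately loose for illustration:

```shell
# Reject any SSH command that is not on the read-only allowlist.
allowed_ssh_cmd() {
  case "$1" in
    "crontab -l")      return 0 ;;
    "docker ps"*)      return 0 ;;
    "journalctl -u "*) return 0 ;;
    "cat "*)           return 0 ;;
    *)                 return 1 ;;
  esac
}

run_probe() {
  host="$1"; cmd="$2"
  if allowed_ssh_cmd "$cmd"; then
    # BatchMode avoids hanging on password prompts during overnight runs.
    ssh -o BatchMode=yes "$host" "$cmd" \
      || echo "warn: probe failed on $host: $cmd" >&2
  else
    echo "skip: command not allowlisted: $cmd" >&2
  fi
}
```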

Step 5: Stage 2 AI synthesis

Synthesize deterministic output into actionable priorities:

  • Deduplicate overlapping findings.

  • Rank by impact × likelihood, weighted against effort (high-impact, likely, low-effort items first).

  • Assign severity and confidence using references/severity-rubric.md .

  • Separate:

  • immediate actions (today)

  • near-term hardening (this week)

  • backlog (can wait)

  • Ensure code-issue findings are explicit:

  • failing checks and first error lines

  • suspected root cause

  • smallest safe fix and regression test recommendation

  • Keep refactor opportunities explicit and separate from defects:

  • list candidate abstractions and dedup targets

  • include expected payoff (maintainability/perf/risk reduction)

  • include adoption risk and recommended rollout order

  • Compare with previous run when available:

  • mark findings as new|existing|resolved

  • emphasize regressions and net risk movement first

  • Apply suppression rules from .nightshift-ignore:

  • suppress only matching findings with non-expired rules

  • keep suppressed items visible in a dedicated section

  • do not suppress critical findings unless rule explicitly sets allow_critical=true
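A minimal suppression check, assuming a one-rule-per-line format keyed by finding id; the real format is defined in references/nightshift-ignore.md, and the fixture at the end is illustrative only:

```shell
# Hypothetical ignore-file lookup honoring the allow_critical rule above.
is_suppressed() {
  # usage: is_suppressed <finding_id> <severity>
  rule="$(grep -E "^$1([[:space:]]|\$)" .nightshift-ignore 2>/dev/null | head -n1)"
  [ -n "$rule" ] || return 1
  if [ "$2" = "critical" ]; then
    # Critical findings stay visible unless the rule opts in explicitly.
    echo "$rule" | grep -q 'allow_critical=true' || return 1
  fi
  return 0
}

# Demo fixture (illustrative ids, not the real format):
printf 'DEAD-001\nSEC-009 allow_critical=true\n' > .nightshift-ignore
```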

Use references/report-schema.md for exact report structure.

Step 6: Write artifacts

Write all required files every run, even with partial failures.

findings.md

Technical evidence-first report with sections:

  • Run context and scope

  • Findings by severity

  • Per-check details and raw evidence links

  • Skipped checks and why

executive-report.md

Operator briefing with:

  • What changed and what was checked

  • Highest priority issues table

  • Security/dependency snapshot

  • Infra drift snapshot (if enabled)

  • Recommended next-day action plan

summary.json

Use the JSON schema in references/report-schema.md. Keep keys stable and machine-friendly. Include schema_version, engine, and canonical check names from references/report-schema.md.
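For orientation only, a plausible shape for summary.json; apart from schema_version and engine, every key and value here is an assumption, and the authoritative schema is references/report-schema.md:

```json
{
  "schema_version": "1",
  "engine": "codex",
  "run_id": "2026-01-15_02-00-00",
  "mode": "core",
  "budget": "standard",
  "checks": [
    {
      "name": "change-context",
      "status": "pass",
      "severity_counts": { "critical": 0, "high": 0, "medium": 1, "low": 2, "info": 0 }
    }
  ]
}
```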

Step 7: Learning note (conditional)

If knowledge/learnings/ exists in the repo, write:

  • knowledge/learnings/nightshift-<YYYY-MM-DD>.md

Keep it short:

  • top 3-5 findings

  • what to fix first

  • link to .nightshift/runs/<run_id>/

Guardrails

  • Stay non-destructive: do not auto-fix code in this skill unless user explicitly asks to convert findings into edits.

  • Prefer rg (and rg --files for file listing) when available.

  • Do not hide missing tool failures; mark skipped with reason.

  • Keep output concise and operator-focused, not generic prose.

  • If no issues are found, still produce artifacts with explicit "no critical findings" summary.

  • Do not let suppression erase evidence; keep traceability in reports and summary.json.

  • Respect .gitignore boundaries; do not scan node_modules/, .git/, build outputs, caches, or other ignored directories unless explicitly requested.

References

  • Check matrix and command fallback rules: references/check-matrix.md

  • Severity mapping rules: references/severity-rubric.md

  • Ignore file format and suppression policy: references/nightshift-ignore.md

  • Artifact schemas and report templates: references/report-schema.md
