debt-audit

Technical Debt Scanner — Scan any codebase for TODOs, FIXMEs, duplicated logic, unused exports, oversized files, circular dependencies, and code smells. Generates a prioritized debt report with estimated effort. Use when the user asks to: (1) audit technical debt, (2) find TODOs or FIXMEs, (3) detect code duplication, (4) find unused exports or dead code, (5) check for circular dependencies, (6) assess code quality or code smells, (7) generate a debt report, or any request about codebase health or maintainability.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "debt-audit" with this command: npx skills add youssefmahmod/skills/youssefmahmod-skills-debt-audit

Technical Debt Audit

Systematically scan the current project and produce a prioritized debt report saved to DEBT-REPORT.md.

Workflow Overview

  1. Detect project context (language, source dirs, file extensions)
  2. Run 6 scan phases (parallelize where independent)
  3. Score and prioritize all findings
  4. Generate the report using the template in references/report-template.md

Pre-Scan Setup

  1. Detect language/framework from config files (package.json, tsconfig.json, pyproject.toml, Cargo.toml, go.mod, etc.)
  2. Identify source directories (typically src/, lib/, app/, packages/)
  3. Determine file extensions — see references/patterns.md for per-language extensions and excluded directories
  4. Glob all source files to establish the working file set
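The pre-scan steps above can be sketched in Python. The marker-to-language mapping and extension lists below are illustrative assumptions; the authoritative per-language lists live in references/patterns.md.

```python
from pathlib import Path

# Illustrative mapping from marker config files to (language, extensions);
# the real lists live in references/patterns.md.
MARKERS = {
    "package.json": ("javascript", [".js", ".jsx", ".ts", ".tsx"]),
    "pyproject.toml": ("python", [".py"]),
    "Cargo.toml": ("rust", [".rs"]),
    "go.mod": ("go", [".go"]),
}
EXCLUDED_DIRS = {"node_modules", ".git", "dist", "build", "target", "__pycache__"}

def detect_context(root: str):
    """Return (language, extensions, working file set) for a project root."""
    root_path = Path(root)
    language, exts = "unknown", []
    for marker, (lang, extensions) in MARKERS.items():
        if (root_path / marker).exists():
            language, exts = lang, extensions
            break
    # Glob all source files, skipping excluded directories anywhere in the path.
    files = [
        p for p in root_path.rglob("*")
        if p.suffix in exts and not EXCLUDED_DIRS & set(p.parts)
    ]
    return language, exts, files
```

This collapses steps 1–4 into a single pass; a real scan would also handle monorepos with multiple marker files.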

Phase 1: TODO / FIXME / HACK Comments

Grep with the pattern (?i)\b(TODO|FIXME|HACK|XXX|WORKAROUND)\b across all source files, using output mode "content".

Severity by tag:

  Tag                       Severity
  FIXME                     high
  HACK / WORKAROUND         high
  XXX                       medium
  TODO                      medium
  DEPRECATED (in comments)  low

Effort: S if the action is clear, M if it references an external system, L if vague.
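A minimal sketch of this phase in Python, applying the pattern and the tag-to-severity table above (DEPRECATED handling is omitted for brevity):

```python
import re

# Same pattern as the Grep step, compiled case-insensitively.
TAG_RE = re.compile(r"\b(TODO|FIXME|HACK|XXX|WORKAROUND)\b", re.IGNORECASE)
SEVERITY = {"FIXME": "high", "HACK": "high", "WORKAROUND": "high",
            "XXX": "medium", "TODO": "medium"}

def scan_comments(lines):
    """Yield (line_no, tag, severity, text) for each tagged comment line."""
    for no, line in enumerate(lines, 1):
        m = TAG_RE.search(line)
        if m:
            tag = m.group(1).upper()
            yield no, tag, SEVERITY[tag], line.strip()
```

Effort classification (S/M/L) stays a judgment call on the matched text and is not automated here.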


Phase 2: Oversized Files

Count lines for all source files using wc -l via Bash. Sort descending, take top 40.

  Lines   Severity
  > 800   critical
  > 500   high
  > 300   medium

Effort: M if one large function to extract, L if many sections to split, XL if deeply interconnected.
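The thresholds and the top-40 cut translate directly to code. This sketch assumes line counts were already gathered (e.g. via the wc -l step):

```python
def size_severity(line_count: int):
    """Map a line count to a severity per the thresholds above (None = not flagged)."""
    if line_count > 800:
        return "critical"
    if line_count > 500:
        return "high"
    if line_count > 300:
        return "medium"
    return None

def oversized(files_with_counts, top_n=40):
    """files_with_counts: iterable of (path, line_count).
    Returns up to top_n flagged files, largest first."""
    flagged = [(path, n, size_severity(n))
               for path, n in files_with_counts if size_severity(n)]
    flagged.sort(key=lambda t: t[1], reverse=True)
    return flagged[:top_n]
```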


Phase 3: Duplicated Logic

Three detection strategies:

A. Repeated function names — Grep for function declarations (see references/patterns.md). Flag identical names in different files.

B. Repeated code blocks — Search for string literals, fetch URLs, regex patterns, or error messages appearing 3+ times across different files.

C. Similar utilities — Look for multiple files implementing date formatting, string helpers, HTTP wrappers, auth token handling, or error formatting.

Severity: high if 3+ places, medium if 2 places. Effort: S to extract shared utility, M to consolidate with interface changes, L for significant refactor.
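Strategy B can be sketched as follows — a hedged illustration that flags long string literals appearing in multiple files. The 12-character minimum is an assumed noise filter, not part of the skill:

```python
import re
from collections import defaultdict

# Only literals long enough to be meaningful (assumed threshold: 12 chars).
STRING_RE = re.compile(r'"([^"]{12,})"')

def repeated_literals(files, min_files=2):
    """files: iterable of (path, source_text).
    Returns (literal, paths, severity) for literals repeated across files."""
    seen = defaultdict(set)
    for path, text in files:
        for m in STRING_RE.finditer(text):
            seen[m.group(1)].add(path)
    findings = []
    for literal, paths in seen.items():
        if len(paths) >= min_files:
            severity = "high" if len(paths) >= 3 else "medium"
            findings.append((literal, sorted(paths), severity))
    return findings
```

The same shape works for strategy A by swapping the regex for the function-declaration patterns in references/patterns.md.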


Phase 4: Unused Exports

Two-pass approach:

  1. Collect all exported symbols (see references/patterns.md for export patterns per language)
  2. Verify each symbol has at least one import/reference elsewhere in the codebase

Exclude entry points and public API files — see references/patterns.md for the list.

Severity: medium for unused functions/classes, low for unused types or barrel re-exports. Effort: S for simple removal, M if usage is unclear (library code).

On large codebases (100+ files), limit to the 20 largest/most central modules and note the scope.
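A sketch of the two-pass approach, assuming ES-module-style export syntax for illustration (real per-language patterns are in references/patterns.md). The reference check here is a plain substring scan, which can over-count matches, so treat hits as candidates:

```python
import re

# Illustrative ES-module export pattern; not the full list from references/patterns.md.
EXPORT_RE = re.compile(r"export\s+(?:function|const|class)\s+(\w+)")

def unused_exports(files, entry_points=()):
    """files: iterable of (path, source). Pass 1 collects exported symbols,
    pass 2 checks each symbol is referenced in some other file."""
    sources = dict(files)
    exports = {}  # symbol -> defining path
    for path, text in sources.items():
        if path in entry_points:  # skip entry points / public API files
            continue
        for m in EXPORT_RE.finditer(text):
            exports[m.group(1)] = path
    unused = []
    for symbol, origin in exports.items():
        referenced = any(
            symbol in text for path, text in sources.items() if path != origin
        )
        if not referenced:
            unused.append((symbol, origin))
    return unused
```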


Phase 5: Circular Dependencies

  1. Build a partial import graph for the top 30–50 most imported files (see references/patterns.md for import patterns)
  2. Resolve relative paths to project-absolute paths
  3. Detect cycles up to 4 levels deep, prioritizing direct A ↔ B cycles

  Cycle Length   Severity
  2 files        critical
  3 files        high
  4 files        medium

Effort: M to extract shared types, L to restructure, XL for deep entanglement.
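The cycle search over the import graph can be sketched as a bounded depth-first walk. Graph construction from import statements is assumed already done; this shows only the detection step:

```python
def find_cycles(graph, max_len=4):
    """graph: dict mapping module -> list of imported modules.
    Returns cycles as tuples, shortest first, so direct A <-> B pairs lead."""
    cycles = set()

    def walk(start, node, path):
        if len(path) > max_len:  # cap at 4 levels per the phase spec
            return
        for nxt in graph.get(node, []):
            if nxt == start:
                # Rotate so each cycle is reported once regardless of entry point.
                rot = min(range(len(path)), key=lambda i: path[i])
                cycles.add(tuple(path[rot:] + path[:rot]))
            elif nxt not in path:
                walk(start, nxt, path + [nxt])

    for node in graph:
        walk(node, node, [node])
    return sorted(cycles, key=len)
```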


Phase 6: Code Smells

Quick heuristic pass:

  • Long parameter lists — 5–6 params: medium/S, 7+: high/M
  • Loose typing (TS only) — Grep :\s*any and as any. 4+ per file: medium/S
  • Deep nesting — 4+ indentation levels in largest files. 4: medium/S, 5+: high/M
  • Magic numbers/strings — Hardcoded values (excluding 0, 1, -1) repeated across files: low/S
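The deep-nesting heuristic, for example, can be approximated by measuring leading whitespace. This assumes 4-space indentation; tab-indented files would need separate handling:

```python
def max_nesting(source, indent_width=4):
    """Estimate the deepest indentation level of a source file (spaces-only heuristic)."""
    deepest = 0
    for line in source.splitlines():
        stripped = line.lstrip(" ")
        if stripped:  # skip blank lines
            deepest = max(deepest, (len(line) - len(stripped)) // indent_width)
    return deepest

def nesting_finding(source):
    """Map depth to (severity, effort) per the bullet above; None = not flagged."""
    depth = max_nesting(source)
    if depth >= 5:
        return ("high", "M")
    if depth == 4:
        return ("medium", "S")
    return None
```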

Scoring & Prioritization

Severity weights:  critical=4, high=3, medium=2, low=1
Effort weights:    S=1, M=2, L=4, XL=8

Priority Score = severity_weight × (4 / effort_weight)

  Score   Priority
  12–16   P0 — Fix immediately
  6–11    P1 — Fix soon
  3–5     P2 — Next sprint
  1–2     P3 — Backlog
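The formula and buckets translate directly to code. Note that a low-severity, XL-effort item scores 0.5, below the table's bottom band; it falls into P3 here:

```python
SEVERITY_W = {"critical": 4, "high": 3, "medium": 2, "low": 1}
EFFORT_W = {"S": 1, "M": 2, "L": 4, "XL": 8}

def priority(severity: str, effort: str):
    """Priority Score = severity_weight * (4 / effort_weight), bucketed P0-P3."""
    score = SEVERITY_W[severity] * (4 / EFFORT_W[effort])
    if score >= 12:
        bucket = "P0"
    elif score >= 6:
        bucket = "P1"
    elif score >= 3:
        bucket = "P2"
    else:
        bucket = "P3"  # catches sub-1 scores such as low/XL as well
    return score, bucket
```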

Output

Generate the report following references/report-template.md. Save to DEBT-REPORT.md in the project root.

Execution Notes

  • Phases 1, 2, and 6 are independent — run in parallel
  • Mark uncertain findings with ? suffix on severity
  • Skip irrelevant phases (e.g., no TS-specific smells for Python projects)
  • For 100+ file projects, sample strategically and note scope in report
