
Systematic Debugging

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

Core principle: ALWAYS find the root cause before attempting fixes. Fixing symptoms is failure.

The Iron Law

NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST

If you haven't completed Phase 1, you cannot propose fixes.

The Four Phases

Phase 1: Root Cause Investigation

BEFORE attempting ANY fix:

Read Error Messages Carefully

  • Don't skip past errors or warnings

  • Read stack traces completely

  • Note line numbers, file paths, error codes

Reproduce Consistently

  • Can you trigger it reliably?

  • What are the exact steps?

  • If not reproducible, gather more data - don't guess

Check Recent Changes

  • Git diff, recent commits

  • New dependencies, config changes

  • Environmental differences

Gather Evidence in Multi-Component Systems

When a system has multiple components (CI -> build -> signing, API -> service -> database):

For EACH component boundary:

  • Log what data enters component
  • Log what data exits component
  • Verify environment/config propagation

Run once to gather evidence showing WHERE it breaks, THEN analyze the output to identify the failing component.
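
A minimal sketch of boundary logging in Python, assuming a hypothetical API -> service -> database pipeline (all names here are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
log = logging.getLogger("evidence")

def build_record(payload):    # service: transform the input
    return {"name": payload.get("name", "").strip()}

def save_record(record):      # database: persist (stubbed)
    return 42 if record["name"] else None

def handle_request(payload):  # API boundary
    log.debug("API     in : %r", payload)
    record = build_record(payload)
    log.debug("service out: %r", record)          # what exits the service
    row_id = save_record(record)
    log.debug("db      out: row_id=%r", row_id)   # what exits the database
    return row_id

# One run shows the blank name survives the API and service boundaries,
# then dies at the database boundary.
handle_request({"name": "   "})
```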

Trace Data Flow

See references/root-cause-tracing.md for backward tracing technique.

Quick version: Where does the bad value originate? Keep tracing upward until you find the source. Fix at the source, not the symptom.
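
A toy backward-tracing example with invented names: the crash surfaces in `render`, but tracing the `None` upward lands at `load_config`, the true source:

```python
def load_config(path):
    return None  # SOURCE: a missing file silently becomes None

def get_timeout(config):
    return config["timeout"] if config else None  # passes the None along

def render(timeout):
    return f"timeout={timeout:.1f}"  # SYMPTOM: crashes formatting None

# Symptom fix: guard inside render. Root-cause fix: make load_config
# fail loudly (or return a default) so None never enters the data flow.
render(get_timeout(load_config("missing.toml")))
```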

Phase 2: Pattern Analysis

  • Find Working Examples - Locate similar working code in same codebase

  • Compare Against References - Read reference implementations COMPLETELY, don't skim

  • Identify Differences - List every difference between working and broken

  • Understand Dependencies - What settings, config, environment assumptions?

Phase 3: Hypothesis and Testing

  • Form Single Hypothesis - "I think X is the root cause because Y"

  • Test Minimally - SMALLEST possible change, one variable at a time

  • Verify Before Continuing - Worked? Move to Phase 4. Didn't? Form a NEW hypothesis; don't stack fixes

Phase 4: Implementation

Create Failing Test Case - Simplest reproduction, automated if possible (see the sketch after these steps)

Implement Single Fix - ONE change, no "while I'm here" improvements

Verify Fix - Test passes? No regressions?
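
A minimal sketch of the failing test case, assuming pytest (the skill doesn't mandate a runner) and a hypothetical whitespace bug:

```python
# test_normalize.py -- simplest automated reproduction of the bug
def normalize(name: str) -> str:
    return name.lower()  # BUG: never strips surrounding whitespace

def test_normalize_strips_whitespace():
    # Fails before the fix; passes once normalize uses name.strip().lower()
    assert normalize("  Alice ") == "alice"
```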

If Fix Doesn't Work:

  • Count: How many fixes have you tried?

  • If < 3: Return to Phase 1, re-analyze

  • If >= 3: STOP and question the architecture

If 3+ Fixes Failed: Question Architecture

Patterns indicating an architectural problem:

  • Each fix reveals new shared state/coupling

  • Fixes require "massive refactoring"

  • Each fix creates new symptoms elsewhere

STOP. Discuss with user before attempting more fixes.

Red Flags - STOP and Follow Process

If you catch yourself thinking:

  • "Quick fix for now, investigate later"

  • "Just try changing X and see"

  • "Add multiple changes, run tests"

  • "I'm confident it's X, let me fix that"

  • "One more fix attempt" (when already tried 2+)

  • Proposing solutions before tracing data flow

ALL of these mean: STOP. Return to Phase 1.

Supporting Techniques

Defense-in-Depth

When you fix a bug, validate at EVERY layer:

| Layer | Purpose | Example |
| --- | --- | --- |
| Entry Point | Reject invalid input at the API boundary | `if (!dir) throw new Error('dir required')` |
| Business Logic | Ensure data makes sense for the operation | Validate before processing |
| Environment Guards | Prevent dangerous ops in specific contexts | Refuse `git init` outside tmpdir in tests |
| Debug Instrumentation | Capture context for forensics | Log with stack trace before dangerous ops |

A single validation layer feels sufficient, but different code paths bypass it. Make the bug structurally impossible.
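
A compact sketch of the four layers in Python; `delete_workspace` and its checks are invented for illustration:

```python
import logging
import os
import tempfile

def delete_workspace(dir_path: str) -> None:
    # Entry point: reject invalid input at the API boundary
    if not dir_path:
        raise ValueError("dir_path required")
    # Business logic: the operation must make sense for this data
    if not os.path.isdir(dir_path):
        raise ValueError(f"not a directory: {dir_path}")
    # Environment guard: refuse dangerous ops outside the test sandbox
    if not dir_path.startswith(tempfile.gettempdir()):
        raise RuntimeError("refusing to delete outside tmpdir")
    # Debug instrumentation: capture a stack trace before the dangerous op
    logging.warning("deleting %s", dir_path, stack_info=True)
    ...  # actual deletion goes here
```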

Condition-Based Waiting

Flaky tests guess at timing. Wait for actual conditions instead:

BAD: Guessing at timing

```python
await asyncio.sleep(0.05)
result = get_result()
```

GOOD: Wait for condition

```python
await wait_for(lambda: get_result() is not None)
result = get_result()
```

Pattern:

```python
import asyncio
import time

async def wait_for(condition, timeout_ms=5000):
    start = time.time()
    while True:
        if condition():
            return
        if (time.time() - start) * 1000 > timeout_ms:
            raise TimeoutError("Condition not met")
        await asyncio.sleep(0.01)  # Poll every 10ms
```
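
A hedged usage sketch, assuming the `wait_for` helper above is in scope:

```python
import asyncio

results = []

async def worker():
    await asyncio.sleep(0.03)  # async work with unpredictable timing
    results.append("done")

async def main():
    task = asyncio.create_task(worker())
    await wait_for(lambda: len(results) > 0)  # wait on the condition, not a guessed delay
    assert results == ["done"]
    await task

asyncio.run(main())
```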

Common Rationalizations

| Excuse | Reality |
| --- | --- |
| "Issue is simple, don't need process" | Simple issues have root causes too. The process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | The first fix sets the pattern. Do it right from the start. |
| "I see the problem, let me fix it" | Seeing symptoms != understanding the root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = an architectural problem. Question the pattern, don't fix again. |

Verification

Run: `python scripts/verify.py`

References

  • references/root-cause-tracing.md - Trace bugs backward through call stack

