unit-test-running-coverage-analysis

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install the skill:

Install skill "unit-test-running-coverage-analysis" with this command: npx skills add wizeline/sdlc-agents/wizeline-sdlc-agents-unit-test-running-coverage-analysis

When to Apply This Skill

  • A developer asks "what am I missing?" after initial test generation

  • A coverage report (coverage.py, Istanbul, JaCoCo) is available and needs interpretation

  • Pre-sprint planning requires a gap estimate and effort forecast for a module

  • A team wants to know exactly what it takes to reach a coverage target (e.g. 80%)

Atomic Skills to Load First

Read this file before executing any step:

  • ../unit-test-analyzing-code-coverage/SKILL.md

Covers all four coverage types (line / branch / function / statement), gap severity classification, tool integration guides (coverage.py, Istanbul, JaCoCo, coverlet), anti-pattern detection, and all three output report formats.

Execution Steps

Read references/index.md before executing any step.

Step 1 — Receive Input

Accepted inputs (one or more):

  • Coverage report: JSON (coverage.py / Istanbul), XML (JaCoCo / Cobertura), HTML

  • Source code file (for structural gap analysis when no report is provided)

  • Existing test file (to cross-reference against source for uncalled paths)

  • Coverage target, e.g. "80% line, 75% branch" (defaults to these if not specified)

Step 2 — Analyze

Follow the full analysis workflow from ../unit-test-analyzing-code-coverage/SKILL.md:

  • Parse coverage data or analyze source structure statically

  • Identify: uncovered lines, untaken branches, uncalled functions, unhandled exceptions

  • Classify every gap: CRITICAL | HIGH | MEDIUM | LOW

  • Flag coverage anti-patterns (over-permissive mocks, integration tests masking gaps)
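
The CRITICAL | HIGH | MEDIUM | LOW classification above can be sketched as a lookup keyed by the kind of gap. The ranking used here is an illustrative heuristic (unhandled exception paths worst, stray uncovered lines least); the authoritative rubric lives in the unit-test-analyzing-code-coverage skill and may weigh gaps differently.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    location: str  # e.g. "billing.py:12-18"
    kind: str      # "line" | "branch" | "function" | "exception"

# Illustrative severity heuristic, not the skill's exact rubric.
SEVERITY_BY_KIND = {
    "exception": "CRITICAL",  # untested error paths fail loudest in production
    "branch": "HIGH",         # an untaken branch hides a whole code path
    "function": "MEDIUM",     # an uncalled function is wholly unverified
    "line": "LOW",            # isolated uncovered lines
}

def classify(gap: Gap) -> str:
    """Map a coverage gap to a severity label; unknown kinds default LOW."""
    return SEVERITY_BY_KIND.get(gap.kind, "LOW")
```

Sorting gaps by this label yields the prioritized table that Step 3 emits in coverage_gap_report.md.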

Step 3 — Produce Outputs

Generate all three deliverables defined in the unit-test-analyzing-code-coverage skill.

Output Deliverables

coverage_gap_report.md <- prioritized gap table (severity, location, description)

recommended_tests.md <- specific named tests to write, with descriptions

coverage_delta_estimate.md <- projected coverage % once recommended tests are added
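
The projection in coverage_delta_estimate.md reduces to simple arithmetic once each recommended test is credited with the lines it would newly execute. A minimal sketch of that calculation, assuming line coverage only (branch coverage would need the same formula over branch counts):

```python
def projected_line_coverage(covered: int, total: int, newly_covered: int) -> float:
    """Projected line coverage % after the recommended tests execute
    `newly_covered` additional, currently-uncovered lines."""
    if total == 0:
        return 100.0  # nothing to cover
    return round(100 * (covered + newly_covered) / total, 1)
```

For example, a module with 600 of 1000 lines covered (60%) reaches the 80% default target only if the recommended tests execute 200 currently-uncovered lines.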

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • editing-pptx-files (Automation)

  • sourcing-from-atlassian (Automation)

  • editing-docx-files (Automation)

  • processing-pdfs (Automation)

All four are flagged "Needs Review" by the index, and no upstream summary is provided for any of them.