
/qa — Adversarial Testing

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "qa" with this command: `npx skills add trevoke/org-gtd.el/trevoke-org-gtd-el-qa`


Overview

QA writes tests and runs them. It provides evidence, not opinions. Every claim is backed by a test or command output.

Core principle: Your job is to break things. Write code. Run it. Show evidence.

When to Use

  • Implementation is complete (or a chunk of work needs verification)

  • You want to check acceptance criteria coverage

  • You want adversarial edge-case testing

The Process

  1. Find Context

Look for requirements and design docs in docs/plans/. Read both to understand what was built and what was promised.
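As a sketch of that lookup (the sandbox directory below is purely illustrative; in the real repo you simply read docs/plans/):

```shell
# Throwaway sandbox so the example runs anywhere; a real run lists the repo's docs/plans/.
mkdir -p /tmp/qa-ctx/docs/plans
touch /tmp/qa-ctx/docs/plans/requirements.md /tmp/qa-ctx/docs/plans/design.md
ls /tmp/qa-ctx/docs/plans
```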

  2. Run the Existing Test Suite

```shell
~/bin/eldev etest -r dot
```

Actually run it. Report exact output: how many tests, how many pass, how many fail. If any fail, report them immediately — existing regressions are priority one.
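The exact counts can be pulled straight from the runner's summary line. A hedged sketch (the `Ran N tests` line is simulated here so the example is self-contained; in practice, pipe the real `~/bin/eldev etest -r dot` output through the same `grep`):

```shell
# Simulated ERT-style summary; replace the printf with the real eldev run.
printf 'Ran 42 tests, 41 results as expected, 1 unexpected\n' > /tmp/qa-suite.log
grep -E 'Ran [0-9]+ tests' /tmp/qa-suite.log
```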

  3. Check Acceptance Criteria Coverage

Read acceptance criteria from the requirements doc. For each criterion:

  • Search for a test that exercises it (Grep for keywords)

  • Read the test — does it actually test what the criterion says?

  • Report: covered or gap

| Criterion | Test | Status |
| --- | --- | --- |
| User can activate focus mode | focus-test.el:42 | COVERED |
| Calendar items always visible | (none found) | GAP |
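The search-and-classify loop can be sketched in shell (the keywords, file, and paths are invented for illustration; point the `grep` at the project's real test directory):

```shell
# Tiny sandbox so the example runs anywhere; a real check greps the repo's test/ dir.
mkdir -p /tmp/qa-cov/test
cat > /tmp/qa-cov/test/focus-test.el <<'EOF'
(ert-deftest org-gtd-focus-activate ()
  "User can activate focus mode."
  (should t))
EOF
# One keyword per acceptance criterion; report COVERED or GAP for each.
for kw in focus calendar; do
  if grep -rqi "$kw" /tmp/qa-cov/test; then
    echo "$kw: COVERED"
  else
    echo "$kw: GAP"
  fi
done
```

Keyword search only finds a candidate test; the step above still requires reading it to confirm it exercises what the criterion actually says.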
  4. Write Missing Tests

For each gap, write the test. Follow existing patterns in the codebase.

Run it. Report whether it passes or fails.
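For a gap like "Calendar items always visible", a skeleton might look like this (the test name, docstring, and placeholder assertion are invented; mirror the conventions in the project's existing test files):

```shell
# Write the test file, then confirm it defines the expected test.
cat > /tmp/qa-calendar-test.el <<'EOF'
(require 'ert)
(ert-deftest org-gtd-calendar-always-visible ()
  "Calendar items stay visible when the view is filtered."
  ;; Replace the placeholder with a real assertion against the rendered view.
  (should t))
EOF
grep -c ert-deftest /tmp/qa-calendar-test.el
```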

  5. Write Adversarial Tests

Actively try to break things. Write actual test code; don't just describe it.

Target areas:

  • Nil/empty inputs: What happens with nil arguments, empty strings, empty lists?

  • Boundary values: 0, 1, max, min

  • Missing state: Required properties absent, buffers killed mid-operation

  • Invalid inputs: Wrong types, malformed data

  • Repeated calls: What if the function is called twice in a row?
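The target areas above translate directly into ERT cases. A sketch (every `my-feature-*` name is a hypothetical stand-in for whatever was actually implemented):

```shell
# One adversarial category per test; run the file with the project's eldev invocation.
cat > /tmp/qa-adversarial-test.el <<'EOF'
(require 'ert)
(ert-deftest qa-nil-input ()
  "Nil argument should signal an error, not corrupt state."
  (should-error (my-feature--process nil)))
(ert-deftest qa-empty-list ()
  "Empty input should be handled gracefully, not crash."
  (should (equal (my-feature--process '()) '())))
(ert-deftest qa-repeated-call ()
  "Calling activation twice in a row should be idempotent."
  (my-feature-activate)
  (my-feature-activate)
  (should (my-feature-active-p)))
EOF
grep -c ert-deftest /tmp/qa-adversarial-test.el
```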

  6. Run Everything and Report

```shell
~/bin/eldev etest -r dot
```

Report with evidence — actual test output, not fabricated numbers:

QA Report

Test Suite: [paste actual eldev output]

Acceptance Criteria Coverage

| Criterion | Test | Status |
| --- | --- | --- |
| ... | ... | ... |

Tests Written

  • [test name]: tests [what] — [PASS/FAIL]

Failures Found

  • [test name]: [what failed]
    • Reproduction: [exact command or test invocation]
    • Expected: [what should happen]
    • Actual: [what happened]

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Describing tests without writing code | WRITE the test. Create the file. Run it. |
| Fabricating test results | RUN the tests. Paste actual output. |
| Reporting opinions instead of evidence | Every claim needs a test or command output. |
| Suggesting implementation fixes | Report problems with evidence. Fixing is the implementer's job. |
| Categorizing tests instead of running them | Less taxonomy, more `eldev etest`. |
| Skipping the existing test suite | ALWAYS run the full suite first. Regressions are priority one. |
| Not checking requirements doc | Cross-reference acceptance criteria. That's what "done" means. |

