# SEO Content QA
Use this skill to audit content reliability before or after publishing.
## Requirements
- Python 3.10+ (scripts use modern type syntax)
- `curl` available in PATH (for HTTP HEAD checks)
- No pip dependencies: standard library only
## Quick start
Run the unified runner for a normal draft review:

```sh
python3 skills/seo-geo-qa/scripts/seo_qa_runner.py path/to/article.md --keyword "best email apps"
```
If you know the site's main domain, pass it so internal vs. external links are counted correctly:

```sh
python3 skills/seo-geo-qa/scripts/seo_qa_runner.py path/to/article.md --keyword "best email apps" --site-domain example.com
```
If you want project defaults, pass a lightweight JSON config:

```sh
python3 skills/seo-geo-qa/scripts/seo_qa_runner.py path/to/article.md --keyword "best email apps" --config path/to/seo-geo-qa.json
```
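A minimal sketch of what such a config might contain. The key names below are illustrative guesses based on the runner's command-line flags, not the real schema; the actual options are documented in `references/configuration.md`:

```json
{
  "site_domain": "example.com",
  "report_dir": "qa-reports"
}
```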
## Standard workflow
- Run `seo_qa_runner.py` on the draft.
- Read the markdown report for the human audit trail.
- Use the JSON report for automation or later aggregation.
- Fix critical issues first.
- Re-run until the article reaches PASS (or REVISE in writer mode).
- After publishing, run `post_publish_check.py` on the live URL.
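The automation step above can be sketched in Python: load the JSON report and gate on the verdict. The `verdict` field name and its values are assumptions based on the PASS / FAIL / REVISE verdicts this skill describes, so inspect a generated report before wiring this into CI:

```python
import json

# Hypothetical report shape; the real field names may differ --
# inspect a generated JSON report before relying on this sketch.
def should_block_publish(report: dict) -> bool:
    """Return True when the QA verdict calls for another revision pass."""
    return report.get("verdict") not in ("PASS", "REVISE")

# A FAIL verdict (or a missing one) blocks; PASS and REVISE do not.
sample = json.loads('{"verdict": "FAIL", "critical_issues": 3}')
blocked = should_block_publish(sample)
```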
## Lower-level tools
Use these only when debugging a specific failure mode.
### Link/source verification
```sh
python3 skills/seo-geo-qa/scripts/verify_links.py path/to/article.md
python3 skills/seo-geo-qa/scripts/verify_links.py path/to/article.md --json
```
### SERP gap analysis
Note: Auto-search uses DuckDuckGo's HTML endpoint, which may be blocked or return empty results depending on your environment. For reliable results, pass competitor URLs directly with `--urls`.
```sh
# Auto-search (best-effort, may degrade)
python3 skills/seo-geo-qa/scripts/serp_gap_analyzer.py "best email apps" path/to/article.md

# Recommended: supply competitor URLs directly
python3 skills/seo-geo-qa/scripts/serp_gap_analyzer.py "best email apps" path/to/article.md --urls https://competitor1.com https://competitor2.com
```
### Post-publish page check
```sh
python3 skills/seo-geo-qa/scripts/post_publish_check.py https://example.com/blog/post
python3 skills/seo-geo-qa/scripts/post_publish_check.py https://example.com/blog/post --json
```
## Report persistence
The runner writes timestamped markdown + JSON reports by default.
Default behavior:

- saves to `qa-reports/<article-slug>/` next to the article
- does not overwrite old reports
- uses markdown for human review and JSON for machine state
Override with `--report-dir` or the config file.
## Configuration

Read `references/configuration.md` when you need project-level defaults.

## Source quality

Read `references/source-tiers.md` when you need to decide whether a citation is acceptable.

## Verdict rules

Read `references/verdict-rules.md` when you need to tune PASS / FAIL / REVISE behavior.

## Example output

Read `references/example-report.md` for a real QA report with annotations on how to interpret each section.
## Design intent
This skill is not a writing assistant. It is a reliability layer.
Use scripts for deterministic checks. Use AI judgment for tone, search intent, framing, and final editorial decisions.