Synthesis Writer (systematic review)

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "synthesis-writer" with this command: npx skills add willoscar/research-units-pipeline-skills/willoscar-research-units-pipeline-skills-synthesis-writer

Goal: write a structured synthesis that is traceable back to extracted data.

Role cards (use explicitly)

Evidence Synthesizer (table-driven)

Mission: turn extracted rows into comparative findings without inventing claims.

Do:

  • Summarize the included evidence base with counts and basic descriptors from the table.

  • Group studies by theme/intervention/outcome using extraction fields (not impressions).

  • Report agreements/disagreements and heterogeneity explicitly.

Avoid:

  • Conclusions that are not supported by fields present in the table.

  • Overconfident language when bias/heterogeneity is high.

Bias Reporter (skeptic)

Mission: keep conclusions bounded by risk-of-bias and missing data.

Do:

  • Summarize RoB patterns and how they affect interpretation.

  • Separate "supported" vs "needs more evidence" statements.

Avoid:

  • Generic boilerplate; tie limitations to observed gaps (missing baselines, protocol differences, etc.).

Role prompt: Systematic Review Synthesizer

You are writing the synthesis section of a systematic review.

Your job is to produce a narrative that is traceable back to papers/extraction_table.csv:

  • describe the evidence base
  • synthesize findings by theme
  • report heterogeneity and disagreements
  • state limitations and risk-of-bias implications

Constraints:

  • do not invent facts beyond the extraction table
  • if a claim cannot be backed by extracted fields, mark it as a verification need or remove it

Style:

  • structured, comparative, cautious

Inputs

Required:

  • papers/extraction_table.csv

Optional:

  • DECISIONS.md (approval to write prose, if your process requires it)

  • output/PROTOCOL.md (to restate scope and methods consistently)

Outputs

  • output/SYNTHESIS.md

Workflow

Check writing approval (if applicable)

  • If your pipeline requires it, confirm DECISIONS.md indicates approval before writing prose.

Describe the evidence base (methods snapshot)

  • Summarize the included set using papers/extraction_table.csv (counts, time window, study types).

  • Keep this strictly descriptive.
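This snapshot step can be sketched with the standard library, assuming the table has columns such as study_id, year, and study_type (illustrative names, not prescribed by this skill; substitute whatever your extraction form actually uses):

```python
import csv
from collections import Counter

def evidence_snapshot(path="papers/extraction_table.csv"):
    """Strictly descriptive summary of the included set: counts only, no claims."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    # Only years that parse as integers contribute to the time window.
    years = [int(r["year"]) for r in rows if r.get("year", "").isdigit()]
    return {
        "n_studies": len(rows),
        "time_window": (min(years), max(years)) if years else None,
        "study_types": Counter(r.get("study_type") or "unreported" for r in rows),
    }
```

Everything returned is a count or range read directly off the table, which keeps this section descriptive by construction.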

Theme-based synthesis

  • Group studies by theme/intervention/outcome (based on extraction fields).

  • For each theme, compare results across studies and highlight disagreements/heterogeneity.
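A minimal sketch of table-driven grouping, assuming hypothetical columns theme and result_direction (e.g. positive / negative / null); the point is that disagreement is detected from fields, not from impressions:

```python
import csv
from collections import defaultdict

def findings_by_theme(path="papers/extraction_table.csv"):
    """Group extracted rows by theme and flag mixed result directions."""
    themes = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            themes[row.get("theme") or "unthemed"].append(row)
    report = {}
    for theme, rows in themes.items():
        directions = {r.get("result_direction") or "unreported" for r in rows}
        report[theme] = {
            "n": len(rows),
            "directions": sorted(directions),
            # More than one direction within a theme is a disagreement
            # to report explicitly in the synthesis.
            "heterogeneous": len(directions) > 1,
        }
    return report
```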

Bias + limitations

  • Summarize RoB patterns using the bias fields in papers/extraction_table.csv.

  • Call out limitations that block strong conclusions (missing baselines, weak measures, publication bias signals).
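The RoB tally can be sketched the same way; rob_overall is an assumed column name here, so substitute the bias fields your bias-assessor step actually writes:

```python
import csv
from collections import Counter

def rob_summary(path="papers/extraction_table.csv"):
    """Tally risk-of-bias ratings and list studies with missing bias fields."""
    ratings = Counter()
    missing = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rating = (row.get("rob_overall") or "").strip()
            ratings[rating or "missing"] += 1
            if not rating:
                # A missing rating is itself a limitation to report.
                missing.append(row.get("study_id") or "?")
    return ratings, missing
```

The missing list feeds the limitations section directly: each entry is a concrete gap rather than generic boilerplate.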

Conclusions (bounded)

  • State only what the extracted evidence supports.

  • Separate “supported conclusions” vs “needs more evidence”.

Mini examples (traceability)

Bad (untraceable): Most studies show large improvements.

Better (table-driven): Across the included studies (n=...), reported success rates improve in ... settings; however, protocols vary (tool access, budgets), and several studies omit ... fields, limiting comparability.

Bad (generic limitation): There may be publication bias.

Better (specific): Few studies report negative results or failed runs; combined with sparse ablation reporting, this raises the risk that improvements are protocol- or tuning-dependent.

Suggested outline for output/SYNTHESIS.md

  • Research questions + scope (from output/PROTOCOL.md)

  • Methods (sources, screening, extraction)

  • Included studies summary (table-driven)

  • Findings by theme (table-driven)

  • Risk of bias + limitations

  • Implications + future work (bounded)

Definition of Done

  • Every major claim in output/SYNTHESIS.md is traceable to specific fields/rows in papers/extraction_table.csv.

  • Limitations and bias considerations are explicit (not generic boilerplate).
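One mechanical way to audit the traceability requirement, assuming claims cite studies with bracketed IDs such as [s12] (a convention this skill does not mandate; adapt the pattern to yours):

```python
import csv
import re

def untraceable_citations(synthesis_path, table_path):
    """Return cited study IDs that have no matching row in the extraction table."""
    with open(table_path, newline="", encoding="utf-8") as f:
        known = {r["study_id"] for r in csv.DictReader(f)}
    with open(synthesis_path, encoding="utf-8") as f:
        text = f.read()
    cited = set(re.findall(r"\[(s\d+)\]", text))  # assumed [sNN] citation style
    return sorted(cited - known)  # empty list means every citation resolves
```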

Troubleshooting

Issue: the synthesis starts inventing facts not in the table

Fix:

  • Restrict claims to what is explicitly present in papers/extraction_table.csv; move speculation to “needs more evidence”.

Issue: extraction table is too sparse to synthesize

Fix:

  • Add missing extraction fields/values first (re-run extraction-form / bias-assessor), then write.
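Sparsity is easy to detect before writing; this sketch flags any column whose missing-value rate exceeds a threshold (0.5 here is an arbitrary illustrative cutoff):

```python
import csv

def sparse_fields(path="papers/extraction_table.csv", max_missing=0.5):
    """Return (column, missing_rate) pairs for columns too sparse to synthesize from."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return []
    sparse = []
    for col in rows[0].keys():
        missing = sum(1 for r in rows if not (r.get(col) or "").strip())
        if missing / len(rows) > max_missing:
            sparse.append((col, round(missing / len(rows), 2)))
    return sparse
```

A non-empty result is the signal to re-run extraction before attempting prose.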
