content-experimentation-best-practices

Content experimentation and A/B testing guidance covering experiment design, hypotheses, metrics, sample size, statistical foundations, CMS-managed variants, and common analysis pitfalls. Use this skill when planning experiments, setting up variants, choosing success metrics, interpreting statistical results, or building experimentation workflows in a CMS or frontend stack.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Installation

Install the "content-experimentation-best-practices" skill with:

npx skills add sanity-io/agent-toolkit/sanity-io-agent-toolkit-content-experimentation-best-practices

Content Experimentation Best Practices

Principles and patterns for running effective content experiments to improve conversion rates, engagement, and user experience.

When to Apply

Reference these guidelines when:

  • Setting up A/B or multivariate testing infrastructure
  • Designing experiments for content changes
  • Analyzing and interpreting test results
  • Building CMS integrations for experimentation
  • Deciding what to test and how

Core Concepts

A/B Testing

Comparing two variants (A vs B) to determine which performs better.
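A common prerequisite for A/B testing is stable variant assignment: a given user should see the same variant on every visit. A minimal sketch, assuming a stable user identifier is available (the function and experiment names here are illustrative, not part of any particular platform's API):

```python
# Deterministic variant assignment via hashing: the same user ID and
# experiment name always map to the same bucket, with no stored state.
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Bucket a user into a variant for an experiment, deterministically."""
    key = f"{experiment}:{user_id}".encode()
    # Use a cryptographic hash so buckets are evenly and stably distributed
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same variant for a given experiment
assert assign_variant("user-42", "hero-copy") == assign_variant("user-42", "hero-copy")
```

Including the experiment name in the hash key keeps bucketing independent across experiments, so a user in variant B of one test is not systematically in variant B of every other test.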

Multivariate Testing

Testing multiple variables simultaneously to find optimal combinations.

Statistical Significance

A result is statistically significant when it is unlikely to have arisen from random chance alone, conventionally judged by a p-value below a pre-set threshold (e.g., 0.05). Note that a p-value is not the probability that the result is due to chance; it is the probability of seeing data at least this extreme if there were truly no difference.
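For conversion-rate comparisons, significance is often assessed with a two-proportion z-test. A standard-library-only sketch (the counts below are illustrative, not real data):

```python
# Two-proportion z-test for comparing conversion rates between variants.
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Illustrative counts: 120/2400 conversions for A vs. 150/2400 for B
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
significant = p < 0.05  # conventional 5% threshold
```

With these illustrative numbers the lift looks promising but does not clear the 0.05 threshold, which is exactly the situation where premature stopping or cherry-picking leads to false positives.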

Experimentation Culture

Making decisions based on data rather than opinions, avoiding the HiPPO ("highest paid person's opinion") trap.

Resources

Start with the resource that matches the problem at hand (design, statistics, CMS integration, or pitfalls). See resources/ for detailed guidance:

  • resources/experiment-design.md — Hypothesis framework, metrics, sample size, and what to test
  • resources/statistical-foundations.md — p-values, confidence intervals, power analysis, Bayesian methods
  • resources/cms-integration.md — CMS-managed variants, field-level variants, external platforms
  • resources/common-pitfalls.md — 17 common mistakes across statistics, design, execution, and interpretation
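The sample-size topic covered in resources/experiment-design.md can be sketched with a standard power-analysis approximation for two proportions. This is a rough estimate assuming a two-sided alpha of 0.05 and 80% power; the baseline and lift values are illustrative:

```python
# Approximate required sample size per variant to detect an absolute lift
# in conversion rate, using the normal-approximation formula.
import math

def sample_size_per_variant(baseline, lift, z_alpha=1.96, z_power=0.84):
    """Approximate n per variant to detect `lift` over `baseline` rate.

    z_alpha=1.96 corresponds to a two-sided alpha of 0.05;
    z_power=0.84 corresponds to 80% power.
    """
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / lift ** 2
    return math.ceil(n)

# Detecting a 1-point absolute lift from a 5% baseline (5% -> 6%)
n = sample_size_per_variant(baseline=0.05, lift=0.01)
```

Small absolute lifts on low baseline rates require thousands of users per variant, which is why fixing the sample size before launch, rather than stopping when the p-value dips below 0.05, matters.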

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals (category: Automation). No summaries were provided by the upstream source; repository sources need review.

  • sanity-best-practices
  • seo-aeo-best-practices
  • content-modeling-best-practices