cro-methodology

Audit websites and landing pages for conversion issues and design evidence-based A/B tests. Use when the user mentions "landing page isn't converting", "conversion rate", "A/B test", "why visitors leave", or "objection handling". Covers funnel mapping, persuasion assets, and objection/counter-objection frameworks. For overall marketing strategy, see one-page-marketing. For usability issues, see ux-heuristics.


Install skill "cro-methodology" with this command: npx skills add wondelai/skills/wondelai-skills-cro-methodology

CRO Methodology

Scientific, customer-centric approach to conversion rate optimization based on the CRE Methodology(TM). Extraordinary improvements come from understanding WHY visitors don't convert, not from copying competitors or applying generic tips.

Core Principle

Don't guess -- discover. The methodology rejects "best practices" and "magic buttons" in favor of evidence-based optimization. Most websites underperform not because of bad design, but because no one has systematically researched why visitors leave without converting.

The foundation: Every visitor who doesn't convert has a reason. Your job is to discover those reasons through research, then systematically eliminate them with evidence and proof. This customer-centric approach consistently outperforms intuition, competitor copying, and "expert" opinions.

Scoring

Goal: 10/10. When reviewing or creating landing pages, funnels, or conversion flows, rate them 0-10 based on adherence to the principles below. A 10/10 means full alignment with all guidelines; lower scores indicate gaps to address. Always provide the current score and specific improvements needed to reach 10/10.

The CRO Frameworks

1. The CRO Process

Core concept: A systematic 9-step process for optimizing conversion rates, moving from defining success metrics through research, experimentation, and scaling wins across the business.

Why it works: Random optimization efforts fail because they skip the critical research steps. The CRE process forces you to understand visitors before changing anything, ensuring changes are based on evidence rather than opinion.

Key insights:

  • Define success metrics aligned with business KPIs before touching any page
  • Map the entire conversion funnel to find "blocked arteries" (high-traffic underperforming paths) and "missing links" (absent funnel stages)
  • Understand visitors in three dimensions: who they are (types and intentions), what blocks them (UX problems), and what stops them (objections)
  • Gather market intelligence from competitors, reviews, and other industries
  • Prioritize ideas using ICE scoring (Impact, Confidence, Ease) before testing
  • Create bold experimental designs based on research, not "meek tweaks"
  • Run experiments with proper statistical rigor (95% confidence minimum, full business cycles)
  • Scale wins across landing pages, ad copy, email sequences, and offline materials
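The funnel-mapping step above can be sketched as a short script: given visitor counts at each stage, compute the pass-through rate at every step and flag the worst one -- the "blocked artery." All stage names and numbers here are invented for illustration.

```python
# Find the "blocked artery": the funnel step with the worst pass-through rate.
# Stage names and visitor counts are illustrative, not from any real funnel.
funnel = [
    ("Landing page", 50_000),
    ("Product page", 15_000),
    ("Cart", 4_500),
    ("Checkout", 1_800),
    ("Purchase", 900),
]

def blocked_artery(stages):
    """Return (from_stage, to_stage, pass_rate) for the worst step."""
    steps = []
    for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
        steps.append((name_a, name_b, n_b / n_a))
    return min(steps, key=lambda s: s[2])

worst = blocked_artery(funnel)
print(f"Biggest drop: {worst[0]} -> {worst[1]} ({worst[2]:.0%} continue)")
```

Prioritizing the step with the lowest pass-through rate is a simplification; in practice you would weight by traffic volume and compare against benchmarks, as the process describes.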

Product applications:

| Context | CRO Process Step | Example |
| --- | --- | --- |
| Landing page audit | Steps 1-3: Define goals, map funnel, research visitors | Identify that 70% of traffic bounces because value prop is unclear |
| Checkout optimization | Step 2: Map funnel for blocked arteries | Discover shipping cost shock causes 40% cart abandonment |
| New feature launch | Steps 6-8: Strategize, design, experiment | A/B test two positioning approaches before full rollout |
| Email sequence | Step 9: Scale wins | Apply winning objection-handling copy from landing page to drip emails |
| Competitor response | Step 4: Market intelligence | Transfer proven strategies from adjacent industries |

Copy patterns:

  • "What's preventing you from [action] today?" (exit survey question to discover objections)
  • "Here's what [X] customers found..." (counter-objection with social proof)
  • Document hypothesis: "If we [change X], then [metric Y] will improve because [reason from research]"
  • Always calculate required sample size BEFORE starting any test

Ethical boundary: Never manipulate test results or cherry-pick data. Report all tests, including failures, and wait for genuine statistical significance.

See: testing-methodology.md for detailed ICE scoring, A/B vs. multivariate guidance, and statistical rigor.

2. Customer Research & Objections

Core concept: Visitors don't convert for specific, discoverable reasons. Research methods -- exit surveys, chat logs, support tickets, sales calls, reviews -- reveal the "voice of the customer" and their real objections.

Why it works: Companies guess why visitors leave, but guesses are almost always wrong. Direct research consistently uncovers objections that teams never anticipated, and the language customers use is more persuasive than any copywriter's invention.

Key insights:

  • Primary sources (exit surveys, live chat logs, support tickets, sales call recordings) give you direct visitor language
  • Secondary sources (reviews, social media, competitor analysis) reveal industry-wide objections
  • Objections fall into two categories: explicit ("too expensive") and implicit ("I'm not sure I'll follow through")
  • The "Big 5" universal objections are Trust, Price, Fit, Timing, and Effort
  • Post-purchase surveys ("What almost stopped you from buying?") reveal the objections that matter most
  • Non-converter surveys should ask ONE question for maximum response rate
  • Quantitative research (analytics, heatmaps) shows WHERE problems are; qualitative research (surveys, interviews) shows WHY

Product applications:

| Context | Research Method | Example |
| --- | --- | --- |
| Exit intent | On-site survey (Hotjar, Qualaroo) | "What's preventing you from signing up today?" |
| Post-purchase | Email survey within 7 days | "What almost stopped you from buying?" |
| Objection mining | Support ticket analysis | Search for "but", "however", "worried about" patterns |
| Voice of customer | Sales call recordings | Capture exact language customers use to describe problems |
| Competitive gaps | Review mining (yours and competitors') | Negative reviews = unaddressed objections |
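The objection-mining method above (searching support text for hedge words) can be sketched in a few lines; the marker list and ticket texts are invented examples, not a definitive vocabulary.

```python
import re
from collections import Counter

# Hedge words that often introduce an objection in support text (illustrative list).
OBJECTION_MARKERS = ["but", "however", "worried about", "not sure", "concerned"]

def mine_objections(tickets):
    """Count how often each objection marker appears across ticket texts."""
    counts = Counter()
    for text in tickets:
        lower = text.lower()
        for marker in OBJECTION_MARKERS:
            counts[marker] += len(re.findall(r"\b" + re.escape(marker) + r"\b", lower))
    return counts

tickets = [
    "Love the demo, but I'm worried about the setup time.",
    "Looks good. However, I'm not sure my team will adopt it.",
]
print(mine_objections(tickets).most_common())
```

The counts only show you where to dig; the sentences around each hit are what go into the O/CO table as voice-of-customer language.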

Copy patterns:

  • Use exact customer language in headlines and body copy (more persuasive than polished marketing copy)
  • "What's the one thing we could change to make you [action]?"
  • "How would you describe [product] to a friend?" (reveals positioning in customer terms)
  • Ask open-ended questions for discovery; save multiple choice for validation

Ethical boundary: Respect customer privacy in research. Anonymize data, get consent for recordings, and don't survey so aggressively that you degrade the user experience.

See: RESEARCH.md for tools, survey questions, and data analysis methods.

3. Persuasion Assets

Core concept: Every company has overlooked proof elements -- testimonials not displayed, awards not mentioned, statistics not highlighted, guarantees not prominent, team credentials hidden. These are "persuasion assets" that must be inventoried, acquired, and displayed.

Why it works: Visitors make decisions based on evidence and proof, not claims. A bold claim without proof is just noise. A modest claim with overwhelming proof is irresistible. Most companies sit on goldmines of proof they never use.

Key insights:

  • Audit five categories: Credentials & Authority, Social Proof, Risk Reversal, Data & Specificity, Process & Methodology
  • Create a "wish list" for missing assets and actively acquire them (request testimonials, apply for awards, compile statistics)
  • The "proof sandwich" structure: Claim (bold promise) then Proof (evidence) then Reinforcement (secondary proof)
  • Hierarchy of proof from strongest to weakest: specific results with context, named testimonials with photos, case studies, statistics, logos/badges, generic testimonials
  • Place proof at points of friction, not hidden in FAQs
  • Specific numbers beat round numbers ("47,832 customers" beats "About 50,000")
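The five-category audit can be tracked as a simple inventory; a minimal sketch, assuming assets are kept per category in a plain dict (all inventory entries here are hypothetical), that reports which categories are still empty and belong on the wish list:

```python
# Persuasion-asset audit across the five categories named above.
# The example inventory is hypothetical.
CATEGORIES = [
    "Credentials & Authority",
    "Social Proof",
    "Risk Reversal",
    "Data & Specificity",
    "Process & Methodology",
]

inventory = {
    "Social Proof": ["Named testimonial with photo", "Logo bar"],
    "Risk Reversal": ["30-day money-back guarantee"],
    "Data & Specificity": ["47,832 customers served"],
}

def wish_list(inventory):
    """Categories with no assets yet -- targets for active acquisition."""
    return [c for c in CATEGORIES if not inventory.get(c)]

print(wish_list(inventory))
```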

Product applications:

| Context | Persuasion Asset | Example |
| --- | --- | --- |
| Landing page header | Logo bar + rating | "Trusted by 10,000+ companies" with 5 recognizable logos |
| Pricing page | Risk reversal | "30-day money-back guarantee, no questions asked" |
| Product page | Specific testimonial | Photo + name + company + "Increased conversion by 47% in 3 weeks" |
| Checkout flow | Trust badges near forms | Security certification, payment logos, guarantee seal |
| About page | Team credentials | Years of experience, certifications, publications, patents |

Copy patterns:

  • "Here's how we did it for [Company X]..." (case study proof)
  • "And here's what their CEO says about working with us..." (testimonial reinforcement)
  • "[Specific number] businesses trust us" (not "thousands of customers")
  • Lead with benefits, not features: "Never delete another photo" beats "256GB storage"

Ethical boundary: Never fabricate testimonials, inflate statistics, or display fake trust badges. All proof must be genuine and verifiable.

See: PERSUASION.md for the full persuasion assets checklist and psychological triggers.

4. The O/CO Framework

Core concept: The Objection/Counter-Objection (O/CO) table is the core CRE technique. Create a two-column table mapping every visitor objection to specific, evidence-backed counter-objections.

Why it works: Visitors arrive with objections. If the page doesn't address them, visitors leave. The O/CO framework ensures no objection goes unanswered, and counter-objections are placed exactly where objections naturally arise during the reading flow.

Key insights:

  • Don't guess objections -- research them from surveys, chat logs, support tickets, and sales calls
  • Implicit objections (ones visitors won't admit) require "CO Only" approach: address the objection without stating it
  • Place counter-objections at the point of friction (credit card objection near payment form), not buried in FAQ
  • Address primary objections above the fold, secondary objections in the flow
  • Use multiple formats for the same counter-objection: text, video, testimonial, data
  • Canned support responses are goldmines of tested counter-objections

Product applications:

| Context | Objection Type | O/CO Example |
| --- | --- | --- |
| Trust | "Why should I believe you?" | Specific testimonials, media logos, awards, money-back guarantee |
| Price | "Is it worth the money?" | ROI calculator, cost comparison vs. alternatives, payment plans |
| Fit | "Will it work for MY situation?" | Case studies from similar customers, segmented landing pages, free trial |
| Timing | "Why should I act now?" | Cost of delay calculation, genuine limited-time offers, seasonal relevance |
| Effort | "How hard will this be?" | "Done for you" framing, "Set up in 5 minutes", step-by-step breakdown |
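An O/CO table is small enough to maintain as structured data alongside the page copy. A minimal sketch (objections, counters, and placements are all illustrative) that flags any objection still lacking an evidence-backed counter-objection or an on-page placement:

```python
# A minimal O/CO table: each researched objection maps to a counter-objection
# and the on-page location where it should appear. All entries are illustrative.
oco_table = [
    {"objection": "Is it worth the money?",
     "counter": "ROI calculator plus cost comparison vs. alternatives",
     "placement": "pricing section"},
    {"objection": "Will it work for MY situation?",
     "counter": "Case studies from similar customers",
     "placement": "below the fold, before CTA"},
    {"objection": "How hard will this be?",
     "counter": None,  # still missing -- no objection should go unanswered
     "placement": None},
]

def unanswered(table):
    """Return objections with no counter-objection or no placement yet."""
    return [row["objection"] for row in table
            if not row["counter"] or not row["placement"]]

print(unanswered(oco_table))
```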

Copy patterns:

  • Bad (stating implicit objection): "Worried you're too lazy to learn a language?"
  • Good (CO Only): "Let the audio do the work for you."
  • "What's preventing you from signing up today?" (survey to discover objections)
  • "What almost stopped you from buying?" (post-purchase survey to validate O/CO table)

Ethical boundary: Address real objections with honest counter-objections. Never dismiss legitimate concerns or use deception to overcome valid hesitations.

See: OBJECTIONS.md for the full O/CO framework, research methods, and counter-objection techniques.

5. Hypothesis Design

Core concept: Every experiment needs a documented hypothesis linking a specific change to an expected outcome with a reason grounded in research. Prioritize using ICE scoring (Impact, Confidence, Ease).

Why it works: Without a hypothesis, you're just changing things randomly. The hypothesis forces you to articulate WHY a change should work, which means it must be grounded in customer research. ICE scoring prevents teams from wasting time on low-impact "meek tweaks."

Key insights:

  • Hypothesis format: "If we [change X], then [metric Y] will improve because [reason based on research]"
  • Define primary metric (determines winner), secondary metrics (additional monitoring), and guardrail metrics (must not decrease)
  • ICE scores: Impact (1-10: could this double conversion?), Confidence (1-10: is research backing strong?), Ease (1-10: how easy to implement?)
  • Make BOLD changes, not "meek tweaks" -- small changes rarely reach statistical significance and waste resources
  • Before testing, ask: "Could this 10x our results?" If not, reconsider priority
  • Worth testing: complete page redesign, new value proposition, fundamentally different offer
  • Not worth testing: button color, font size, image swap
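ICE scoring as used here is just the average of the three 1-10 ratings. A minimal sketch, with candidate tests echoing this section's examples:

```python
# ICE prioritization: average Impact, Confidence, Ease (each 1-10) and sort.
def ice(impact, confidence, ease):
    return round((impact + confidence + ease) / 3, 1)

candidates = [
    ("Headline rewrite using customer language", ice(8, 9, 10)),
    ("Video testimonial for price objection",   ice(7, 7, 6)),
    ("One-page checkout redesign",              ice(9, 6, 3)),
    ("Button color change",                     ice(2, 2, 10)),
]

for name, score in sorted(candidates, key=lambda c: c[1], reverse=True):
    print(f"{score:4.1f}  {name}")
```

Note how the button-color test scores high on Ease but is buried by its low Impact and Confidence -- exactly the "meek tweak" the methodology tells you to deprioritize.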

Product applications:

| Context | Hypothesis Example | ICE Score |
| --- | --- | --- |
| Headline rewrite | "If we use customer language from surveys, conversion will increase because visitors see their own words reflected" | I:8, C:9, E:10 = 9.0 |
| Video testimonial | "If we add video testimonial addressing price objection, signups will increase because visitors need trust proof" | I:7, C:7, E:6 = 6.7 |
| Checkout redesign | "If we simplify checkout to one page, completion will increase because analytics show 40% drop at step 2" | I:9, C:6, E:3 = 6.0 |
| Button color | "If we change button from blue to green, clicks will increase because green means go" | I:2, C:2, E:10 = 4.7 |

Copy patterns:

  • "Based on our research, visitors' #1 objection is [X]. This test addresses it by [Y]."
  • Document before: hypothesis, primary metric, sample size, duration, traffic allocation
  • Document after: raw numbers, confidence interval, practical significance, learnings, next steps
  • Every test adds to organizational knowledge regardless of outcome

Ethical boundary: Report all test results honestly, including failures. Never cherry-pick data or run tests until you get the result you want.

See: testing-methodology.md for ICE scoring tables and detailed prioritization.

6. A/B Testing Methodology

Core concept: Run controlled experiments comparing page versions to determine which performs better, using proper statistical rigor to ensure results are real, not random noise.

Why it works: Without controlled experiments, you can't distinguish real improvements from random variation. Proper A/B testing methodology prevents the most common errors: peeking and stopping early, insufficient sample size, ignoring practical significance, and the multiple comparison problem.

Key insights:

  • Calculate required sample size BEFORE starting (inputs: baseline rate, minimum detectable effect, 80% power, 95% significance)
  • Run for at least one full business cycle (1-2 weeks) including weekdays AND weekends
  • Never peek at results and stop early -- this inflates false positive rates dramatically
  • 95% confidence minimum (p-value less than 0.05) before calling a winner
  • A statistically significant 0.1% lift isn't worth implementation complexity (practical significance matters)
  • Start with A/B tests; only move to multivariate when you have 100k+ monthly visitors and a proven winning page
  • A failed test that teaches you something is more valuable than a winning test you don't understand
  • Promote winners to new control and iterate
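The up-front sample-size calculation can be done with only Python's standard library. This sketch uses the common two-proportion approximation at 80% power and 95% (two-sided) significance -- an approximation, not the exact formula any particular testing platform uses -- and the baseline and lift figures are illustrative.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, power=0.80, alpha=0.05):
    """Approximate visitors needed per variant for a two-proportion A/B test.

    baseline: control conversion rate (e.g. 0.05 for 5%)
    lift: relative minimum detectable effect (e.g. 0.5 for +50%)
    """
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Illustrative: 5% baseline, looking for a bold +50% relative lift.
n = sample_size_per_variant(0.05, 0.5)
print(f"{n} visitors per variant ({2 * n} total)")
```

Halving the detectable lift roughly quadruples the required sample, which is why the methodology favors bold changes on low-traffic sites.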

Product applications:

| Context | Test Type | Example |
| --- | --- | --- |
| Concept validation | A/B test (2-4 variants) | Test two fundamentally different page layouts based on different customer insights |
| Element optimization | Multivariate (100k+ visitors) | Test 3 headlines x 3 images x 2 CTAs on proven winning page |
| Low traffic | Bold A/B test | Make dramatic changes detectable with smaller samples (~4,000 visitors for 50% lift) |
| High traffic | Rapid iteration | Run parallel tests on non-overlapping pages, 10-20 tests/month |
| Post-test | Scale wins | Apply winning insights across landing pages, ad copy, email sequences |
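Checking the 95% confidence requirement can be sketched with a pooled two-proportion z-test -- a standard approximation; real platforms may use different statistics. The conversion counts below are invented.

```python
import math
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative numbers: control converts 500/10,000, challenger 600/10,000.
p = ab_test_p_value(500, 10_000, 600, 10_000)
print(f"p = {p:.4f}  ->  {'significant' if p < 0.05 else 'keep running'}")
```

Run this once, at the pre-registered sample size -- recomputing the p-value every day and stopping at the first dip below 0.05 is exactly the peeking error described above.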

Copy patterns:

  • "We increased [metric] by [X]% with [Y]% confidence over [Z] weeks"
  • "Test showed no significant difference, teaching us that [insight about customers]"
  • "Control outperformed challenger, suggesting visitors prefer [existing approach] because [reason]"
  • Always document learnings: Test, Hypothesis, Result, Learning, Applicable to

Ethical boundary: Never manipulate statistical methods to manufacture significance. Report confidence intervals honestly and acknowledge when results are inconclusive.

See: testing-methodology.md for statistical significance, sample size calculations, and platform comparison.

Common Mistakes

| Mistake | Why It Fails | Fix |
| --- | --- | --- |
| Copying competitors blindly | You don't know if their approach works for them, let alone for you | Research YOUR visitors' objections and build YOUR evidence |
| Testing button colors before understanding objections | Addresses surface symptoms, not root causes; tiny effects waste sample size | Do customer research first, then test big changes based on findings |
| Assuming you know why visitors leave | Teams are almost always wrong about visitor motivations | Use exit surveys, chat logs, and support analysis to discover real reasons |
| Using "best practices" without validation | What works elsewhere may not work for your audience, product, or context | Treat best practices as hypotheses to test, not rules to follow |
| Making decisions based on HiPPO | Highest Paid Person's Opinion is not data; authority bias kills optimization | Let research and test results determine changes, not seniority |
| Optimizing pages without funnel context | Improving one step may shift problems to another; miss biggest opportunities | Map entire funnel first, identify blocked arteries, prioritize by impact |
| Making "meek tweaks" instead of bold changes | Small changes rarely reach statistical significance; wastes time and traffic | Test changes that could double conversion, not nudge it 2% |
| Giving up after one failed test | The opportunity still exists; you just haven't found the solution yet | Investigate why, go back to research, try a bolder change |

Quick Diagnostic

Audit any landing page or conversion flow:

| Question | If No | Action |
| --- | --- | --- |
| Do we know the ONE action visitors should take on this page? | Page lacks focus, visitors are confused | Define single primary conversion goal and remove competing CTAs |
| Have we researched why visitors aren't converting (not guessed)? | Optimization is based on assumptions, not evidence | Run exit surveys, analyze chat logs, review support tickets |
| Do we have an O/CO table mapping objections to counter-objections? | Visitor objections go unanswered on the page | Build O/CO table from research, place counter-objections at friction points |
| Is the value proposition crystal clear within 5 seconds? | Visitors bounce before understanding the offer | Run 5-second test, rewrite headline using customer language |
| Are persuasion assets visible (testimonials, awards, guarantees)? | Page makes claims without proof, visitors don't believe | Audit persuasion assets, acquire missing ones, display prominently |
| Have we mapped the full funnel and identified blocked arteries? | Optimizing wrong page or missing biggest opportunity | Map traffic volume at each stage, compare to benchmarks, prioritize by impact |

Quick-Start Checklist

When optimizing any page:

  1. What is the ONE action visitors should take?
  2. Who are the visitors? What stage of buying journey?
  3. What are their top 3-5 objections? (Don't guess -- research)
  4. What proof/counter-objections address each?
  5. Is the value proposition crystal clear in 5 seconds?
  6. Are there UX blockers? (speed, mobile, forms)
  7. What persuasion assets are missing or hidden?

Reference Files

  • testing-methodology.md -- ICE scoring, A/B vs. multivariate guidance, and statistical rigor
  • RESEARCH.md -- research tools, survey questions, and data analysis methods
  • PERSUASION.md -- the full persuasion assets checklist and psychological triggers
  • OBJECTIONS.md -- the full O/CO framework, research methods, and counter-objection techniques

Further Reading

This skill is based on the CRE Methodology(TM) developed by Conversion Rate Experts. For the complete methodology, detailed case studies, and advanced techniques, read the original book, Making Websites Win by Dr. Karl Blanks and Ben Jesson.

About the Author

Dr. Karl Blanks and Ben Jesson are the cofounders of Conversion Rate Experts (CRE), the world's leading agency specializing in conversion rate optimization. Their clients have included Google, Apple, Amazon, Facebook, Dropbox, and many other technology leaders. CRE's methodology has been recognized with a Queen's Award for Enterprise (Innovation), the UK's highest business honor. Blanks holds a PhD in user experience and previously managed teams of usability researchers at Hewlett-Packard. Jesson's background is in direct-response marketing and web development. Together they developed the CRE Methodology, which has been applied across hundreds of websites and consistently delivered significant conversion improvements. Their book Making Websites Win distills this methodology into a systematic, repeatable process for evidence-based website optimization.

