ai-pricing

Use this skill when the user wants to price an AI product, choose a charge metric, design pricing tiers, or optimize margins. Also use it when the user mentions 'AI pricing,' 'usage-based pricing,' 'consumption pricing,' 'outcome pricing,' 'BYOK,' 'bring your own key,' 'per-seat pricing,' 'pricing tiers,' 'AI margins,' 'cost per token,' or 'pricing model.' This skill covers pricing strategy, packaging, and margin management for AI-native products.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "ai-pricing" with this command: npx skills add chadboyda/agent-gtm-skills/chadboyda-agent-gtm-skills-ai-pricing

AI Pricing Skill

You are an AI product pricing strategist. You help founders, product leaders, and GTM teams choose the right charge metric, design pricing tiers, set margin targets, and build packaging that scales with customer value. You ground every recommendation in the economics unique to AI products: compute costs are variable, margins start lower than in traditional SaaS, and the pricing model you pick reshapes your entire GTM motion.

Before Starting

  • Ask what type of AI product is being priced (copilot, agent, AI-enabled service, API/platform)
  • Clarify the target buyer persona (developer, business user, enterprise procurement, SMB founder)
  • Understand current pricing if migrating from an existing model (per-seat, flat-rate, free)
  • Ask about the underlying AI cost structure (which models, average tokens per task, hosting setup)
  • Determine the primary value metric the customer cares about (time saved, tasks completed, revenue generated)
  • Ask about competitive landscape and what alternatives cost the buyer today
  • Understand the sales motion (self-serve, sales-assisted, enterprise) as it constrains pricing design
  • Check if there are existing contracts or commitments that limit pricing changes

The Three Charge Metrics

Every AI pricing decision starts with choosing your charge metric. This is the unit of value you bill for. Get this wrong and everything downstream breaks.

| Charge Metric | What You Bill For | Real Examples | Best When | Watch Out For |
|---|---|---|---|---|
| Consumption | Per token, per API call, per compute minute, per credit | OpenAI API ($0.01/1K tokens), AWS Bedrock (per-token), Anthropic API | Technical buyer wants granular control; platform/API play | Customers afraid to use product; unpredictable bills kill adoption |
| Workflow | Per automation run, per agent task, per document processed | n8n (per workflow run), Jasper (per content piece), DocuSign (per envelope) | Clear time-saving value per task; easy to define boundaries | Must define task boundaries precisely; scope creep erodes margins |
| Outcome | Per resolved ticket, per qualified lead, per successful match | Intercom Fin ($0.99/resolution), Sierra (per completed outcome), Salesforce Agentforce ($2/conversation) | Maximum value alignment; outcome is measurable and attributable | You absorb cost variability; must define "success" precisely |

Decision Framework: Picking Your Charge Metric

START HERE
    |
    v
Can the customer measure a specific business outcome
from your product? (resolved ticket, qualified lead, closed deal)
    |
   YES --> Is the outcome clearly attributable to YOUR product
    |      (not shared with other tools)?
    |          |
    |         YES --> OUTCOME-BASED pricing
    |          |      Charge per resolved ticket, per qualified lead
    |         NO  --> WORKFLOW pricing
    |                 Charge per task/run (shared attribution = charge for the work)
    |
   NO --> Does the customer perform discrete, countable tasks?
    |      (document processed, image generated, report created)
    |          |
    |         YES --> WORKFLOW pricing
    |          |      Charge per task, per run, per document
    |         NO  --> CONSUMPTION pricing
                      Charge per token, per API call, per credit

Credit Systems: The Abstraction Layer

Credits sit between raw consumption and the customer. They let underlying costs change without forcing you to reprice. Credit-model adoption among SaaS companies grew 126% from the end of 2024 to the end of 2025.

How credits work in practice:

| Component | Example |
|---|---|
| Credit unit | 1 credit = 1 standard task |
| Simple task | 1 credit (e.g., summarize email) |
| Medium task | 3 credits (e.g., draft response) |
| Complex task | 10 credits (e.g., full research report) |
| Monthly package | Starter: 500 credits; Pro: 2,000 credits; Enterprise: custom |
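
The credit ledger above can be sketched in a few lines. This is a minimal illustration, not a real billing system; the task weights and package sizes are the example values from the table, and the function names are made up.

```python
# Illustrative credit ledger based on the table above.
# Task weights and package sizes are the example values, not any real product's.
TASK_CREDITS = {"summarize_email": 1, "draft_response": 3, "research_report": 10}
PACKAGES = {"starter": 500, "pro": 2_000}

def credits_used(tasks):
    """Total credits consumed by a list of (task_type, count) pairs."""
    return sum(TASK_CREDITS[task] * count for task, count in tasks)

def remaining(package, tasks):
    """Credits left in a monthly package after the given usage."""
    return PACKAGES[package] - credits_used(tasks)

usage = [("summarize_email", 120), ("draft_response", 40), ("research_report", 8)]
print(credits_used(usage))          # 120*1 + 40*3 + 8*10 = 320
print(remaining("starter", usage))  # 500 - 320 = 180
```

Because customers buy credits, not tokens, the `TASK_CREDITS` weights can be retuned as model costs shift without touching the customer-facing price.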

When to use credits vs. direct metering:

Use Credits WhenUse Direct Metering When
Multiple task types with different costsSingle task type (API calls, resolutions)
You need pricing flexibility as models changeBuyer expects transparent per-unit cost
Bundling features across product linesDeveloper audience wants raw metrics
You want to avoid exposing token economicsOpen-source or API-first positioning

Salesforce Agentforce credit example:

  • 20 Flex Credits = 1 action
  • $500 buys 100,000 credits
  • Case Management: 3 actions = 60 credits = $0.30 per case
  • Field Service Scheduling: 6 actions = 120 credits = $0.60 per appointment
  • Credits mask underlying model costs and let Salesforce adjust compute allocation without repricing
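
The per-unit prices in that example follow directly from the credit price; a quick check of the arithmetic (all figures are from the bullets above):

```python
# Reproducing the Agentforce arithmetic above (figures from the example bullets).
credit_price = 500 / 100_000             # $500 buys 100,000 credits -> $0.005/credit
credits_per_action = 20                  # 20 Flex Credits = 1 action
case_credits = 3 * credits_per_action    # Case Management: 3 actions = 60 credits
field_credits = 6 * credits_per_action   # Field Service: 6 actions = 120 credits
print(round(case_credits * credit_price, 2))   # 0.3  -> $0.30 per case
print(round(field_credits * credit_price, 2))  # 0.6  -> $0.60 per appointment
```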

Three Product Archetypes and Their Pricing

Your product archetype determines the pricing model, target margin, and GTM motion. Most AI products fall into one of three categories.

Archetype Comparison

| Dimension | Copilot (Augment Human) | Agent (Replace Human Task) | AI-Enabled Service |
|---|---|---|---|
| What it does | Assists a human doing their job | Autonomously completes a defined task | Delivers a service with AI at the core |
| Pricing model | Per-seat or per-seat + credits | Outcome or workflow pricing | Project fee, monthly retainer, or per-deliverable |
| Target gross margin | 70-80% | 50-65% | 60-75% |
| Example | GitHub Copilot ($19/seat/mo), Microsoft 365 Copilot ($30/seat/mo) | Intercom Fin ($0.99/resolution), Sierra (per outcome) | Jasper (content plans), Harvey (legal AI) |
| Value story | "Your team does more with less effort" | "This work gets done without a human" | "Expert-level output, fraction of the cost" |
| Buyer | Department head, IT procurement | Operations leader, CFO | Founder, agency owner, department head |
| Sales motion | Self-serve to sales-assisted | Sales-assisted to enterprise | Sales-assisted to high-touch |
| Expansion lever | More seats, more usage per seat | More task types, more volume | More deliverables, more workflows |

Copilot Pricing Deep Dive

Per-seat works for copilots because the value unit is the empowered human. The human is still in the loop, and you are billing for their enhanced capability.

Per-seat pricing tiers (copilot template):

| Tier | Price | Includes | Target |
|---|---|---|---|
| Individual | $15-25/seat/mo | Core AI features, usage cap | Individual contributor, freelancer |
| Team | $25-50/seat/mo | Collaboration, higher caps, integrations | Team of 5-50 |
| Enterprise | Custom ($40-100/seat/mo) | SSO, audit logs, unlimited usage, SLA | 50+ seats, procurement involved |

GitHub Copilot pricing evolution (real example):

  • Free tier: 2,000 code completions + 50 chat messages/month
  • Pro: $10/mo (unlimited completions, 300 premium requests)
  • Pro+: $39/mo (1,500 premium requests, agent mode)
  • Business: $19/seat/mo (org management, policy controls)
  • Enterprise: $39/seat/mo (knowledge bases, fine-tuning)

Agent Pricing Deep Dive

Agents replace human tasks. The pricing should reflect the value of the completed work, not the number of humans using the tool. Per-seat makes no sense here because the whole point is fewer humans doing the work.

Outcome pricing design (agent template):

| Step | Action | Example |
|---|---|---|
| 1. Define outcome | What counts as "done"? | Ticket fully resolved without human handoff |
| 2. Set price per outcome | Anchor at 1/3 to 1/10 of the human cost | Human agent costs $15/ticket; charge $0.99-2.00 |
| 3. Set minimum commit | Monthly floor for revenue predictability | 50 resolutions/mo minimum |
| 4. Add volume tiers | Discount at scale, protect margin | 1-500: $0.99; 501-2,000: $0.79; 2,000+: $0.59 |
| 5. Define non-outcome | What happens when it fails? | Handoff to human = no charge |
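
Steps 2-5 reduce to a billing function. This is a sketch using the example values from the table (graduated volume tiers, a 50-resolution floor, and no charge on handoff); the function name and tier representation are illustrative.

```python
# Illustrative outcome-based invoice following steps 2-5 above.
# Tier prices and the 50-resolution minimum are the table's example values.
TIERS = [(500, 0.99), (2000, 0.79), (float("inf"), 0.59)]  # (upper bound, price)
MIN_COMMIT = 50  # step 3: resolutions billed even if actual usage is lower

def monthly_invoice(resolutions, handoffs):
    """Bill resolved outcomes only; handoffs are free (step 5), so they
    never enter the total. Volume is priced on a graduated-tier basis."""
    billable = max(resolutions, MIN_COMMIT)  # step 3: monthly floor
    total, prev_bound = 0.0, 0
    for bound, price in TIERS:
        in_tier = max(0, min(billable, bound) - prev_bound)
        total += in_tier * price
        prev_bound = bound
        if billable <= bound:
            break
    return round(total, 2)

print(monthly_invoice(resolutions=30, handoffs=12))  # floor applies: 50 * 0.99 = 49.5
print(monthly_invoice(resolutions=800, handoffs=0))  # 500*0.99 + 300*0.79 = 732.0
```

Note the graduated structure: the first 500 resolutions are billed at $0.99 even for high-volume customers, which protects margin while still rewarding scale.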

Real outcome pricing examples:

| Company | Outcome | Price | Human Equivalent Cost |
|---|---|---|---|
| Intercom Fin | Resolved support conversation | $0.99/resolution | $5-15/ticket (human agent) |
| Sierra | Completed customer interaction | Per-outcome (custom) | $8-25/interaction |
| Salesforce Agentforce | Conversation handled | $2/conversation | $5-15/conversation |

AI-Enabled Service Pricing Deep Dive

AI-enabled services look like agencies or consultancies but run on AI infrastructure. The buyer cares about the output quality and speed, not the technology underneath.

Service pricing template:

| Model | Structure | Best For |
|---|---|---|
| Monthly retainer | $2K-25K/mo for defined scope | Ongoing content, support, analysis |
| Per-project | $5K-50K per project | One-time deliverables (audit, migration) |
| Per-deliverable | $50-500 per unit | Scalable output (reports, designs, content) |
| Retainer + overage | Base fee + per-unit above cap | Predictable base with growth upside |

Hybrid Pricing Model Design

Pure pricing models have weaknesses. Consumption scares buyers. Per-seat misses expansion. Outcome puts all risk on you. Hybrid models combine elements to balance predictability, expansion, and margin protection.

The hybrid formula:

Platform Fee (predictable base) + Usage/Outcome Component (grows with value)
= Revenue that scales with customer success

Industry adoption: Hybrid pricing surged from 27% to 41% of B2B companies in 12 months (Growth Unhinged 2025 State of B2B Monetization). Pure per-seat dropped from 21% to 15% in the same period.

Hybrid Model Patterns

| Pattern | Structure | Example | When to Use |
|---|---|---|---|
| Base + consumption | Platform fee + per-unit overage | $99/mo + $0.05/API call over 10K | API/platform products with variable usage |
| Base + credits | Platform fee + credit allocation | $199/mo includes 1,000 credits, $0.15/credit after | Multi-feature products with different cost profiles |
| Base + outcome | Platform fee + per-outcome | $499/mo + $0.99/resolved ticket | Agent products with measurable outcomes |
| Seat + consumption | Per-seat + usage cap/overage | $30/seat/mo + credits for AI actions | Copilots with heavy AI features |
| Commitment + burst | Annual commit + on-demand pricing | $50K/yr commit + pay-as-you-go above | Enterprise deals needing budget predictability |

Designing Your Hybrid Model

STEP 1: Set the platform fee
  - Covers your fixed costs (infra, support, maintenance)
  - Creates revenue predictability
  - Typically 30-50% of expected total revenue per customer

STEP 2: Choose the variable component
  - Match to your charge metric (consumption, workflow, outcome)
  - Set included usage in the base (the "free" allocation)
  - Price overage at 1.2-2x your unit cost

STEP 3: Design tier breaks
  - 3 tiers is the standard (Starter, Pro, Enterprise)
  - Each tier increases the included allocation 3-5x
  - Enterprise gets custom pricing and volume discounts

STEP 4: Add commitment incentives
  - Annual commit = 15-25% discount over monthly
  - Multi-year commit = additional 5-10% discount
  - Prepaid credits = 10-20% bonus credits

Hybrid Pricing Example (AI Support Agent)

| Component | Starter | Pro | Enterprise |
|---|---|---|---|
| Monthly platform fee | $199/mo | $599/mo | Custom |
| Included resolutions | 200/mo | 1,000/mo | Custom |
| Overage per resolution | $1.29 | $0.89 | $0.49-0.69 |
| Channels | Chat only | Chat + email | All channels |
| SLA | Best effort | 99.5% uptime | 99.9% + dedicated CSM |
| Annual discount | 15% | 20% | Negotiated |
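
The Starter and Pro columns reduce to a simple fee-plus-overage calculation. A sketch using the table's figures (Enterprise is custom, so it is left out; the plan dictionary is an illustrative encoding):

```python
# Illustrative monthly bill for the AI support agent example above.
PLANS = {
    "starter": {"fee": 199, "included": 200, "overage": 1.29},
    "pro":     {"fee": 599, "included": 1000, "overage": 0.89},
}

def monthly_bill(plan, resolutions):
    """Platform fee plus per-resolution overage above the included allocation."""
    p = PLANS[plan]
    extra = max(0, resolutions - p["included"])
    return round(p["fee"] + extra * p["overage"], 2)

print(monthly_bill("starter", 150))  # under the cap: 199.0
print(monthly_bill("starter", 350))  # 199 + 150 * 1.29 = 392.5
print(monthly_bill("pro", 1400))     # 599 + 400 * 0.89 = 955.0
```

This is the hybrid formula in miniature: the fee is the predictable base, the overage is the component that grows with customer value.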

BYOK (Bring Your Own Key) Pricing

BYOK lets customers plug in their own LLM API keys. You charge for your software layer while the customer pays the model provider directly. This decouples your pricing from volatile model costs.

BYOK Decision Framework

| Factor | BYOK Wins | Managed Model Wins |
|---|---|---|
| Customer type | Enterprise with existing model contracts, developers | SMB, non-technical buyer |
| Model preference | Customer insists on specific provider (compliance, existing deal) | Customer trusts your model selection |
| Margin goal | Higher software margin (no COGS on model costs) | Higher total revenue (markup on model usage) |
| Pricing simplicity | Customer comfortable with two bills | Customer wants one price for everything |
| Support burden | Lower (model issues go to provider) | Higher (you own the full stack) |
| Switching cost | Lower (customer can swap your tool, keep model) | Higher (bundled = stickier) |
| Data sensitivity | Customer needs data to stay in their cloud/account | Customer trusts your data handling |

BYOK Pricing Structure

| Component | What You Charge | Example |
|---|---|---|
| Software license | Monthly/annual fee for your platform | $49-299/mo per seat or workspace |
| Model costs | Nothing (customer pays provider directly) | Customer pays OpenAI/Anthropic/Google |
| Premium features | Add-on fees for orchestration, analytics, fine-tuning | $99/mo for advanced routing, $199/mo for analytics |
| Support tier | Tiered support pricing | Free community, $99/mo priority, custom enterprise |

Real BYOK examples:

  • JetBrains AI: BYOK available for AI chat and agents, supports Anthropic, OpenAI, and compatible providers
  • OpenRouter: 5% usage fee on provider costs when routing through your own keys
  • Cursor: BYOK option lets developers use their own API keys, lower subscription tier

When NOT to Offer BYOK

  • Your product's value depends on model fine-tuning or custom training
  • Your target market is non-technical (they will not manage API keys)
  • Your margin model requires model cost markup
  • You need to guarantee response quality (BYOK means variable model behavior)
  • Your product uses multi-model routing as a core feature

Margin Management for AI Products

AI products have fundamentally different economics than traditional SaaS. Traditional SaaS runs 80-85% gross margins because the marginal cost of serving one more customer is near zero. AI products incur real compute costs for every request.

Margin Landscape

| Company Stage | Typical Gross Margin | Target | Notes |
|---|---|---|---|
| Early AI startup (unoptimized) | 25-40% | Survive, prove value | Bessemer calls these "Supernovas" |
| Growth AI company (optimizing) | 50-65% | Get to 60%+ for fundraising | Active model routing, caching, batching |
| Mature AI company | 65-75% | Approach traditional SaaS territory | Custom models, full optimization stack |
| Traditional SaaS benchmark | 80-90% | The target AI companies grow toward | Minimal marginal cost per user |

Key data point: 84% of companies reported AI infrastructure costs cutting gross margins by more than 6 percentage points (Mavvrik AI Cost Governance Report 2025).

Unit Economics You Must Track

| Metric | Definition | Target | How to Calculate |
|---|---|---|---|
| CPT (Cost Per Task) | Total AI cost to complete one unit of work | Varies by task | (Model cost + compute + orchestration) / tasks completed |
| CPR (Cost Per Resolution) | Cost to achieve one customer outcome | Less than 30% of price charged | All AI costs for resolved outcomes / resolutions |
| CPAM (Cost Per Active Member) | AI spend per active user per month | Less than 20% of ARPU | Total AI infrastructure / monthly active users |
| Token efficiency | Tokens consumed per task vs. minimum needed | Optimize continuously | Actual tokens / minimum viable tokens |
| Model cost ratio | AI model costs as % of revenue | Less than 25% at scale | Total model API spend / revenue |
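
The formulas can be wired into a quick health check. Every dollar figure below is invented for illustration; only the <30%-of-price and <20%-of-ARPU thresholds come from the table, and the function name is made up.

```python
# Illustrative unit-economics check using the formulas in the table above.
# All input figures are hypothetical monthly numbers for one product.
def unit_economics(model_cost, compute_cost, orchestration_cost,
                   tasks, resolutions, mau, arpu, price_per_resolution):
    ai_total = model_cost + compute_cost + orchestration_cost
    cpt = ai_total / tasks        # Cost Per Task
    cpr = ai_total / resolutions  # Cost Per Resolution
    cpam = ai_total / mau         # Cost Per Active Member
    return {
        "CPT": round(cpt, 4),
        "CPR": round(cpr, 4),
        "CPAM": round(cpam, 2),
        "CPR_ok": cpr < 0.30 * price_per_resolution,  # target: <30% of price
        "CPAM_ok": cpam / arpu < 0.20,                # target: <20% of ARPU
    }

print(unit_economics(model_cost=4000, compute_cost=800, orchestration_cost=200,
                     tasks=100_000, resolutions=25_000, mau=2_000,
                     arpu=50, price_per_resolution=0.99))
```

With these hypothetical inputs, CPT is $0.05 and CPR is $0.20, but CPR_ok fails a $0.50 outcome price; run the check monthly so tier prices and overage rates stay anchored to real costs.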

The Margin Improvement Stack

Seven levers to improve AI product margins, ordered by typical impact.

| Lever | Margin Impact | Implementation Effort | How It Works |
|---|---|---|---|
| Model routing | 50-98% cost reduction on routed tasks | Medium | Route simple tasks to cheaper/smaller models; reserve frontier models for complex tasks |
| Prompt caching | 45-80% reduction on repeated prompts | Low | Cache common prompt prefixes; cached reads cost 90% less on Anthropic and 50% less on OpenAI |
| Batch processing | 50% cost reduction on batch-eligible tasks | Low | Use batch APIs for non-real-time work; guaranteed 50% savings on most providers |
| Fine-tuned small models | 60-80% cost reduction vs. frontier models | High | Train task-specific small models that match frontier quality on narrow tasks |
| Prompt optimization | 20-40% token reduction | Low-Medium | Shorter prompts, better few-shot examples, structured outputs |
| Response caching | 30-60% reduction on repeated queries | Low | Cache identical or near-identical requests; semantic caching for similar queries |
| Infrastructure optimization | 10-30% compute cost reduction | Medium-High | Spot instances, reserved capacity, multi-region routing for cost |

Model Routing in Practice

INCOMING REQUEST
      |
      v
  CLASSIFIER (lightweight model or rules)
      |
      +--> Simple task (FAQ, classification, extraction)
      |    Route to: Small model (Haiku, GPT-4o-mini, Gemini Flash)
      |    Cost: $0.0001-0.001 per request
      |
      +--> Medium task (summarization, drafting, analysis)
      |    Route to: Mid-tier model (Sonnet, GPT-4o)
      |    Cost: $0.001-0.01 per request
      |
      +--> Complex task (reasoning, multi-step, creative)
           Route to: Frontier model (Opus, o1, Gemini Ultra)
           Cost: $0.01-0.10 per request

RESULT: 70-80% of tasks route to cheapest tier
        Average cost drops 60-80%
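
A toy version of this router: keyword rules stand in for the lightweight classifier, and the model groupings and per-request costs are illustrative midpoints of the diagram's ranges, not real prices.

```python
# Toy rules-based router following the diagram above. A production system
# would use a small classifier model; keyword rules are a stand-in here.
ROUTES = {
    "simple":  {"model": "small (e.g., Haiku, GPT-4o-mini)", "cost": 0.0005},
    "medium":  {"model": "mid-tier (e.g., Sonnet, GPT-4o)",  "cost": 0.005},
    "complex": {"model": "frontier (e.g., Opus, o1)",        "cost": 0.05},
}

def classify(task: str) -> str:
    """Crude tier classifier: match task text against keyword lists."""
    t = task.lower()
    if any(k in t for k in ("faq", "classify", "extract")):
        return "simple"
    if any(k in t for k in ("summarize", "draft", "analyze")):
        return "medium"
    return "complex"  # default to the frontier tier when unsure

def route(task: str):
    tier = classify(task)
    return ROUTES[tier]["model"], ROUTES[tier]["cost"]

print(route("extract the invoice number"))  # small-model tier
print(route("draft a follow-up email"))     # mid-tier
```

Defaulting unknown tasks to the frontier tier trades some cost for quality; the economics work because, per the diagram, most traffic still lands in the cheap tiers.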

Margin Improvement Roadmap

| Phase | Timeline | Actions | Expected Margin Gain |
|---|---|---|---|
| 1. Foundation | Months 1-2 | Implement prompt caching, batch processing, basic monitoring | +5-10 points |
| 2. Routing | Months 2-4 | Add model routing, response caching, prompt optimization | +10-20 points |
| 3. Custom models | Months 4-8 | Fine-tune small models for top 3 tasks, deploy custom inference | +10-15 points |
| 4. Full optimization | Months 6-12 | Semantic caching, dynamic routing, infrastructure optimization | +5-10 points |
| Cumulative | 12 months | Full stack deployed | +30-45 points |

Cost Projection Model

For a B2B AI product processing 50M tokens/month per enterprise customer:

| Scenario | Monthly Cost | Gross Margin (at $2K MRR) | Optimization Level |
|---|---|---|---|
| Unoptimized (frontier model only) | $500-2,000 | 0-75% | None |
| Basic optimization (caching + batching) | $200-800 | 60-90% | Foundation |
| Full routing + caching | $50-200 | 90-97% | Intermediate |
| Custom models + full stack | $20-100 | 95-99% | Advanced |

Key insight: AI compute costs are falling roughly 10x every 3 years. A company surviving on 50% gross margin today could see margins expand toward 70%+ as cost per unit falls, even without internal optimization.
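
That trajectory can be sketched with a simple decay model. One loud assumption: this treats the entire unit cost as AI compute, so it overstates the expansion; real COGS includes support and non-AI infrastructure, which is why the more conservative 70%+ estimate above is the realistic one.

```python
# Margin expansion if AI unit costs fall 10x every 3 years while price holds.
# Simplification: ALL unit cost is assumed to be AI compute (upper bound).
def projected_margin(margin_now, years):
    cost_ratio = 1 - margin_now   # unit cost as a share of price today
    decay = 10 ** (-years / 3)    # 10x cheaper every 3 years
    return 1 - cost_ratio * decay

print(round(projected_margin(0.50, 1), 3))  # ~0.768 after one year
print(round(projected_margin(0.50, 3), 3))  # 0.95 after a full 10x drop
```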

Pricing Tier Design

The Three-Tier Framework

Most AI products should launch with three tiers. Fewer creates a "take it or leave it" problem. More creates decision paralysis.

| Element | Starter / Free | Pro / Growth | Enterprise |
|---|---|---|---|
| Purpose | Acquisition, trial, self-serve | Core revenue driver | Expansion, high-value accounts |
| Pricing | Free or $0-49/mo | $49-499/mo | Custom ($500-5,000+/mo) |
| Usage limits | Hard caps, limited features | Generous allocation, most features | Unlimited or custom, all features |
| Support | Community, docs, email | Priority email, chat | Dedicated CSM, phone, SLA |
| Security | Basic (shared infra) | SOC 2, SSO | SOC 2, SSO, SAML, audit logs, custom deployment |
| Contract | Monthly, no commitment | Monthly or annual | Annual or multi-year |
| Target buyer | Individual, small team, evaluator | Growing team, department | Procurement, IT, C-suite |

Pricing Page Design Principles

  • Lead with the value metric, not the feature list
  • Highlight the Pro tier (the one you want most buyers to pick)
  • Show annual pricing by default (higher LTV), monthly as option
  • Include a calculator for usage-based components
  • Enterprise = "Contact us" (never show a fixed price for enterprise)
  • Free tier should be generous enough to prove value but limited enough to create upgrade pressure

Feature Gating Strategy

| Gate Type | How It Works | Example |
|---|---|---|
| Usage cap | Limit volume of the core action | 100 resolutions/mo on Starter, 1,000 on Pro |
| Feature gate | Lock advanced capabilities to higher tiers | Basic analytics on Starter, custom dashboards on Pro |
| Quality gate | Restrict model quality or speed | Standard models on Starter, frontier models on Pro |
| Support gate | Limit support access by tier | Community on Free, priority on Pro, dedicated on Enterprise |
| Integration gate | Limit connections to other tools | 3 integrations on Starter, unlimited on Pro |
| Team gate | Limit collaboration features | 1 user on Starter, 10 on Pro, unlimited on Enterprise |
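
One common implementation is a per-tier limits table checked at request time. The limits below are the example values from the gating table; encoding "unlimited" as `None` and boolean gates for quality features are implementation choices, not prescriptions.

```python
# Illustrative tier-gating config combining the gate types above.
# Limits are the table's example values; None means unlimited.
GATES = {
    "starter":    {"resolutions": 100,  "integrations": 3,    "seats": 1,
                   "frontier_models": False},
    "pro":        {"resolutions": 1000, "integrations": None, "seats": 10,
                   "frontier_models": True},
    "enterprise": {"resolutions": None, "integrations": None, "seats": None,
                   "frontier_models": True},
}

def allowed(tier, gate, current_usage=0):
    """True if the tier permits one more unit of the gated resource."""
    limit = GATES[tier][gate]
    if isinstance(limit, bool):       # feature/quality gates are on/off
        return limit
    return limit is None or current_usage < limit

print(allowed("starter", "integrations", current_usage=3))  # False: cap reached
print(allowed("pro", "frontier_models"))                    # True: gate open
```

Keeping all gates in one config makes tier changes a data edit rather than a code change, which matters when you iterate on packaging.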

How Pricing Shapes Your GTM Organization

The pricing model you choose reshapes your entire go-to-market motion. Pricing is not just a finance decision. It determines how you hire, how you comp sales, and how you structure customer success.

Pricing Model to GTM Motion Map

| Pricing Model | Sales Motion | Rep Profile | Comp Structure | CS Model |
|---|---|---|---|---|
| Self-serve consumption | Product-led growth | No traditional reps; growth/product team | N/A or usage-based bonuses | Tech-touch, in-app |
| Per-seat (copilot) | Sales-assisted | Traditional AE, land-and-expand | Quota on new ARR + expansion | Pooled CSM, seat expansion focus |
| Outcome-based (agent) | Consultative sale | Solution engineer + AE hybrid | Quota on ARR + outcome volume bonus | High-touch, value realization |
| Hybrid (base + usage) | Sales-assisted to enterprise | AE for enterprise, PLG for SMB | Quota on committed ARR + usage overage | Tiered (tech-touch to dedicated) |
| BYOK + platform fee | Developer-led, community-driven | Developer advocates + enterprise AE | Quota on platform ARR | Community + enterprise CSM |

Sales Compensation Design by Pricing Model

Consumption / usage-based:

  • Comp on committed annual spend (not actual usage)
  • Overage/expansion bonus (10-20% of expansion revenue)
  • Clawback risk if customer downsizes within 6-12 months
  • AE role often merges with account management (AE owns full lifecycle)

Outcome-based:

  • Comp on minimum commit + projected outcome volume
  • Bonus tied to customer value realization (if customer hits usage milestones)
  • Longer sales cycles = higher base salary ratio (60/40 base/variable vs. 50/50)
  • Requires reps who can quantify ROI and run business cases

Hybrid:

  • Comp on committed platform fee (the predictable component)
  • Expansion bonus for usage/outcome growth above baseline
  • Quota split: 70% new logo, 30% expansion (or separate expansion team)
  • Works with traditional AE + CSM split

Organizational Structure Impact

CONSUMPTION PRICING                    OUTCOME PRICING
+-----------------------+              +-----------------------+
| Growth / PLG Team     |              | Solutions AE          |
| (owns self-serve)     |              | (owns full cycle)     |
+-----------+-----------+              +-----------+-----------+
            |                                      |
+-----------v-----------+              +-----------v-----------+
| Usage Analytics       |              | Onboarding Specialist |
| (monitors expansion)  |              | (drives value quickly)|
+-----------+-----------+              +-----------+-----------+
            |                                      |
+-----------v-----------+              +-----------v-----------+
| Account Mgmt / CSM    |              | Customer Success Mgr  |
| (prevent churn, grow) |              | (measure outcomes)    |
+-----------------------+              +-----------------------+

PER-SEAT PRICING                       HYBRID PRICING
+-----------------------+              +-----------------------+
| Traditional AE        |              | SMB: PLG / self-serve |
| (land new logos)      |              | Enterprise: AE team   |
+-----------+-----------+              +-----------+-----------+
            |                                      |
+-----------v-----------+              +-----------v-----------+
| CSM (pooled)          |              | Tiered CSM            |
| (drive seat expansion)|              | (tech-touch to high)  |
+-----------------------+              +-----------------------+

Pricing Migration Strategy

If you are moving from an existing pricing model (typically per-seat) to a new model (usage, outcome, hybrid), you need a migration plan that does not destroy existing revenue.

Migration Playbook

| Phase | Duration | Actions |
|---|---|---|
| 1. Analysis | 2-4 weeks | Audit current revenue by customer; model new pricing against existing base; identify winners/losers |
| 2. Design | 2-4 weeks | Build the new model, set migration paths, create grandfathering rules |
| 3. Internal launch | 2 weeks | Train sales and CS, update billing systems, prepare materials |
| 4. Existing customers | 3-6 months | Roll out new pricing at renewal; grandfather current pricing for 6-12 months |
| 5. New customers | Immediate | All new customers on new pricing from day one |
| 6. Full migration | 12-18 months | Convert all grandfathered customers, retire old model |

Grandfathering Rules

  • Lock existing customers at current pricing until next renewal
  • At renewal, offer choice: migrate to new model or accept 10-15% price increase on old model
  • Never force migration mid-contract
  • Provide a savings calculator showing how new model benefits high-usage customers
  • Set a hard sunset date for old pricing (12-18 months out)

Competitive Pricing Analysis Framework

Positioning Matrix

                    HIGH PRICE
                        |
     Premium/Enterprise |  Outcome-Based
     (Harvey, Glean)    |  (Sierra, Intercom Fin)
                        |
  LOW VALUE ------------|------------ HIGH VALUE
                        |
     Commodity/API      |  Value Leader
      (Open-source, BYOK)|  (Mid-tier SaaS + AI)
                        |
                    LOW PRICE

Competitive Response Playbook

| Competitor Move | Your Response | Do NOT |
|---|---|---|
| Drops price 30%+ | Hold price; emphasize ROI and outcomes | Race to the bottom |
| Launches free tier | Add a generous free tier if you do not have one | Ignore it hoping it goes away |
| Moves to outcome pricing | Evaluate your outcome measurability; test with a segment | Copy without clear outcome attribution |
| Bundles AI into platform | Unbundle and show superior depth in your niche | Try to out-bundle a platform player |
| Offers BYOK | Decide based on your archetype (see BYOK section) | Offer BYOK reactively without a strategy |

Anti-Patterns in AI Pricing

| Anti-Pattern | Why It Fails | What to Do Instead |
|---|---|---|
| Per-seat pricing for agents | Agents replace humans; per-seat penalizes the buyer for success | Use outcome or workflow pricing |
| Flat monthly fee with unlimited AI usage | Margins collapse as power users scale | Add usage caps or a hybrid model |
| Pricing anchored to model costs | Model costs change rapidly; you reprice constantly | Use credits to abstract model costs |
| Free tier with no upgrade pressure | Users never convert; you fund their usage forever | Set clear usage limits that create natural friction |
| Enterprise-only pricing (no self-serve) | Misses bottoms-up adoption; slower sales cycles | Add a self-serve tier for discovery and small teams |
| Outcome pricing without outcome attribution | Disputes over what counts as "resolved" or "qualified" | Define outcomes precisely in the contract, with measurement methodology |
| Charging per token to non-technical buyers | Buyer cannot predict or understand their bill | Use credits, tasks, or outcomes instead |

Pricing Experimentation

What to Test and How

| Test | Method | Duration | Success Metric |
|---|---|---|---|
| Price point | A/B test on pricing page | 4-8 weeks | Conversion rate, ARPU |
| Tier structure | Cohort test (new customers only) | 8-12 weeks | Tier distribution, expansion rate |
| Charge metric | Segment test (e.g., SMB vs. mid-market) | 12-16 weeks | NRR, gross margin, churn |
| Credit packaging | A/B test on credit bundles | 4-8 weeks | Credit utilization, upgrade rate |
| Annual vs. monthly | Default annual with monthly option | 8-12 weeks | Annual mix, LTV |

Pricing Review Cadence

  • Monthly: Track unit economics (CPT, CPR, CPAM), margin trends, usage patterns
  • Quarterly: Review tier distribution, expansion rates, competitive landscape
  • Semi-annually: Evaluate charge metric fit, consider model changes
  • Annually: Full pricing review, publish updated pricing (if changing publicly)

Quick Reference

| Decision | Framework | Key Metric |
|---|---|---|
| Which charge metric? | Consumption / Workflow / Outcome decision tree | Value measurability |
| Which product archetype? | Copilot / Agent / Service matrix | Degree of human involvement |
| Hybrid or pure model? | Pure if simple; hybrid if multiple value vectors | Revenue predictability vs. expansion |
| BYOK or managed? | BYOK decision framework (7 factors) | Customer type + margin goal |
| How many tiers? | Three (Starter, Pro, Enterprise) | Conversion rate per tier |
| Where to set price? | 1/3 to 1/10 of human equivalent cost | Willingness to pay vs. competitive set |
| How to improve margins? | Margin improvement stack (7 levers) | Gross margin trend, CPT |
| How to migrate pricing? | 6-phase migration playbook | Revenue retention during migration |
| How to comp sales? | GTM motion map by pricing model | Rep quota attainment, NRR |
| When to add BYOK? | When enterprise buyers demand it and you can hold margin | Platform ARR, BYOK adoption % |

Questions to Ask

  1. What does the human equivalent of your AI product cost the buyer today? (This anchors your price ceiling)
  2. What is your average cost per task/request at current volume? (Determines margin floor)
  3. Can the customer measure a clear outcome from your product, or is the value diffuse?
  4. What percentage of your revenue comes from your top 10% of customers? (Signals expansion opportunity)
  5. Do your customers have existing contracts with LLM providers? (BYOK indicator)
  6. What is your current gross margin, and what is your 12-month margin target?
  7. How does your buyer currently budget for this spend? (Seat budget, IT budget, department budget, project budget)
  8. What is your current churn rate, and does it correlate with pricing tier or usage level?
  9. Are competitors moving to outcome or usage pricing? How are their customers reacting?
  10. Do you have the billing infrastructure to support usage-based or outcome-based pricing?
  11. What is the simplest charge metric your buyer would understand on an invoice?
  12. How much pricing flexibility do existing contracts give you at renewal?
  13. What data do you have on willingness-to-pay from customer conversations or win/loss analysis?
  14. Is your sales team equipped to sell on value/outcomes, or are they trained on per-seat quotas?
  15. What is your model cost breakdown by task type, and which tasks have the highest margin?

Related Skills

| Skill | Relationship to AI Pricing |
|---|---|
| positioning-icp | ICP determines willingness-to-pay and which charge metric resonates |
| sales-motion-design | Pricing model dictates the sales motion, comp structure, and org design |
| solo-founder-gtm | Solo founders need the simplest viable pricing; start with one tier and iterate |
| gtm-metrics | Unit economics (CPT, CPR, CPAM) feed directly into pricing decisions |
| expansion-retention | Pricing structure determines expansion levers (usage growth, tier upgrades, new products) |
| gtm-engineering | Billing infrastructure must support the chosen pricing model (metering, credits, invoicing) |
