estimation-patterns

Practical estimation techniques for software tasks — methods comparison, decomposition, complexity multipliers, buffer calculation, bias awareness, and communication strategies. Use when estimating features, sprint planning, or presenting timelines to stakeholders.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "estimation-patterns" with this command: npx skills add wpank/estimation-patterns

Estimation Patterns (Meta-Skill)

Systematic approaches for producing accurate, defensible software estimates.

Installation

OpenClaw / Moltbot / Clawbot

npx clawhub@latest install estimation-patterns

When to Use

  • Estimating a feature, bug fix, or project timeline
  • Breaking down work for sprint planning or roadmap forecasting
  • Presenting estimates to stakeholders or product managers
  • Reviewing historical accuracy to calibrate future estimates
  • Noticing a pattern of missed deadlines or blown budgets

Estimation Methods

Choose the method that matches your context and audience.

| Method | Best For | Granularity | Pros | Cons |
|---|---|---|---|---|
| T-Shirt Sizing | Roadmap planning, backlog grooming | XS, S, M, L, XL | Fast, low-friction, good for relative ranking | Not actionable for scheduling |
| Story Points | Sprint planning, team velocity | Fibonacci (1-21) | Abstracts away individual speed, tracks velocity | Meaningless outside the team, gaming risk |
| Time-Based | Client quotes, contractor work | Hours / days | Universally understood, maps to budgets | Anchoring bias, implies false precision |
| Three-Point | High-uncertainty tasks | Min / likely / max | Captures uncertainty range, enables PERT | Requires discipline to set honest bounds |
| Reference Comparison | Recurring task types | Relative to past | Grounded in real data, hard to argue with | Requires historical records, breaks on novelty |

Three-point formula (PERT):

Expected = (Optimistic + 4 x Likely + Pessimistic) / 6
Standard Deviation = (Pessimistic - Optimistic) / 6

Use the standard deviation to express confidence ranges (e.g., "3-5 days at 68% confidence, 2-6 days at 95%").
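A minimal sketch of the PERT formulas above as code (function names are illustrative, no external dependencies):

```python
def pert(optimistic, likely, pessimistic):
    """Three-point (PERT) expected value and standard deviation."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

def confidence_range(optimistic, likely, pessimistic, sigmas=1):
    """Range around the expected value: ~68% at 1 sigma, ~95% at 2."""
    expected, sd = pert(optimistic, likely, pessimistic)
    return expected - sigmas * sd, expected + sigmas * sd
```

For a task estimated at 2 / 4 / 6 days, this yields an expected 4 days with a standard deviation of about 0.67, i.e. roughly 3.3-4.7 days at 68% confidence.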


Task Decomposition

Break work down until every sub-task is < 4 hours of effort. Anything larger hides unknowns.

| Level | Example | Target Size |
|---|---|---|
| Epic | User authentication system | 2-6 weeks |
| Feature | OAuth2 login with Google | 3-10 days |
| Task | Implement callback handler | 1-3 days |
| Sub-task | Parse and validate OAuth token | 1-4 hours |
| Atomic step | Write token expiry check function | 30-90 minutes |

Decomposition checklist:

  1. Can I describe what "done" looks like in one sentence?
  2. Is there exactly one unknown, or zero?
  3. Could a teammate pick this up without a walkthrough?
  4. Is it under 4 hours? If no — split again.

If you cannot decompose a task, it signals a spike is needed. Timebox the spike (2-4 hours), then re-estimate.
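The checklist above can be mechanized as a simple validator. This sketch assumes hypothetical field names (`done_definition`, `unknowns`, `hours`), not a prescribed schema:

```python
MAX_SUBTASK_HOURS = 4  # threshold from the decomposition rule above

def needs_split(task):
    """Run the decomposition checklist; returns the list of failed checks."""
    problems = []
    if not task.get("done_definition"):
        problems.append("no one-sentence definition of done")
    if task.get("unknowns", 0) > 1:  # exactly one unknown, or zero, is fine
        problems.append("more than one unknown")
    if task.get("hours", float("inf")) > MAX_SUBTASK_HOURS:
        problems.append("over 4 hours, split again")
    return problems
```

An empty result means the sub-task is small enough to estimate; any entries mean split again or schedule a timeboxed spike.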


Complexity Multipliers

Apply these multipliers to your base estimate when complexity factors are present. Multipliers stack multiplicatively.

| Factor | Multiplier | Rationale |
|---|---|---|
| New technology / stack | 1.5x | Learning curve, unexpected gotchas, doc-hunting |
| Unclear requirements | 2.0x | Discovery work, rework cycles, stakeholder alignment |
| Legacy code | 1.5x | Undocumented behavior, fragile tests, hidden coupling |
| Cross-team dependency | 1.5x | Coordination overhead, blocking, API negotiation |
| First-time task | 2.0x | No reference point, unknown unknowns dominate |
| Regulatory / compliance | 1.5x | Audit trails, review gates, documentation overhead |

Example: A 2-day base estimate on legacy code (1.5x) with unclear requirements (2.0x) becomes 2 x 1.5 x 2.0 = 6 days.

Rule: Never apply more than 3 multipliers — if that many factors converge, the task needs a spike or a scope reduction, not a bigger number.
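The stacking rule and the three-factor cap can be sketched as follows (the factor keys mirror the table above and are illustrative):

```python
MULTIPLIERS = {
    "new_tech": 1.5,
    "unclear_requirements": 2.0,
    "legacy_code": 1.5,
    "cross_team": 1.5,
    "first_time": 2.0,
    "compliance": 1.5,
}

def apply_multipliers(base_days, factors):
    """Stack complexity multipliers multiplicatively, capped at 3 factors."""
    if len(factors) > 3:
        raise ValueError("more than 3 factors: spike or reduce scope instead")
    estimate = base_days
    for factor in factors:
        estimate *= MULTIPLIERS[factor]
    return estimate
```

The worked example above becomes `apply_multipliers(2, ["legacy_code", "unclear_requirements"])`, which returns 6 days.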


Buffer Calculation

Raw estimates are point predictions. Reality is a distribution.

| Buffer Type | Rule of Thumb | When to Apply |
|---|---|---|
| Known unknowns | +20% of total estimate | Integration points, third-party APIs, minor gaps |
| Unknown unknowns | +50% of total estimate | New domain, first release, greenfield system |
| Team velocity factor | ÷ focus ratio (e.g., 0.7) | Account for meetings, reviews, context switching |
| Sequential dependency | +10% per handoff | Each team/person boundary adds coordination drag |

Effective estimate formula:

Effective = (Base Estimate x Multipliers) / Focus Ratio + Buffer

Focus ratio guidelines:

| Scenario | Typical Focus Ratio |
|---|---|
| Dedicated to one project | 0.75-0.85 |
| Split across 2 projects | 0.50-0.60 |
| On-call rotation active | 0.60-0.70 |
| Heavy meeting load (> 3h/day) | 0.45-0.55 |
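The effective-estimate formula above can be sketched as one function. Treating the buffer as a fraction of the adjusted total is an assumption, chosen to match the percentage buffers in the table:

```python
def effective_estimate(base, multipliers=(), focus_ratio=1.0, buffer_pct=0.0):
    """Effective = (Base x Multipliers) / Focus Ratio + Buffer."""
    adjusted = base
    for m in multipliers:
        adjusted *= m          # stack complexity multipliers
    adjusted /= focus_ratio    # convert effort-days to calendar-days
    return adjusted * (1 + buffer_pct)  # buffer as a fraction of the adjusted total
```

A 2-day base on legacy code (1.5x), at a 0.8 focus ratio with a 20% known-unknowns buffer, comes out to 4.5 days.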

Historical Calibration

Track actual vs estimated to improve over time. This is the single most effective way to get better at estimation.

Tracking table:

| Task | Estimated | Actual | Ratio (A/E) | Notes |
|---|---|---|---|---|
| Auth flow | 3 days | 5 days | 1.67 | OAuth docs were outdated |
| Dashboard charts | 5 days | 4 days | 0.80 | Reused existing component |
| DB migration | 2 days | 6 days | 3.00 | Discovered data quality issues |

Accuracy ratio: Calculate your rolling average of Actual / Estimated over the last 10-20 tasks.

  • Ratio < 0.8 — you're overestimating (sandbagging or excessive buffers)
  • Ratio 0.8-1.2 — well calibrated
  • Ratio > 1.2 — you're underestimating (apply the ratio as a correction factor)

Calibration action: Multiply future estimates by your rolling accuracy ratio until it converges toward 1.0.
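A sketch of the rolling accuracy ratio and the correction step, assuming history is recorded as (estimated, actual) pairs:

```python
def accuracy_ratio(history, window=15):
    """Rolling mean of Actual / Estimated over the last `window` tasks."""
    recent = history[-window:]
    return sum(actual / estimated for estimated, actual in recent) / len(recent)

def calibrated(raw_estimate, history):
    """Apply the accuracy ratio as a correction factor when poorly calibrated."""
    ratio = accuracy_ratio(history)
    if ratio < 0.8 or ratio > 1.2:
        return raw_estimate * ratio
    return raw_estimate  # within the well-calibrated band, leave it alone
```

Using the three rows from the tracking table, the ratio is about 1.82, so a raw 3-day estimate is corrected to roughly 5.5 days.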


Common Estimation Biases

Recognize these cognitive traps — awareness alone reduces their effect.

| Bias | Description | Mitigation |
|---|---|---|
| Planning Fallacy | Assuming best-case scenario despite past evidence | Use historical data, not intuition |
| Anchoring | First number heard dominates all subsequent estimates | Estimate independently before discussing |
| Optimism Bias | "It'll be simpler than last time" | Apply the three-point method, honor the pessimistic bound |
| Scope Creep | Estimate stays fixed while scope grows | Re-estimate when scope changes, always |
| Hofstadter's Law | "It always takes longer, even when you account for it" | Add buffer, then add more buffer for novel work |
| Dunning-Kruger | Novices underestimate; experts sometimes overestimate | Cross-check with a second estimator |
| Sunk Cost Pressure | Refusing to re-estimate because the original was "approved" | Treat estimates as living artifacts, update often |

Estimation by Task Type

Use these ranges as starting heuristics, then adjust with multipliers and historical data.

| Task Type | Typical Range | Key Variables |
|---|---|---|
| Bug fix (isolated) | 2-8 hours | Reproducibility, code familiarity, test coverage |
| Bug fix (systemic) | 1-3 days | Root cause depth, blast radius, regression risk |
| Small feature | 1-3 days | Spec clarity, UI complexity, number of endpoints |
| Medium feature | 3-10 days | Cross-cutting concerns, data model changes |
| Large feature | 2-4 weeks | Architecture decisions, team coordination |
| Refactor (local) | 1-3 days | Test coverage, coupling, blast radius |
| Refactor (systemic) | 1-4 weeks | Number of callers, migration strategy needed |
| Spike / research | 2-8 hours (timeboxed) | Always timebox — output is knowledge, not code |
| DevOps / infra | 1-5 days | Provider docs quality, IAM complexity, testing |

Communication

How you present an estimate matters as much as the number itself.

Always present as a range, never a single number:

  • Bad: "It'll take 5 days."
  • Good: "3-7 days, most likely 5. The range depends on the payment API response format — I'll know more after the spike."

Confidence levels:

| Confidence | What It Means | When to Use |
|---|---|---|
| High (±15%) | Well-understood scope, done similar before | Familiar task, clear spec |
| Medium (±30%) | Some unknowns, reasonable decomposition | Most sprint-level estimates |
| Low (±50%+) | Significant unknowns, rough order of magnitude | Roadmap forecasts, presale quotes |
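One way to turn a likely value and a confidence level into the range phrasing above; the spread percentages mirror the table, and the exact wording is illustrative:

```python
CONFIDENCE_SPREAD = {"high": 0.15, "medium": 0.30, "low": 0.50}

def present(likely_days, confidence):
    """Render an estimate as a range plus confidence level, never a point."""
    spread = CONFIDENCE_SPREAD[confidence]
    lo = likely_days * (1 - spread)
    hi = likely_days * (1 + spread)
    return f"{lo:.1f}-{hi:.1f} days, most likely {likely_days} ({confidence} confidence)"
```

For example, `present(5, "medium")` renders "3.5-6.5 days, most likely 5 (medium confidence)".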

Stakeholder communication rules:

  1. State the range and the confidence level together
  2. Name the top 1-3 risks that could push toward the upper bound
  3. Offer to de-risk with a timeboxed spike before committing
  4. Explicitly state what is not included (e.g., "does not include QA, deployment, or docs")
  5. Update estimates proactively when new information surfaces — don't wait until the deadline

Anti-Patterns

| Anti-Pattern | Why It's Harmful | Better Approach |
|---|---|---|
| Padding silently | Erodes trust when discovered; hides real uncertainty | Use explicit buffers with stated rationale |
| Sandbagging | Destroys velocity data; breeds complacency | Track accuracy ratio, aim for calibration |
| Not decomposing | Large estimates hide unknowns and compound errors | Break to < 4-hour sub-tasks, estimate bottom-up |
| Single-point estimates | Implies false certainty, no room for variance | Always give a range with confidence level |
| Estimating under pressure | Anchoring to what the stakeholder wants to hear | Ask for time to decompose; never estimate on the spot |
| Copy-paste estimates | Every task has different context and risk profile | Estimate fresh, use references as starting points only |
| Ignoring rework cycles | First pass is rarely final — reviews, feedback, QA | Factor in at least one review-and-revise loop |

NEVER Do

  1. NEVER give a single-number estimate without a range — it communicates false precision and sets you up for failure
  2. NEVER estimate a task you haven't decomposed — large estimates are guesses wearing a suit
  3. NEVER let an old estimate stand after scope changes — estimates are invalidated the moment requirements shift
  4. NEVER estimate in someone else's units — your days are not their days; clarify assumptions about focus time and interrupts
  5. NEVER skip recording actuals — estimation without feedback is astrology, not engineering
  6. NEVER commit to an estimate made under pressure — say "let me break this down and get back to you in an hour"
  7. NEVER treat an estimate as a promise or a deadline — estimates are probabilistic forecasts, not contracts

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

Ai Competitor Analyzer

Provides AI-driven competitor analysis with automated batch processing, improving analysis efficiency and professionalism for enterprises and professional teams.

Registry Source · Recently Updated
General

Ai Data Visualization

Provides automated AI analysis and multi-format batch processing, significantly improving data-visualization efficiency and reducing cost; suitable for enterprise and individual users.

Registry Source · Recently Updated
General

Ai Cost Optimizer

Provides AI model cost-optimization plans based on budget and task requirements, calculating savings and guiding OpenClaw configuration and model-switching strategy.

Registry Source · Recently Updated