Definition
Forecast accuracy measures how closely a sales organization's predicted revenue matches the revenue it actually closes over a defined period — typically a quarter. It is usually expressed as a percentage, where 100% means the forecast was exactly right. An organization that forecasts $5M and closes $4M has 80% forecast accuracy. An organization that forecasts $5M and closes $6.5M also has a forecast accuracy problem — over-delivery is a forecasting miss, not a success.
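One common way to operationalize this symmetric definition is accuracy = 1 - |actual - forecast| / forecast. The sketch below assumes that convention; the formula and the floor at zero are illustrative choices, not a universal standard:

```python
def forecast_accuracy(forecast: float, actual: float) -> float:
    """Accuracy = 1 - |actual - forecast| / forecast, floored at 0.

    Over- and under-delivery are penalized symmetrically: a miss in
    either direction pulls accuracy below 100%.
    """
    if forecast <= 0:
        raise ValueError("forecast must be positive")
    return max(0.0, 1.0 - abs(actual - forecast) / forecast)

# The worked examples from the definition above:
print(forecast_accuracy(5_000_000, 4_000_000))  # 0.8 -> 80% (under-delivery)
print(forecast_accuracy(5_000_000, 6_500_000))  # 0.7 -> 70% (over-delivery is also a miss)
```

Under this convention the $6.5M over-delivery scores 70%, worse than the $4M shortfall's 80%. That is the point: the error is the distance from the number the business planned against, regardless of direction.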
Forecast accuracy is a lagging indicator of sales execution discipline. It does not cause good execution — it reveals whether good execution is happening. When pipeline stages are consistently defined, exit criteria are enforced, deal qualification is rigorous, and management inspection is structured, forecast accuracy improves as a natural consequence. When any of those components break down, forecast accuracy degrades.
The typical B2B SaaS organization operates at 30-45% forecast accuracy at the beginning of the quarter, improving to 70-80% by the final month. Organizations with mature sales execution systems can achieve 40-60% accuracy at the start of the quarter and 85-95% by month three. The difference is not better guessing — it is better pipeline data, which produces better signal.
Why It Matters
Forecast accuracy is the metric that connects the sales floor to the boardroom. It determines hiring plans, capacity models, cash flow projections, and investor communications. When forecast accuracy is poor, every downstream decision built on the forecast is wrong — and the compounding cost of those wrong decisions is enormous. A company that consistently misses forecast by 20% will make systematically wrong decisions about headcount, marketing spend, product investment, and capital allocation.
For PE-backed companies specifically, forecast accuracy is a proxy for management team credibility. A management team that cannot forecast within 15% of actual results over multiple quarters is either not inspecting the pipeline, not enforcing process, or not being honest about deal quality. None of those are good signals for an investor who is underwriting a growth plan.
What to Look For
- Accuracy tracked over multiple quarters — A single quarter's accuracy can be noise. The trend over 4-6 quarters reveals whether the organization has a systematic forecasting capability or is getting lucky
- Accuracy measured at multiple points in the quarter — Week 1, week 4, week 8, and week 12 accuracy reveals how quickly the organization can converge on the final number (see the sketch after this list)
- Accuracy decomposed by segment — Overall accuracy can mask segment-level problems. An org that forecasts enterprise within 5% but misses mid-market by 40% has a mid-market pipeline discipline problem
- Accuracy compared to pipeline coverage — High coverage (4x+) with low accuracy suggests pipeline quality issues. Low coverage (sub-2x) with high accuracy suggests either a very disciplined process or an artificially conservative forecast
- Methodology transparency — Can the organization explain how they get from pipeline data to a commit number? If the answer involves "gut feel" or "manager judgment" without structured criteria, the accuracy is accidental
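These inspection angles can all be computed from forecast snapshots captured at fixed points in the quarter. The sketch below is a minimal illustration under assumed data: the snapshot records, segment names, and dollar figures are hypothetical, and pipeline coverage is taken as open pipeline divided by the forecast of record (one convention among several; coverage against quota or remaining target also works):

```python
def forecast_accuracy(forecast: float, actual: float) -> float:
    # Same symmetric convention as the Definition sketch above.
    return max(0.0, 1.0 - abs(actual - forecast) / forecast)

# Hypothetical in-quarter snapshots: (week, segment, forecast $, open pipeline $).
# All names and figures are illustrative, not a standard schema.
snapshots = [
    (1, "enterprise", 3_000_000, 11_000_000),
    (1, "mid-market", 2_000_000,  9_500_000),
    (8, "enterprise", 3_300_000,  7_000_000),
    (8, "mid-market", 1_400_000,  4_200_000),
]
# What each segment actually closed by quarter end.
actuals = {"enterprise": 3_400_000, "mid-market": 1_000_000}

for week, segment, forecast, pipeline in snapshots:
    acc = forecast_accuracy(forecast, actuals[segment])
    coverage = pipeline / forecast  # coverage vs. the forecast of record
    print(f"week {week:>2}  {segment:<11} accuracy {acc:6.1%}  coverage {coverage:.1f}x")
```

Read against the list above: enterprise converges quickly on the final number and looks disciplined, while mid-market shows the high-coverage, low-accuracy pattern that points to pipeline quality rather than pipeline quantity.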
Red Flags
- Forecast accuracy has not been formally measured — the organization does not track it as a metric
- Accuracy is consistently below 50% at the midpoint of the quarter over multiple quarters
- The forecast is generated by summing "commit" flags that reps apply without defined criteria for what constitutes a commit
- Over-delivery is celebrated rather than investigated — the organization treats a downward miss as a problem but an upward miss as a win, instead of treating misses in both directions as forecasting failures
- No forecast methodology exists — the commit number is whatever the VP of Sales tells the CEO it will be
Related Terms
- Pipeline Discipline — the management practices that produce the pipeline data forecasting depends on
- Deal Stage Exit Criteria — the stage definitions that make pipeline stage data meaningful for forecasting
- CRM Hygiene — the data quality foundation without which forecasting is guesswork
- Deal Desk Review — the escalation mechanism that validates high-value deal forecasts