Megaprojects fail in remarkably consistent ways
The academic literature on megaprojects is brutal: roughly 65% exceed their original cost estimate by more than 25%, and the average schedule overrun is over 18 months. This is true across decades, geographies, and industries. The diagnoses are well understood — optimism bias in the planning team, strategic misrepresentation in the sanction case, and reference-class neglect — but the standard risk-register process at sanction does not correct for them.
Megaproject risk management means doing two things the risk register cannot: running probabilistic models that quantify what the cataloged risks actually do to the cost and schedule distribution, and applying reference-class forecasting that compares the project to a class of similar past projects whose outcomes are known. These are the two corrections that show measurable accuracy improvement in the academic record.
What megaproject risk management actually involves
Four practices, applied at sanction and re-applied at each stage gate. The risk register continues running in parallel; this is the analytic layer that the register has historically lacked.
Reference-class forecasting
Build a class of structurally similar past megaprojects — same industry, same scale, same geography where possible — and compare the bottom-up estimate against the actual outcome distribution of that class. When the bottom-up estimate disagrees materially, the planning team is almost certainly missing something the reference class is signaling.
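As a rough illustration of the mechanics, the sketch below compares a bottom-up estimate against an empirical overrun distribution. The numpy usage is standard, but the reference-class ratios, the 1.10 flag threshold, and the variable names are hypothetical placeholders, not Capital Project AI's calibration.

```python
import numpy as np

# Hypothetical actual-vs-estimate cost ratios for a reference class of
# completed projects (1.00 = delivered on the original estimate).
reference_overruns = np.array([1.02, 1.05, 1.08, 1.12, 1.15, 1.18, 1.22,
                               1.24, 1.27, 1.31, 1.33, 1.38, 1.45, 1.60])

bottom_up_estimate_usd = 4.2e9  # the project's own inside-view estimate

# Outside view: apply the class's empirical overrun distribution to the
# bottom-up estimate to get a reference-class cost range.
p50_uplift = np.percentile(reference_overruns, 50)
p80_uplift = np.percentile(reference_overruns, 80)

print(f"Outside-view P50 cost: ${bottom_up_estimate_usd * p50_uplift / 1e9:.2f}B")
print(f"Outside-view P80 cost: ${bottom_up_estimate_usd * p80_uplift / 1e9:.2f}B")

# Flag a material disagreement between the inside and outside view.
if p50_uplift > 1.10:
    print("Bottom-up estimate sits well below the reference-class median; "
          "find out what the class is signaling before sanction.")
```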
Probabilistic cost and schedule modeling
Each cost line and schedule activity becomes a distribution rather than a point. The output is the full P10 / P50 / P80 / P95 outcome range, plus the variance drivers. Sanctioning at P50, which is what most organizations unknowingly do, leaves an unfunded P50-to-P80 gap that accounts for most of the historical megaproject cost overruns.
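A minimal Monte Carlo sketch of the same idea, assuming a handful of hypothetical cost lines modeled as triangular distributions. The line items, ranges, and trial count are illustrative, and a real model would also carry correlations between lines.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000  # Monte Carlo trials

# Each cost line is a (low, most-likely, high) triangular distribution
# instead of a single point estimate. Figures are hypothetical, in $M.
cost_lines = {
    "site_and_civils":     (380, 450, 620),
    "long_lead_equipment": (900, 1_050, 1_400),
    "modular_fabrication": (1_100, 1_300, 1_900),
    "commissioning":       (200, 260, 450),
}

total = np.zeros(N)
for low, mode, high in cost_lines.values():
    total += rng.triangular(low, mode, high, size=N)

for p in (10, 50, 80, 95):
    print(f"P{p} total capex: ${np.percentile(total, p):,.0f}M")

# The unfunded exposure if the project is sanctioned at P50
# but the outcome lands at P80.
gap = np.percentile(total, 80) - np.percentile(total, 50)
print(f"P50-to-P80 gap: ${gap:,.0f}M")
```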
Optimism-bias adjustment
Engineering productivity, vendor lead times, weather windows, and regulatory review timing are systematically underestimated by planning teams. Capital Project AI applies empirically calibrated optimism-bias corrections to each input class, so the sanctioned plan reflects realistic — not aspirational — assumptions.
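One plausible shape for the correction step, sketched below. The input classes and uplift factors are invented for illustration, not the empirically calibrated values the product applies.

```python
# Hypothetical optimism-bias uplifts per input class, expressed as
# multipliers on the planning team's assumption (1.00 = no correction).
# Real calibrations would be derived from reference-class history.
bias_uplifts = {
    "engineering_productivity_hours": 1.15,
    "vendor_lead_time_weeks":         1.20,
    "weather_downtime_days":          1.25,
    "regulatory_review_months":       1.30,
}

planned_inputs = {
    "engineering_productivity_hours": 1_200_000,
    "vendor_lead_time_weeks":         60,
    "weather_downtime_days":          45,
    "regulatory_review_months":       14,
}

# The adjusted inputs feed the probabilistic model in place of the
# planning team's aspirational values.
for name, planned in planned_inputs.items():
    adjusted = planned * bias_uplifts[name]
    print(f"{name}: planned {planned:,} -> adjusted {adjusted:,.0f}")
```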
Stage-gate stress testing
At each gate, re-run the model with updated information and stress-test the project against the macro and execution scenarios that would change the answer. The output is an explicit "if X happens, this project becomes uneconomic" trigger list that survives turnover in the project team.
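A toy version of the stress-test loop and the trigger list it produces. The scenarios, NPV deltas, and hurdle are stand-ins chosen only to show the structure of the output.

```python
# Hypothetical stress scenarios applied at a gate review. Each scenario
# shifts the base-case NPV; the project "breaks" if NPV falls below the
# hurdle. All figures are illustrative stand-ins, in $M.
BASE_NPV = 650.0
HURDLE = 0.0

scenario_npv_deltas = {
    "gas price -20% for 5 years":     -720.0,
    "modular yard productivity -15%": -310.0,
    "12-month schedule slip":         -680.0,
    "carbon price doubles":           -150.0,
}

# Build the trigger list that survives turnover in the project team.
trigger_list = [
    (name, BASE_NPV + delta)
    for name, delta in scenario_npv_deltas.items()
    if BASE_NPV + delta < HURDLE
]

print("If any of these happen, the project becomes uneconomic:")
for name, stressed_npv in trigger_list:
    print(f"  {name}: NPV falls to ${stressed_npv:,.0f}M")
```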
Why Capital Project AI
- Built around the megaproject failure modes. Reference-class forecasting, probabilistic modeling, optimism-bias correction — the three corrections with measurable accuracy improvement in the published record.
- Complements your risk register. The register catalogs risks. We quantify what they do to the outcome distribution and surface the biases the register doesn't catch.
- Stage-gate-ready. Outputs designed for the sanction case and gate reviews — not for daily project-controls reporting.
- Built by an ex-Shell capital owner. Founded by an engineer with $800M of delivered megaproject experience and direct exposure to the failure modes.
Stress-test a megaproject before sanction
Upload the cost and schedule estimate — get the reference-class comparison, the probabilistic forecast, and the optimism-bias-adjusted recommendation in under a minute.
Open the Dashboard →
What it looks like in practice
An integrated energy company is approaching FID on a $4.2B LNG train. The bottom-up estimate from the joint EPC team puts capex at $4.2B and schedule at 56 months from FID to first cargo. Contingency at 12% covers the project to roughly P55 on the deterministic risk register. The board is asking whether the contingency is right.
Capital Project AI runs the project through a megaproject risk analysis. The reference class of 14 LNG trains of similar scale completed in the past 15 years has a mean cost overrun of 22% and a mean schedule overrun of 11 months. The probabilistic model puts the project's P50 capex at $4.7B (vs. the $4.2B bottom-up estimate) and P80 at $5.4B. Optimism-bias correction on engineering productivity and modular fabrication adds another $200M to P50. Three drivers explain 70% of the variance: modular fabrication yard productivity, long-lead cryogenic equipment, and final commissioning. The recommendation: sanction at a P70 capex of $5.0B (not $4.2B), hold $300M of management reserve against the three identified drivers, and fund an early-action plan for parallel modular fabrication that buys down 35% of the variance on that path for $80M of incremental capex.
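For readers who want the percentile arithmetic, here is a back-of-envelope lognormal fit through the modeled P50 and P80 quoted above. The full engine works from per-line-item distributions and the optimism-bias adjustment, so this toy fit is indicative only and will not reproduce the example's exact recommendation.

```python
from math import exp, log
from statistics import NormalDist

# Modeled points from the example above, in $B.
p50, p80 = 4.7, 5.4

# Fit a lognormal through the two points: ln(cost) ~ Normal(mu, sigma).
z = NormalDist()
mu = log(p50)
sigma = (log(p80) - log(p50)) / z.inv_cdf(0.80)

def capex_at(p: float) -> float:
    """Capex at percentile p under the two-point lognormal fit."""
    return exp(mu + z.inv_cdf(p) * sigma)

# The funding question the board is actually asking: what does each
# sanction point cost, and how big is the gap left below P80?
for p in (0.50, 0.70, 0.80, 0.90):
    print(f"P{int(p * 100)} capex: ${capex_at(p):.2f}B")
print(f"P50-to-P80 funding gap: ${p80 - p50:.2f}B")
```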
The same engine powers capital project management software at the portfolio layer and Monte Carlo project simulation at the level of the underlying math. For the scheduling layer, see AI project scheduling.
Frequently asked questions
What counts as a megaproject?
Conventionally, capital projects above $1B in total installed cost. The risk profile shifts qualitatively at that scale: the project becomes too large to fail gracefully, the schedule extends beyond the horizon of most macro forecasts, and traditional bottom-up risk registers consistently understate the actual outcome distribution.
Why do megaprojects fail so consistently?
Three reasons identified in the academic literature: optimism bias in the planning team, strategic misrepresentation in the sanction case, and reference-class neglect — the failure to compare the project to similar past projects that all overran. Probabilistic modeling and reference-class forecasting are the two corrections that show measurable accuracy improvement.
How does this fit with our existing risk-register process?
It complements the risk register. The register catalogs known risks and tracks mitigation actions. Capital Project AI runs the probabilistic model that quantifies what those risks actually do to the cost and schedule distribution, and surfaces the systemic biases — like optimism on engineering productivity — that risk registers usually miss.