The simulation isn't the hard part
Monte Carlo is mature technology. The simulation engine is not what's broken in most capital project risk analyses. What's broken is the inputs: triangular distributions guessed at by the activity owner, no correlations between dependent activities, and outputs that get summarized as a single P50 number on a slide rather than as a decision-ready variance decomposition.
Capital Project AI does the simulation correctly: calibrated input distributions from your firm's historical data and a curated reference dataset, explicit correlation handling between activities and cost lines, convergence monitoring instead of arbitrary iteration counts, and a variance decomposition that tells the project team which risks are actually moving the answer.
What "Monte Carlo done right" actually means
Five things that distinguish a useful simulation from a wallpaper exercise dressed up as math.
Calibrated input distributions
Inputs come from your firm's historical productivity, cost, and schedule data — not from triangular guesses by activity owners. Where you don't have data, the platform falls back to a curated reference dataset of public capital project outcomes calibrated by industry and scope.
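As an illustration of the idea (not the platform's implementation), a minimal sketch in Python: fit a lognormal to historical activity durations by log-moments, and fall back to reference parameters when the sample is too small. The data, the minimum sample size, and the fallback parameters are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical durations (weeks) for one activity class.
historical = np.array([14.2, 16.8, 15.1, 19.3, 13.9, 17.5, 21.0, 16.2])

def calibrate_lognormal(samples, min_n=5, fallback=(np.log(16.0), 0.25)):
    """Fit a lognormal by log-moments; fall back to reference parameters
    (invented here for illustration) when the sample is too small."""
    if len(samples) < min_n:
        return fallback
    logs = np.log(samples)
    return logs.mean(), logs.std(ddof=1)

mu, sigma = calibrate_lognormal(historical)
draws = rng.lognormal(mu, sigma, size=100_000)
p50, p80 = np.percentile(draws, [50, 80])
```

The point of the fallback branch is the point of the reference dataset: an activity with three historical observations should inherit a distribution from comparable scope, not a guess.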
Correlation matrices
Activities exposed to the same vendor, crew, weather window, or commodity price are correlated. Ignoring that correlation can understate tail risk by 20-40%. Capital Project AI estimates the correlation structure from historical data and applies it in the simulation.
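The effect is easy to demonstrate with a toy example (all parameters invented, not the platform's calibrated values): two cost lines exposed to the same fabrication yard, sampled once independently and once with an assumed correlation of 0.6 applied through a Cholesky factor. The correlated run produces a fatter tail.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Two cost lines exposed to the same fabrication yard (assumed rho = 0.6).
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
L = np.linalg.cholesky(cov)

z_indep = rng.standard_normal((n, 2))
z_corr = z_indep @ L.T  # same underlying draws, now correlated

def to_cost(z, mu=np.log(50.0), sigma=0.3):
    # Map standard normals to lognormal cost lines ($M, illustrative).
    return np.exp(mu + sigma * z)

total_indep = to_cost(z_indep).sum(axis=1)
total_corr = to_cost(z_corr).sum(axis=1)

p95_indep = np.percentile(total_indep, 95)
p95_corr = np.percentile(total_corr, 95)
```

Reusing the same underlying normals for both scenarios is a deliberate choice: it isolates the effect of correlation from sampling noise.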
Convergence monitoring
The simulation stops when additional iterations no longer change P80 and P95 within your specified precision — not at an arbitrary "10,000 iterations" stopping rule that may be over- or under-converged depending on the underlying variance structure.
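In code, that stopping rule looks roughly like the sketch below (the simulated distribution and tolerances are invented for illustration): simulate in batches and stop when the cumulative P80 and P95 both move by less than the specified relative tolerance between batches.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_batch(size):
    # Stand-in for one batch of total-cost draws ($M, invented parameters).
    return rng.lognormal(np.log(1000.0), 0.2, size)

def run_until_converged(tol=0.001, batch=10_000, max_draws=1_000_000):
    """Stop when P80 and P95 both move by less than `tol` (relative)
    between successive cumulative estimates."""
    draws = simulate_batch(batch)
    prev = np.percentile(draws, [80, 95])
    while len(draws) < max_draws:
        draws = np.concatenate([draws, simulate_batch(batch)])
        cur = np.percentile(draws, [80, 95])
        if np.all(np.abs(cur - prev) / prev < tol):
            return draws, cur
        prev = cur
    return draws, prev

draws, (p80, p95) = run_until_converged()
```

A fixed "10,000 iterations" rule would stop here regardless of whether these estimates had stabilized; the adaptive rule stops when the answer does.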
Variance decomposition
The output isn't just the distribution. It's the decomposition: which inputs explain the spread between P50 and P90? Long-lead deliveries? Engineering productivity? Permit timing? Three or four drivers usually explain 70%+ of the variance, and management attention should be there.
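One generic way to compute such a decomposition (a common first-pass sensitivity measure, not necessarily the platform's exact method) is the squared Spearman rank correlation of each input with the total, normalized to sum to one. The four inputs below are invented stand-ins with deliberately different spreads.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Four illustrative risk inputs with different spreads (arbitrary units).
inputs = {
    "long_lead": rng.lognormal(0, 0.5, n),
    "eng_productivity": rng.lognormal(0, 0.3, n),
    "permits": rng.lognormal(0, 0.2, n),
    "weather": rng.lognormal(0, 0.05, n),
}
total = sum(inputs.values())

def rank(x):
    # Rank transform: position of each value in sorted order.
    return np.argsort(np.argsort(x))

def contributions(inputs, total):
    """Squared Spearman rank correlation of each input with the total,
    normalized so the contributions sum to 1."""
    r_total = rank(total)
    raw = {k: np.corrcoef(rank(v), r_total)[0, 1] ** 2
           for k, v in inputs.items()}
    s = sum(raw.values())
    return {k: v / s for k, v in raw.items()}

contrib = contributions(inputs, total)
```

Sorting the result descending gives the management view: the widest input dominates, and the bottom entries can be safely deprioritized.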
Buy-down quantification
For each major variance driver, the platform computes the cost of buying it down (parallel fabrication, accelerated vendor commitment, redundant resources) and the resulting reduction in P80. Management reserve becomes a quantified investment decision instead of a percentage debate.
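The arithmetic behind a buy-down decision can be sketched as follows (every number here is invented for illustration): compare the P80 of the baseline against the P80 of a scenario where the mitigation spend is added to cost but the targeted risk line is narrowed.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

def total_cost(fab_sigma):
    # Total = base scope + fabrication risk line ($M, invented parameters).
    base = rng.lognormal(np.log(900.0), 0.05, n)
    fab = rng.lognormal(np.log(120.0), fab_sigma, n)
    return base + fab

# Baseline: wide fabrication risk.
p80_base = np.percentile(total_cost(fab_sigma=0.40), 80)

# Mitigated: parallel fabrication (assumed $14M) narrows the fab risk line.
buydown_cost = 14.0
p80_mitigated = np.percentile(total_cost(fab_sigma=0.10) + buydown_cost, 80)

p80_reduction = p80_base - p80_mitigated
```

If the P80 reduction (net of the mitigation spend, which is already included above) is positive and large relative to the $14M, the buy-down is a quantified investment rather than a contingency argument.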
Why Capital Project AI
- The hard part is the inputs. Calibrated distributions, real correlations — the parts of Monte Carlo that determine whether the output is a forecast or a guess.
- Decision-ready outputs. Variance decomposition and buy-down analysis, not just a probability distribution. The project team knows what to do next.
- Fast enough to use live. 100,000+ iterations in under two minutes. Re-run with revised inputs during the planning meeting itself.
- Built by an ex-Shell capital owner. Founded by an engineer who has lived through the gap between sanctioned plans and actual outcomes at $800M megaproject scale.
Run a Monte Carlo with calibrated inputs
Upload your schedule and cost estimate — get the converged distribution, variance decomposition, and buy-down options in under two minutes.
Open the Dashboard →
What it looks like in practice
A pipeline operator is sanctioning a $1.6B compression expansion. The existing risk analysis was done in spreadsheet Monte Carlo with triangular distributions guessed at by activity owners. P80 capex came out at $1.78B, suggesting 12% cost contingency was sufficient. The board is asking whether to trust the number.
Capital Project AI re-runs the simulation with calibrated inputs from the operator's historical pipeline projects and explicit correlation handling. P80 capex moves to $1.94B — a $160M difference driven by correlation between previously independent inputs (the same fabrication yard handles three skid packages; the same drill crew runs four river crossings). Variance decomposition identifies four drivers explaining 78% of the spread: skid fabrication productivity, river-crossing schedule risk, compressor commissioning duration, and right-of-way acquisition timing. Buy-down analysis: $14M of incremental spend on parallel skid fabrication and an early-action right-of-way program reduces P80 by $90M. Recommendation: sanction with 18% contingency (not 12%), commit the $14M of buy-down investment, and revisit at the next gate.
The same engine powers megaproject risk management for the largest projects and AI project scheduling on the activity layer. For the broader portfolio question, see capital project management software.
Frequently asked questions
Doesn't every project controls team already do Monte Carlo?
Many do. The problem is rarely the simulation engine; the problem is the input distributions. Most teams use triangular distributions guessed at by the activity owner. Capital Project AI calibrates input distributions against your firm's historical data and a curated reference dataset — which is what determines whether the output is a forecast or a number-shaped guess.
How are correlations handled?
Explicitly. Two activities exposed to the same vendor, the same crew, or the same weather window are correlated, and ignoring that correlation produces materially understated tail risk. Capital Project AI estimates correlation matrices from historical data and applies them in the simulation.
How do you decide how many iterations to run?
Enough that the P80 and P95 stabilize within the precision you need — typically 50,000 to 200,000 iterations for a complex project. The platform monitors convergence as it runs and stops automatically when additional iterations stop changing the answer.