
Monte Carlo analysis for schedules: more than just a mouse click

Project managers who model schedules in Analytica benefit from its built-in Monte Carlo engine, which makes running simulations effortless from the user’s standpoint. However, deciding whether to do a Monte Carlo analysis involves more than a mouse click. The starting point is understanding what the technique can do. Project scheduling, for instance, can benefit from the insights it offers, but not all project managers ‘get’ it. Even so, working through a Monte Carlo analysis can constructively expand the way project managers think about schedules.

Getting to the click

The Monte Carlo analysis process divides into a few main steps. First, build an influence diagram in Analytica to show the factors involved and whether they influence, or are influenced by, one another. Next, model the relationships between those factors where they exist. Then assign a suitable probability distribution to each uncertain factor. At that point, you are one mouse click away in Analytica from running your Monte Carlo simulation.
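The steps above can be sketched in ordinary Python to show the underlying logic (an Analytica model would express this as an influence diagram rather than code). The task names, dependency structure, and triangular parameters here are purely illustrative assumptions, not part of any real schedule:

```python
import random

def sample_project_duration():
    # Factors and their relationships: 'design' feeds 'build', while
    # 'procure' runs in parallel; 'test' starts only when both are done
    # (a merge point). Durations are in days.
    design = random.triangular(8, 15, 10)    # args: low, high, mode
    build = random.triangular(12, 25, 15)
    procure = random.triangular(5, 30, 7)
    test = random.triangular(3, 8, 4)
    return design + max(build, procure) + test

# The simulation run: sample the whole schedule many times, then read
# percentiles off the sorted results.
runs = sorted(sample_project_duration() for _ in range(10_000))
p50 = runs[len(runs) // 2]
p90 = runs[int(0.9 * len(runs))]
print(f"median finish: {p50:.1f} days, 90th percentile: {p90:.1f} days")
```

The output is a distribution of finish dates rather than a single number, which is exactly what the single-click simulation in Analytica produces for a model built this way.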

Careful with those estimates

Monte Carlo simulation has the edge over methods like PERT (Program Evaluation and Review Technique), which collapses three-point estimates into single expected durations along the critical path and, as a result, often underestimates the true project length. Simulation, however, still has to be based on realistic representations of the parameters involved, including their degree of uncertainty. Monte Carlo analysis will not rescue a model that starts off with flawed estimates.
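One reason expected-value methods underestimate project length is merge bias: when parallel paths join, the expected finish of the merge point is later than the maximum of the paths' expected finishes. A minimal illustration, with two hypothetical parallel tasks whose distributions are assumed for the example:

```python
import random

# Two parallel tasks, each symmetric triangular with mean 10 days.
# A deterministic plan says the merge point finishes at max(10, 10) = 10.
N = 20_000
det = max(10.0, 10.0)
sim = sum(max(random.triangular(5, 15, 10), random.triangular(5, 15, 10))
          for _ in range(N)) / N
print(f"deterministic: {det} days, simulated mean at merge: {sim:.1f} days")
```

The simulated mean comes out noticeably above 10 days: whichever task happens to run long sets the merge date, so the average finish is later than either task's average, even though the individual estimates are unbiased.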

Changing the probability distributions

One suggestion for improving results is to broaden the range of candidate probability distributions for the different tasks of a project. The immediate assumption may be to use a closed-ended triangular distribution, since this matches the view of a task as having fixed earliest and latest finish dates. In a perfect world this might be true – but in such a world, uncertainty would not exist and neither would the need for Monte Carlo simulation. In the real world, project tasks can overrun badly: an open-ended distribution, such as a lognormal, may therefore be better suited to modeling the possibility of the upper task length limit being pushed back.
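The practical difference shows up in the tail. In this sketch, a closed triangular estimate and an open-ended lognormal with a similar central value (the specific parameters are assumptions for illustration) are compared on the chance of a severe overrun:

```python
import math
import random

N = 50_000
# Closed-ended estimate: a task believed to take 5-15 days, most likely 9.
tri = [random.triangular(5, 15, 9) for _ in range(N)]
# Open-ended alternative: lognormal with median ~9 days, moderate spread.
logn = [random.lognormvariate(math.log(9), 0.35) for _ in range(N)]

over_20_tri = sum(d > 20 for d in tri) / N    # impossible: hard cap at 15
over_20_logn = sum(d > 20 for d in logn) / N  # small but nonzero
print(f"P(>20 days): triangular {over_20_tri}, lognormal {over_20_logn:.3f}")
```

The triangular model assigns exactly zero probability to any duration beyond its upper bound, while the lognormal keeps a small but real chance of a large overrun, which is often the more honest representation of how tasks behave.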

Moving with the times

Past criticisms of using Monte Carlo analysis to evaluate project schedules included the computational power required and the many decisions involved in assigning probability distributions to different variables. Understanding the power of the results was another. Even the ability to state a percentage chance of a project finishing within a certain time left many project managers reluctant to abandon their previous ‘hard’ dates and estimates. Today, however, Analytica makes light work of the computation, lets project managers choose, for each variable, between assigning a probability distribution and leaving it deterministic, and shows the results in an easy-to-grasp graphical format.

If you’d like to know how Analytica, the modeling software from Lumina, can help you to better model project schedules and related aspects, then try a free trial of Analytica to see what it can do for you.
