Where does the uncertainty lie with experts – in the situation being assessed, or in the assessment itself? Expert elicitation and probability assessment can be a useful adjunct to empirical data on a problem or opportunity. However, their inherent subjectivity brings its own challenges. How expert opinions or beliefs are combined – and how the experts are selected in the first place – can have a major effect on the quality of the outcome.
Getting to grips with probability distribution functions
The preliminary result of expert elicitation and probability assessment is a probability density function (PDF) that represents the expert’s belief about the matter at hand. The process for arriving at such a PDF can vary, although generally accepted methods exist, such as the Stanford/SRI Protocol. To keep expert judgments comparable, the quantity being assessed, its scale and its units are defined in the same way for every expert. The expert’s opinion is then modeled by pinning down the extremes of his or her degree of belief, together with the shape and percentiles of the PDF. Analytica gives modelers a wide range of ways to define a particular probability distribution and to display it graphically, so the expert can confirm that it is a fair representation of his or her degree of belief.
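As a rough illustration of that step (a minimal Python sketch, not Analytica syntax), one common approach is to fit a parametric distribution to a few elicited percentiles and then show the fitted percentiles back to the expert for confirmation. The quantity, the elicited values and the choice of a lognormal shape below are all hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical elicitation: the expert's 5th, 50th and 95th percentiles
# for some positive, right-skewed quantity (e.g. annual demand in units).
probs = np.array([0.05, 0.50, 0.95])
elicited = np.array([120.0, 400.0, 1500.0])

# A lognormal quantile is exp(mu + sigma * z_p), so a linear fit of
# log(quantile) against the standard-normal score z_p recovers mu and sigma.
z = stats.norm.ppf(probs)
sigma, mu = np.polyfit(z, np.log(elicited), 1)
fitted = stats.lognorm(s=sigma, scale=np.exp(mu))

# Show the implied percentiles back to the expert, including the extremes,
# so they can confirm the curve fairly represents their degree of belief.
for p in (0.01, 0.05, 0.50, 0.95, 0.99):
    print(f"P{int(p * 100):02d}: {fitted.ppf(p):8.1f}")
```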
Combining degrees of belief from different experts
From a modeling point of view, Analytica makes it easy to combine probability distribution functions and model an outcome based on this aggregation. Uncertainty can be evaluated; sensitivity to inputs detected; and an importance analysis run to see how inputs compare in terms of their influence on the overall result. However, the inputs themselves are conditioned by the subjectivity inherent in expert elicitation and probability assessment. Further points to note are:
- The proportion of experts with the same view is not necessarily an indication of the chance that view is correct
- If experts hold differing opinions that cannot be reconciled, different weights may be attributed to their probability distributions before combining them (a sketch of such weighted pooling follows below). Those weights should reflect each expert’s competence – competence being, again, a subjective measure…
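The sketch below (plain Python, not Analytica syntax) shows one simple way to do this: a linear opinion pool, where the combined belief is a weighted mixture of the experts’ PDFs. The three expert distributions, their names and the competence weights are all hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical expert beliefs about the same quantity.
experts = {
    "expert_a": stats.lognorm(s=0.4, scale=300.0),
    "expert_b": stats.lognorm(s=0.7, scale=450.0),
    "expert_c": stats.norm(loc=500.0, scale=150.0),
}
# Weights intended to reflect assessed competence -- itself a subjective call.
weights = {"expert_a": 0.5, "expert_b": 0.3, "expert_c": 0.2}

# Evaluate the weighted mixture density on a common grid.
grid = np.linspace(0.0, 3000.0, 6001)
pooled_pdf = sum(weights[name] * dist.pdf(grid) for name, dist in experts.items())

# Integrate to a CDF and read off the combined percentiles.
pooled_cdf = np.cumsum(pooled_pdf) * (grid[1] - grid[0])
pooled_cdf /= pooled_cdf[-1]  # normalise for grid truncation error
for p in (0.05, 0.50, 0.95):
    print(f"P{int(p * 100):02d}: {np.interp(p, pooled_cdf, grid):7.1f}")
```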
Surprise, surprise
Experts’ individual opinions may differ, but elicited probability assessments typically have at least one thing in common: overconfidence. Tests using questions with known answers show that experts’ answers fall outside the elicited credible range far more often than expected. In theory this ‘surprise index’ should be 2 per cent (results outside a 98% credible interval); studies reported by Morgan and Henrion found actual rates between 5 and 55 per cent. Other potential problems to overcome include ‘availability’ (the subjective ease of recalling comparable situations), ‘anchoring’ (previous answers affecting the next answer), representativeness bias (being fooled by an irrelevant ‘likeness’) and motivational bias (‘that’s the answer that I want’).
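Computing a surprise index from calibration questions is straightforward; the sketch below shows the idea, with entirely hypothetical intervals and answers. Each seed question has a known true value and an elicited 98% credible interval, and the index is simply the fraction of truths that fall outside their intervals.

```python
import numpy as np

# (lower, upper) bounds of each elicited 98% credible interval,
# paired with the known true value for that seed question.
intervals = np.array([
    (10.0,   60.0),
    (0.5,     2.0),
    (150.0, 400.0),
    (3.0,     9.0),
    (20.0,   35.0),
])
truths = np.array([75.0, 1.2, 390.0, 14.0, 33.0])

outside = (truths < intervals[:, 0]) | (truths > intervals[:, 1])
surprise_index = outside.mean()
print(f"Surprise index: {surprise_index:.0%} (2% expected for a 98% interval)")
```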
A possible solution – Heterogeneity and size of the expert panel
If you can sample randomly across a large number of simulations in an Analytica model, why not sample (quasi-)randomly among a large number of different experts? When empirical data is relatively complete, a small expert panel may be acceptable. When there is little empirical data, however, a single expert or even a small panel may not suffice; in that case, drawing on a broader range of expertise, experience and professional backgrounds may be the only way to obtain results of reasonable quality from expert elicitation and probability assessment.
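One way to operationalize that analogy (again a minimal Python sketch, not Analytica syntax) is to let each Monte Carlo run draw its value from one randomly chosen member of the panel, with equal weights; the five panel distributions below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# A hypothetical, deliberately heterogeneous panel of expert beliefs.
panel = [
    stats.lognorm(s=0.4, scale=300.0),
    stats.lognorm(s=0.7, scale=450.0),
    stats.norm(loc=500.0, scale=150.0),
    stats.triang(c=0.3, loc=100.0, scale=900.0),  # min 100, mode 370, max 1000
    stats.gamma(a=3.0, scale=180.0),
]

n = 100_000
choice = rng.integers(len(panel), size=n)  # which expert informs each run
# Draw from each expert's distribution as many times as it was chosen;
# sample order does not matter for the percentile summary below.
samples = np.concatenate([
    panel[i].rvs(size=(choice == i).sum(), random_state=rng)
    for i in range(len(panel))
])

# The broader the panel, the more the aggregate reflects the true spread of opinion.
print("P05, P50, P95:", np.percentile(samples, [5, 50, 95]).round(1))
```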
If you’d like to know how Analytica, the modeling software from Lumina, can help you to analyze and manage uncertainty of any kind, then try the free edition of Analytica to see what it can do for you.