When you say something is probable, are you communicating or obfuscating?

Posted by Ray Poynter, 18 June 2020


I have just been re-reading an interesting US document, “Words of Estimative Probability”, written by Sherman Kent in 1964. It highlights the problems with words like ‘probable’, ‘unlikely’ and ‘serious possibility’, and the lack of consistency in the way that forecasts and estimates are communicated and received. You can read the briefing here; below I have set out my thoughts about the points raised and the way we in the insights business should respond to them.

Sherman Kent was a history professor at Yale. During the Second World War, and for 17 years of the Cold War, he was a senior figure in US intelligence, and he has been called the ‘father of intelligence analysis’. The briefings he prepared for senior military and political figures mattered – so the accuracy of the probability that country X was about to take some unfriendly action was important. But what Kent realised was that not only was the accuracy of what was said important, so was the accuracy of what was heard.

Near the start of the paper, Kent shows an example of the problem using a briefing in March 1951 about the probability of a Soviet invasion of Yugoslavia before the end of 1951. (Note, although from its liberation in 1945 Yugoslavia was a communist state, it broke from the Soviet bloc in 1948 and received financial aid from the US from 1949.) The National Intelligence Estimate (NIE 29-51) that Kent’s team produced included the view that an attack on Yugoslavia “should be considered a serious possibility”.

A few days after the briefing was submitted to the senior decision makers, Kent was chatting with one of the recipients of the report, who asked what was meant by the term “serious possibility”. Kent replied that he meant it was more likely to happen than not, about two-thirds versus one-third. The policy maker was taken aback and commented that the people receiving the report had assumed the odds were very much lower than that. This worried Kent, who went back to the team who had put the report together and asked them to try to put numbers to what they thought the phrase meant in the context of this briefing. The odds from his team varied from 80% likely to happen down to 20%. And this was from a team who had all agreed to use the term “serious possibility”.

The rest of the paper (which is very readable) goes on to explore strategies for improving the consistency between the message being given and the message being received. Note, this is quite a separate problem from how accurate the initial analysis was. One useful device that Kent developed was an “Odds Table” and an explanatory note. I have reproduced both the table and the note below.

Probability Table

100% – Certainty
93% (give or take about 6%) – Almost certain
75% (give or take about 12%) – Probable
50% (give or take about 10%) – Chances about even
30% (give or take about 10%) – Probably not
7% (give or take about 5%) – Almost certainly not
0% – Impossibility

“You should be quite clear that when we say “such and such is unlikely” we mean that the chances of its NOT happening are in our judgment about three to one. Another, and to you critically important, way of saying the same thing is that the chances of its HAPPENING are about one in four. Thus if we were to write, “It is unlikely that Castro will attempt to shoot down a U-2 between now and November 1965,” we mean there is in our view around a 25-percent chance that he will do just that. If the estimate were to read, “It is almost certain Castro will not . . . ,” we would mean there was still an appreciable chance, say 5 percent or less, that he would attempt the shootdown.”
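Kent’s device is, in effect, a lookup from estimative words to numeric probability bands. As an illustrative sketch (the exact band values here are my own assumptions in the spirit of Kent’s table, not taken verbatim from the paper), it could be encoded like this:

```python
# Illustrative mapping of estimative words to (low, high) probability
# bands, in the spirit of Kent's odds table. Band values are assumptions.
KENT_BANDS = {
    "certain": (1.00, 1.00),
    "almost certain": (0.87, 0.99),
    "probable": (0.63, 0.87),
    "chances about even": (0.40, 0.60),
    "probably not": (0.20, 0.40),
    "almost certainly not": (0.02, 0.12),
    "impossible": (0.00, 0.00),
}

def interpret(term: str) -> tuple:
    """Return the (low, high) probability band for an estimative term."""
    return KENT_BANDS[term.lower()]

low, high = interpret("probable")
print(f"'probable' means roughly {low:.0%}-{high:.0%}")
```

The point of making the table explicit is exactly Kent’s point: writer and reader consult the same mapping, so ‘probable’ cannot silently mean 80% to one person and 20% to another.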

In his book Superforecasting: The Art and Science of Prediction, Philip Tetlock highlights this paper from Sherman Kent when describing the problem with imprecise wording when making predictions. Tetlock also noted that imprecise words give the person making the prediction wriggle room and make it harder for others to assess the quality of predictions. Tetlock highlights something he calls the “wrong side of maybe fallacy”. If a consultant says that there is a 70% chance of a new product being successful, and the product is not successful, are they wrong? If they make 100 such forecasts and 70 of the products succeed while 30 fail, they will be ‘right’ in a strict sense; however, the owners of the 30 products that failed are likely to say the consultant was wrong. As a consequence, forecasters like to use phrases like ‘a good chance of success’ and try to be a little vague about what ‘success’ means.
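The arithmetic of the fallacy is worth seeing directly. This toy sketch uses the article’s own numbers (100 forecasts, each at 70%, with outcomes matching the forecast exactly); the naive judgment and the calibration check are my illustrations, not Tetlock’s code:

```python
# "Wrong side of maybe": 100 forecasts, each stating a 70% chance of
# success, with outcomes matching the forecast exactly.
forecast_p = 0.70
outcomes = [1] * 70 + [0] * 30  # 1 = product succeeded, 0 = it failed

# Naive per-forecast judgment: call the forecaster "wrong" whenever the
# outcome lands on the other side of 50% from the stated probability.
judged_wrong = sum(1 for o in outcomes if (forecast_p > 0.5) != bool(o))

# Fair aggregate judgment: compare the stated probability to the hit rate.
hit_rate = sum(outcomes) / len(outcomes)

print(judged_wrong)  # 30 forecasts look "wrong" judged one at a time
print(hit_rate)      # 0.7 -- perfectly calibrated in aggregate
```

Judged one forecast at a time, a perfectly calibrated forecaster looks wrong 30 times; judged across the whole set, they are exactly right.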

The lessons for insight professionals

When we are talking about things that are not 100% certain, we need to convey our estimate clearly and with consistency. For example, I will definitely have the report with you in four hours (plus or minus 1 hour). Or, there is about a 10% chance that this choice will fail. Or, as in some of the UK’s early COVID-19 predictions, an estimate that 20% to 80% of the population will catch it (which is another way of saying they don’t know yet). To improve, we need to benchmark our predictions, and to benchmark our predictions we need to make estimates that can be checked.

The second point is to ask for feedback from your audience to assess what they think the message means. For example, ask ‘If you had to summarise what I have just said, in terms of chances (say, out of ten), what would be the chance that X will happen?’ Where possible, ask for a different set of terms than the ones you used in communicating your prediction. If you talked in terms of percentages, perhaps ask for feedback with a 7-point scale that runs from Extremely Unlikely to Extremely Likely.
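To compare feedback given on one scale with a forecast given on another, you need an explicit mapping between the two. This is a hypothetical helper of my own (the seven equal-width cut points are assumptions, not a standard), sketching one way to place a percentage on a 7-point scale:

```python
# Illustrative (assumed) mapping from a 0-100 percentage chance onto a
# 7-point Extremely Unlikely .. Extremely Likely feedback scale.
SCALE = ["Extremely Unlikely", "Very Unlikely", "Unlikely",
         "About Even", "Likely", "Very Likely", "Extremely Likely"]

def to_scale(percent: float) -> str:
    """Map 0-100 onto seven equal-width bands."""
    idx = min(int(percent // (100 / 7)), 6)
    return SCALE[idx]

print(to_scale(70))  # where a 70% forecast should land
```

If the audience’s feedback lands two or more bands away from where the forecast maps, the message received was not the message given.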

Scenarios as an alternative to predictions

As readers of my posts will know, I am a fan of the Scenario Thinking approach, and this is partly because of the fallibility of predictions. Scenarios are plausible, mutually exclusive futures that could happen. In the case of Yugoslavia above, the scenarios would have included invasion, non-invasion, and perhaps some third option based on sanctions or indirect action. The report would say all three are plausible and set out a) methods of assessing which scenario is beginning to happen, b) the implications of it happening, and c) ways of influencing events so that a more favourable scenario becomes more likely, or so that an unfavourable scenario is made less so.

Want to learn more about scenarios?

Check out these two courses I am running, one for the Australian Research Society and one for the UK’s MRS.
