The Economist, with “The prediction games: Our winners and losers from last year’s edition,” takes the lead in a courageous, crucial and yet rarely undertaken exercise: going back to one’s own foresight and assessing, in the light of the present, what was right and what was wrong. This exercise is not about handing out awards or apportioning blame, but about improving foresight. Thus, all results, be they right or wrong, should come with a detailed explanation of the success or failure. This approach is essential to progress. Systematically implementing such a practice in all anticipatory processes should be a major goal for practitioners and a criterion of quality for users.

The Economist’s self-assessment provides us with an instructive example of how such a lessons-learned exercise can be conducted, underlines the questions that should be asked and the key challenges for anticipation, and shows how biases can derail foresight.

Evaluating success

It is often said that it is impossible to validate success, because the very fact of having been accurate will change the world: the predicted events will not take place as foreseen but will be altered. This logical argument actually depends upon the actor’s capacity to influence events. For actors that act directly on their foresight with sufficient might, this challenging problem can be overcome by paying particular attention to the overall process, distinguishing analysis and anticipation from the response chain. In the case of the media, which are meant not to act but to inform, and whose analysis should consider all the actors involved rather than depend upon the strategic aims of one specific player (see Assessing the “Strategic” in Surprise for an explanation), evaluating success is easier (or more difficult, if the influence of mass media itself must be considered).

The fact is that The Economist underlines the accuracy of its predictions in many areas (and we shall take it at face value). Beyond polite modesty, it would actually be very interesting to understand why the analysts themselves are surprised by their success. They were certainly not merely betting but relied on something to predict events and phenomena, and this something is what matters (even when betting we use some kind of model). This emphasises the importance of making models explicit, as only explicit models give us the tools for precise assessment. If the model used (and, following Epstein, cognitive models belong to this category) led to successful predictions, then our reliance on this model should increase, and we should take care not to destroy it. Changes brought to the model should be documented and integrated cautiously.

Biases, the vexing problem of time and the case of Syria

“A more serious mistake was failing to foresee how bloody the conflict in Syria would become. We thought President Bashar Assad was unlikely to last the year in office.” The Economist, 21 November 2012.

The Economist’s sentence echoes a point made by anthropologist Andrew Turton in his work on Thailand and everyday politics, according to which we often tend to underestimate the power of coercion and violence. Considering the relatively peaceful and mild last twenty or so years, and the general disinterest in war and politics (qua politics, not politician politics) because “only the economy matters,” it was – and still is – all the more likely that violence, war and politics would be grossly underestimated. As global tension is now rising, if we want to improve our foresight it is crucial to revise the old models to reintegrate what should never have been forgotten: political dynamics, both international and domestic. This does not mean over-favouring them, but building more balanced and thus more adequate models.

In turn, this underestimation led to another mistake, an erroneous assessment of timing. Apprehending time properly is one of the most difficult problems of foresight and warning analysis (see for example Creating Evertime) and certainly one of the most neglected, as hardly anyone seems to be working directly on it. Here we may have one important element that should be integrated in research: time and timing depend on other variables and their interactions, which is congruent with one of the ways indications and warning deal with this challenge, through timeline indicators.*
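To make the idea of timeline indicators concrete, here is a minimal sketch of how observed indicators, each carrying an estimated lead time, could be combined into an estimated time window for an anticipated event. All indicator names, lead times and dates below are hypothetical illustrations, not drawn from Grabo or any real indicator list.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TimelineIndicator:
    """A hypothetical indicator whose observation suggests the anticipated
    event lies within an estimated lead-time window (in days)."""
    name: str
    min_lead_days: int  # event expected no sooner than this after observation
    max_lead_days: int  # ...and no later than this

def estimate_event_window(observations):
    """Combine (indicator, observation date) pairs into one estimated window
    by intersecting the windows each observation implies."""
    earliest, latest = date.min, date.max
    for indicator, seen_on in observations:
        earliest = max(earliest, seen_on + timedelta(days=indicator.min_lead_days))
        latest = min(latest, seen_on + timedelta(days=indicator.max_lead_days))
    return earliest, latest

# Illustrative indicators and observation dates (entirely hypothetical)
mobilisation = TimelineIndicator("reserve mobilisation", 10, 40)
logistics = TimelineIndicator("forward logistics build-up", 5, 20)

window = estimate_event_window([
    (mobilisation, date(2012, 3, 1)),
    (logistics, date(2012, 3, 15)),
])
print(window)  # narrowed window: 2012-03-20 to 2012-04-04
```

The point of the sketch is the dependence it encodes: the estimated timing is not fixed in advance but tightens as further indicators are observed, which is exactly why timing "depends on other variables and their interactions."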

Timing again

“A better call, for the progress of the Arab spring more broadly, was that Islamists would make ground but play a cautious and pragmatic game.” The Economist, 21 November 2012.

The Economist article was published on 21st November, just before Mohamed Morsi, Egypt’s President, and “a leader of the Muslim Brotherhood” decided to issue “a decree… granting himself broad powers above any court as the guardian of Egypt’s revolution” (New York Times, 2 Dec 2012; 22 Nov 2012), which led to concern and widespread domestic protest. It was also published before demonstrations flared again in Tunisia on 27th November, for reasons similar to those that triggered the Jasmine revolution (Sarah Mersch, Deutsche Welle, 2 Dec 2012; Amnesty International via Bikyamasr, 1 Dec 2012).

This calls into question the assessment of success for this specific prediction. On the day of the evaluation, The Economist was indeed successful. However, can a forecast be right one day and wrong the next when it concerns political dynamics, especially considering the types of decisions that could be taken on the strength of the prediction?

What is at stake here is the very framework we use to evaluate the success or failure of a prediction: calendar time. We need this framework because all our activities, and thus the responses we would design and implement, are planned according to it, but there is also an element of absurdity in limiting ourselves to specific dates. When dealing with political dynamics, a more useful way to phrase predictions would be to point out the dynamics themselves, assessing the patterns at work, the rise or fall of tension and, more difficult still, their estimated timing, which brings us back to the challenge underlined previously.
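The calendar-time problem can be made concrete with a deliberately toy sketch: the same prediction, evaluated one day apart, yields opposite verdicts. The dates and recorded “behaviours” below are hypothetical stand-ins for the Egyptian sequence described above, not data.

```python
from datetime import date

# Hypothetical record of observed behaviour around the evaluation date.
# "cautious" vs "assertive" are illustrative labels only.
observed_behaviour = {
    date(2012, 11, 21): "cautious",   # day The Economist's assessment appeared
    date(2012, 11, 22): "assertive",  # the decree issued the following day
}

def verdict_on(prediction: str, eval_date: date) -> bool:
    """Judge the prediction against what was observed on a single date."""
    return observed_behaviour[eval_date] == prediction

print(verdict_on("cautious", date(2012, 11, 21)))  # True: success on that day
print(verdict_on("cautious", date(2012, 11, 22)))  # False: failure a day later
```

A date-by-date verdict is thus fragile by construction; an evaluation phrased over patterns and time windows, rather than single dates, would not flip overnight.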

Black swans or biases?

“As ever, we failed at big events that came out of the blue. We did not foresee the LIBOR scandal, for example, or the Bo Xilai affair in China or Hurricane Sandy.” The Economist, 21 November 2012.

In those cases, the explanation given for the failures reveals cognitive biases, most probably the same ones that were at work during the analysis and led to the incapacity to foresee; we may thus expect the same mistakes to be repeated.

Starting with Sandy, the storm did not come out of the blue. It is neither a black swan event (a concept Nassim Nicholas Taleb borrowed from Karl Popper to describe an unpredictable event which is, with hindsight, re-imagined as predictable), as The Economist’s sentence suggests, nor even a wild card (a high-impact, low-probability event). Any attention paid to climate change, or to the statistics and documents produced by Munich Re (e.g. its material on North American weather) or Allianz, to say nothing of the host of related scientific studies, shows that extreme weather events have become a reality and that we should expect more of them, more often, including in the so-called rich countries, whatever ideologists say.

It may be impossible to predict the exact event, the day and precise path of a storm, but the likelihood of seeing “Frankenstorms” in the eastern part of the US at this time of the year is high and can in no way be seen as an unpredictable surprise. How many similar events and related signals need to occur before we start considering them as likely and thus integrating them systematically into our various forecasts, foresight analyses and warnings?

A similar logic may be applied to the LIBOR scandal and even to the Bo Xilai affair. In a world where the financial establishment believes (rightly, since political authorities let it do so) that it is all-powerful, where most shy away from confronting the shadow-banking liability, and where regulation is seen as cumbersome at best, financial institutions and those working for them can easily conceive of themselves as being above the law, which means that manipulating the LIBOR becomes completely plausible and unsurprising.

The methodological problem we are facing here is as follows: are we trying to predict discrete events (hard but not impossible, though with constraints and limitations depending on the case), or are we trying to foresee dynamics and possibilities? The answer will depend upon the type of actions that should follow from the anticipation, as predictions and foresight are not done in a vacuum but to allow for the best handling of change.

In an ideal world, it would thus be logical to start with the second goal, which would then allow for the creation of proper mitigating policies, as well as for the design of further directives in terms of surveillance of problems, specific intelligence requirements, etc.

One could then move to the prediction of discrete events (within the dynamics previously identified), with resources and analytical methodologies allocated and designed according to the nature and characteristics of the potential events. In many cases, such as the LIBOR or Bo Xilai affairs, this would imply systematic investigation and intelligence collection, which have traditionally been part of the media’s role. In the case of Sandy, we are in the field of warning of natural events, which is handled by the scientific community, by state agencies (e.g. meteorological offices) and by international governmental organisations.

The Economist’s courageous and interesting self-assessment of last year’s predictions has thus pointed out the need to make explicit and revise our analytical models (including cognitive ones), notably to fully integrate political dynamics, violence and war; the importance and difficulty of evaluating time; and the necessity of thinking about the use various clients could make of foresight when producing and then phrasing a forecast, while the struggle against all biases must remain constant.


*See, for example, Grabo, chapter 6, “Timing and Surprise,” which underlines the particular difficulty of foreseeing timing in the case of military attacks. Grabo, Cynthia M., Anticipating Surprise: Analysis for Strategic Warning, ed. Jan Goldman (Lanham, MD: University Press of America, May 2004).

Epstein, Joshua M. “Why Model?” Santa Fe Institute Working Papers, 2008.

Taleb, Nassim Nicholas, The Black Swan: The Impact of the Highly Improbable (Random House (U.S.), Allen Lane (U.K.), 2007).

Turton, Andrew “Patrolling the middle ground: methodological perspectives on ‘everyday peasant resistance,’” in Everyday Forms of Peasant Resistance in South-East Asia, ed. James C. Scott and Benedict J. Tria Kerkvliet, (London: Frank Cass & Co.; 1986), pp. 36-48.

Featured image: Four Horsemen of Apocalypse, by Viktor Vasnetsov, painted in 1887. Via Wikimedia Commons.