Foreseeing the future, whatever the name given to the endeavour, involves two major tasks.
The first is, of course, the analysis: the process through which the foresight, forecast, warning or, more broadly, anticipation is produced.
The second is less obvious, or rather so evident that it may be overlooked. It is, however, no less vital than the analysis. We must deliver the output of the analytical process to those who need the foresight: decision-makers or policy-makers. The recipients must understand that output, because they will act on it. They need to integrate the new knowledge they receive into the decisions they will take.*
A huge challenge runs across these tasks: biases.
We must overcome the various natural and constructed biases – systematic mental errors – that limit human understanding. This article first presents the classical way we deal with biases: we consider them – quite rightly – as “enemies” and devote much effort to mitigating them. Then, considering the specificity of the delivery stage, this article suggests that another strategy is necessary. We need to turn our usual strategy on its head and befriend biases. In that case, scenarios become a tool of choice for the enhanced delivery of our foresight to decision-makers […]
(This article is a fully updated version of the original article published in November 2011 under the title “Creating a Foresight and Warning Model: Mapping a Dynamic Network (I)”.) Mapping risk and uncertainty is the second step of a proper process to anticipate and manage risks and uncertainties correctly. This stage starts with building a model, which, once completed, will describe and explain the issue or question at hand, while allowing for anticipation or foresight. In other words, by the end of the first step, you have selected a risk, an uncertainty, a series of risks and uncertainties, or an issue of concern, with its proper time frame and scope, for example, what are the risks and uncertainties to […]
Having detailed the various potential scenarios for Libya’s future over the next three to five years, we shall now assess the likelihood of these scenarios, notably thanks to their indicators. We shall use the methodology developed by The Red (Team) Analysis Society, building upon Heuer (“Assessing Probability of a Scenario”, in Psychology of Intelligence Analysis, […]
The methodology of SF&W and risk management makes it possible to address these points. They should become rules and principles that all analysts follow. Indeed, without paying attention to them, good analysis is impossible. The first article on The Black Swan can be accessed here.
(Notably pp. 190-200.) Considering uncertainty, but also our imperfect condition as human beings, the complexity of the social world, feedback effects, and our more than insufficient knowledge and understanding, we must ….
Second, “black swans” refer to events that absolutely could not be predicted, as, for example, for The Economist in “The prediction games: Our winners and losers from last year’s edition”. Unfortunately, in this case, the label “black swan” excuses foresight errors. It tends to cut short explanation and evaluation. Similarly, some will make statements along the lines of: “Oh, but there is no point in doing any foresight (or futures work, or forecasting), did you not read Taleb’s Black Swan? One cannot predict or foresee anything.”
This is a rather bold claim, especially when one seeks to anticipate uncertainty and to foresee and warn. We thus need to explore the unpredictability claim further.