Evaluating Scenarios and Indicators for the Syrian War

Every year, The Economist, in its “The World in…” series, assesses its successes and failures regarding its past yearly forecasts (e.g. for 2012). This is exemplary behaviour that should be adopted by all practitioners: if we are to deliver good and actionable strategic foresight and warning, and to improve our process, methodology and thus our final products, then we should always evaluate our work. Having now completed our latest series of updates on the state of play in the Syrian war, we can start assessing how our own scenarios and indicators have fared so far, whether they need to be updated, and which methodological improvements we should pursue. Evaluating the scenarios: as the Geneva conference took place (see previous […]


Assessing End of Year Predictions: How Did They Fare? (2)

The evaluation of our 2012 predictions’ sample notably underlines: a widespread conventional view of national security, with novel issues being ignored; a relative inability to assess timing, whilst our understanding of issues fares relatively well; the existence of major biases, notably regarding China, Russia, and the U.S.; and the difficulty of prediction for novel issues and for old issues in new contexts.


An Experiment in Assessing End of Year Predictions (1)

This post will present the experiment (assessing a sample of open source predictions for the year 2012), address the methodological problems encountered while creating the evaluation itself, and underline the lessons learned. The second part (forthcoming) will discuss the results.
