Revisiting influence analysis

Once variables (also called factors or drivers, depending on the author) have been identified – and, in our case, mapped – most foresight methodologies aim at reducing their number, i.e. at keeping only a few of those variables.

Indeed, considering cognitive limitations as well as finite resources, one tries to obtain a number of variables that can be easily and relatively quickly combined by the human brain.

Furthermore, considering also the potential adverse reactions of practitioners to complex models, being able to present a properly simplified or reduced model (while remaining faithful to the initial one) is most often necessary.

Ranking analysis

When the foresight methodology does not include links between variables – that is, when we have neither a graph nor a map – variables are selected by ranking them according to specific criteria. Among the criteria most frequently used are likelihood and impact, or impact and uncertainty (i.e. how little we know about the way a variable will evolve). However, in a complex world, which includes feedback loops and where ripple effects exist, not linking variables is a serious handicap.
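To make the ranking approach concrete, here is a minimal sketch using one common convention: each variable is scored on impact and uncertainty and ranked by the product of the two. The variable names and scores are purely illustrative assumptions.

```python
# Toy sketch of ranking unlinked variables: each variable gets illustrative
# scores (1-5) on two criteria, here impact and uncertainty, and variables
# are ranked by the product of the two scores.
variables = {
    "energy prices": {"impact": 5, "uncertainty": 4},
    "demographics": {"impact": 4, "uncertainty": 1},
    "public trust": {"impact": 3, "uncertainty": 5},
}

ranked = sorted(
    variables,
    key=lambda v: variables[v]["impact"] * variables[v]["uncertainty"],
    reverse=True,
)
print(ranked)  # ['energy prices', 'public trust', 'demographics']
```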

Influence analysis

When variables are linked (as here), the method most commonly used is to identify the most influential variables – influence analysis – and then to reduce the foresight analysis to those variables. Hence, scenarios will be constructed around those influential variables. There are different ways to proceed.

Some systems, such as Parmenides Eidos™ and all similar approaches, e.g. Singapore RAHS 1.6 (watch the technological demonstration video, especially from 3:32 onwards), use what graph and network theory call indegree and outdegree centrality measures. The indegree of a node (our variable) is the number of head endpoints adjacent to the node, i.e. how many edges (arrows) arrive at this node. It represents the number of variables that influence this node. The larger the indegree, the more influenced the variable is.

The outdegree of a node is the number of tail endpoints adjacent to the node, i.e. how many edges (arrows) leave this node. It represents the number of variables that are influenced by this node. The larger the outdegree, the more influential the variable is.
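For readers working outside Gephi, a minimal sketch with networkx shows the two measures on a small, purely illustrative set of variables and links (the names and edges are assumptions, not the actual Everstate map).

```python
# Illustrative only: a tiny directed graph of variables, built with networkx.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("resources", "legitimacy"),
    ("resources", "public services"),
    ("public services", "legitimacy"),
    ("legitimacy", "stability"),
])

# Indegree: arrows arriving at a node (how influenced the variable is).
# Outdegree: arrows leaving a node (how influential the variable is).
for node in G.nodes():
    print(node, "in:", G.in_degree(node), "out:", G.out_degree(node))
```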

However, as underlined by Arcade, Godet et al. ("Structural Analysis with the MICMAC Method," 2009), this method only considers direct influence. What happens if one variable exerts only a single influence on one group of variables, but this group of variables exerts a strong influence on the whole system? The importance of the initial variable would be downplayed if we considered only direct influence.
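The idea of taking indirect influence into account can be sketched with matrix powers: raising the influence (adjacency) matrix to successive powers counts paths of length two, three and so on, so that a variable acting mostly through intermediaries is no longer downplayed. The matrix below is an illustrative assumption, and this is only a rough sketch of the principle, not a full MICMAC implementation.

```python
# Rough sketch of indirect influence via matrix powers (illustrative matrix).
import numpy as np

# A[i, j] = 1 means variable i directly influences variable j.
A = np.array([
    [0, 1, 0, 0],  # variable 0 influences only variable 1 directly...
    [0, 0, 1, 1],  # ...but variable 1 influences variables 2 and 3,
    [0, 0, 0, 1],  # so variable 0 matters indirectly.
    [0, 0, 0, 0],
])

direct = A.sum(axis=1)                      # row sums: direct outgoing influence
paths_len_2 = np.linalg.matrix_power(A, 2)  # number of length-2 influence paths
combined = (A + paths_len_2).sum(axis=1)    # direct + indirect (length 2)

print("direct influence:      ", direct)    # [1 2 1 0]
print("direct + indirect (2): ", combined)  # [3 3 1 0]
```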

In graph and network theory, various measurements – centrality measures – exist that allow identifying various types of importance of a node in relation to the whole network or graph. However, those measurements were created initially with social analysis in mind, not for foresight analysis.

8 most important nodes according to Eigenvector centrality measures

After having tested them against the direct influence idea and with foresight in mind, the measure chosen here to determine the initial set of criteria is Eigenvector centrality. Again, using Gephi allows for easy and near-instantaneous calculation of all centrality measures.
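For those replicating the calculation outside Gephi, a minimal sketch with networkx follows, again on a purely illustrative graph. Note that networkx's eigenvector_centrality on a directed graph follows in-edges; computing it on the reversed graph instead would emphasise outgoing influence.

```python
# Illustrative sketch: Eigenvector centrality with networkx on a toy graph.
import networkx as nx

G = nx.DiGraph([
    ("resources", "legitimacy"),
    ("resources", "public services"),
    ("public services", "legitimacy"),
    ("legitimacy", "stability"),
    ("stability", "resources"),
])

centrality = nx.eigenvector_centrality(G, max_iter=1000)
# Keep, say, the 8 highest-scoring variables (here the toy graph has only 4).
top = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:8]
for name, score in top:
    print(f"{name}: {score:.3f}")
```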

We should however underline that we lose here the information differentiating influenced from influential nodes. This is why graph and network analysis uses an array of measures rather than a single one, and why Eigenvector centrality was tested here against indegree and outdegree before being selected.

Further tests should be designed to refine the choice of measurement for this revisited influence analysis.

The influence graph

Influence graph

Once influence (or degree centrality, in graph and network theory) is measured, the variables are positioned on a graph, the "influence graph," with dependency or influence received as abscissa (x axis) and influence exerted as ordinate (y axis).

This can be easily done with Gephi by choosing the layout called “Geo Layout” and entering, in the case of the degree measurements, “indegree” for longitude and “outdegree” for latitude.
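Outside Gephi, the same influence graph can be sketched with matplotlib, plotting each variable with its indegree as x and its outdegree as y (the graph below is again an illustrative assumption).

```python
# Sketch of an influence graph: x = indegree (dependency), y = outdegree (influence).
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph([
    ("resources", "legitimacy"),
    ("resources", "public services"),
    ("public services", "legitimacy"),
    ("legitimacy", "stability"),
    ("stability", "resources"),
])

x = [G.in_degree(n) for n in G.nodes()]
y = [G.out_degree(n) for n in G.nodes()]

plt.scatter(x, y)
for name, xi, yi in zip(G.nodes(), x, y):
    plt.annotate(name, (xi, yi))
plt.xlabel("indegree (dependency / influence received)")
plt.ylabel("outdegree (influence exerted)")
plt.title("Influence graph (illustrative)")
plt.show()
```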

The location of variables on the graph expresses their type of influence, and they are labelled accordingly (see also the sketch below the figure):

  • Top left quadrant – most influential variables: drivers (usual) or influent variables (MicMac method, Godet)
  • Top right quadrant – most influenced and influential nodes: pivots (usual) or relay variables (MicMac method, Godet)
  • Bottom right quadrant – most influenced variables: outcomes (usual) or depending variables (MicMac method, Godet)
  • Bottom left quadrant – neglected variables, considered as less important in strict influence analysis terms.
  • The MicMac method of Godet adds further distinctions, as shown in red on the graph.
Influence graph (degree) with Eigenvector centrality (size of the node) for the entire graph
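The labelling of the four quadrants can also be sketched programmatically. The cut-offs below (the mean indegree and mean outdegree) are an assumption made for illustration only; no particular threshold is prescribed here.

```python
# Hedged sketch: assign each variable to a quadrant of the influence graph.
# The mean-degree thresholds are an illustrative assumption.
import networkx as nx

def label_quadrants(G: nx.DiGraph) -> dict:
    mean_in = sum(d for _, d in G.in_degree()) / G.number_of_nodes()
    mean_out = sum(d for _, d in G.out_degree()) / G.number_of_nodes()
    labels = {}
    for n in G.nodes():
        influenced = G.in_degree(n) > mean_in
        influential = G.out_degree(n) > mean_out
        if influential and not influenced:
            labels[n] = "driver (influent variable)"    # top left
        elif influential and influenced:
            labels[n] = "pivot (relay variable)"        # top right
        elif influenced and not influential:
            labels[n] = "outcome (depending variable)"  # bottom right
        else:
            labels[n] = "neglected"                     # bottom left
    return labels
```

Taking the subgraph induced by the drivers, pivots and outcomes (e.g. with G.subgraph(...) in networkx) would then correspond to the reduced map mentioned below.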

Once variables have been sorted according to influence, the variables seen as most important are usually selected and used to proceed to the next step. For example, one may redraw a map using only the drivers, pivots and outcomes, and then move to creating foresight scenarios or to Morphological Analysis with those selected variables.

However, as seen, foresight methods usually start from an explicitly limited number of variables, which allows them to select, at the end of the influence analysis, between 2 and 10 variables. Here, on the contrary, we use as many variables as needed for the map to be a good enough model, and thus the influence analysis, whatever the measurement chosen, does not lead to the easy selection of such a very limited number of variables.

Furthermore, the selection of only those variables, while it can seem helpful initially, becomes a disadvantage when we move to the creation of scenarios. Experience shows that it is practically impossible to construct a serious narrative with only those selected variables. When constructing narratives, one automatically and unconsciously reintroduces other variables that had previously been eliminated. As those variables are now reintroduced unsystematically and without guidelines, the door is opened to any kind of mistake, and biases can easily creep in.

Finally, it is scientifically absurd and wrong to dispense with a variable when we know it is there. Doing so could lead to erroneous foresight and thus to erroneous warning. Furthermore, in the case of warning, ignoring variables could deprive us of crucial indicators and thus impede the overall warning on the issue at hand.

Revisiting influence analysis: from reduction to setting initial criteria

Hence, in the methodology used here, we shall do otherwise. We shall NOT reduce the number of variables but use a method that could be called propagation and is made possible by the existence of ego networks in social network analysis.
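The ego network notion on which this relies can be illustrated briefly: rather than discarding variables, one works with the neighbourhood of a chosen variable within a given radius. A minimal, illustrative sketch with networkx's ego_graph (the graph is again an assumption):

```python
# Illustrative sketch of an ego network: a variable plus its neighbourhood.
import networkx as nx

G = nx.DiGraph([
    ("resources", "legitimacy"),
    ("resources", "public services"),
    ("public services", "legitimacy"),
    ("legitimacy", "stability"),
    ("stability", "resources"),
])

# With undirected=True, neighbours are taken in both directions of influence.
ego = nx.ego_graph(G, "legitimacy", radius=1, undirected=True)
print(sorted(ego.nodes()))
```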

Does it mean that we can fully dispense with influence analysis? The answer is evidently no – or this post would not have been written – but we shall use it for a different purpose.

We shall use influence analysis to set the initial criteria that will give us the point of departure for constructing the narrative of our scenarios.

Depending on the case, between 5 and 10 variables can be chosen as initial criteria. The rule of thumb is to try to determine the variables that truly stand out. Visualization is very helpful here. If the software used is Gephi, one will also be able to use features such as filters, which allow selecting various ranges of variables according to each measure. As no variable has been suppressed, fixing the number of initial variables is not a crucial problem and cannot lead to major mistakes. It is more a matter of convenience, to be able to start telling the stories of the future (narrating the scenarios).
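As a closing sketch, selecting the initial criteria could look like the following; the cut-off of 8 is an illustrative assumption, the actual number (roughly 5 to 10) being chosen by inspecting the visualization or Gephi's range filters.

```python
# Hedged sketch: pick the handful of variables standing out on Eigenvector
# centrality to serve as initial criteria for the scenario narratives.
import networkx as nx

def initial_criteria(G: nx.DiGraph, n: int = 8) -> list:
    centrality = nx.eigenvector_centrality(G, max_iter=1000)
    return sorted(centrality, key=centrality.get, reverse=True)[:n]
```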

Cite as Helene Lavoix, (2011), “Revisiting influence analysis,” The Chronicles of Everstate, Red (Team) Analysis, http://wp.me/p1S3g8-50.

References

Arcade, Jacques, Godet, Michel, Meunier, Francis, and Roubelat, Fabrice, "Structural Analysis with the MICMAC Method & Actors' Strategy with MACTOR Method," The Millennium Project: Futures Research Methodology, Version 3.0, Ed. Jerome C. Glenn and Theodore J. Gordon, 2009, Ch. 11.

Glenn, Jerome C. and The Futures Group International, "Scenarios," The Millennium Project: Futures Research Methodology, Version 3.0, Ed. Jerome C. Glenn and Theodore J. Gordon, 2009, Ch. 19.

Hanneman, Robert A. and Mark Riddle, Introduction to Social Network Methods, Riverside, CA: University of California, Riverside, 2005 (published in digital form at http://faculty.ucr.edu/~hanneman/).

Ritchey, Tom, "Morphological Analysis," The Millennium Project: Futures Research Methodology, Version 3.0, Ed. Jerome C. Glenn and Theodore J. Gordon, 2009, Ch. 1.