The definition of system risk described here still falls short of answering the central question of complex system collapse – how is cascade failure accounted for in risk models and methods? The expected utility theory definition given above is static, not dynamic; it ignores the domino effect of one node's failure causing another node to fail. For example, power grids do not collapse in their entirety. Only part of the Eastern Power Grid failed in 2003, and most power outages involve only a small fraction of all substations, power lines, and power plants. To be useful, a system risk model must include the probability and consequences of cascades as well as single-asset failures. Simulation captures the dynamic behavior of cascade failures, and so offers a solution to the problem of static versus dynamic risk assessment. Here is how it works. Simulate a cascade by triggering a single-asset failure at some node or link, which then “contaminates” or spreads to adjacent nodes and links with probability defined by the product TᵢVᵢ. Sum the consequences of failed or contaminated nodes and links, and repeat the process. Each episode involves an initial single-asset failure followed by a chain of propagated failures that spreads through adjacent nodes according to their TV values. Perform the simulated cascades thousands of times, and construct an exceedence probability versus consequence distribution like those shown in Figure 4.
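
To make the procedure concrete, here is a minimal simulation sketch in Python. The four-node network, its TᵢVᵢ values, and the consequence figures are hypothetical placeholders rather than data from any real grid, and the code illustrates the idea rather than MBRA's implementation.

    import random
    from collections import deque

    def simulate_episode(adj, TV, C):
        """Fail one node at random, then propagate: each adjacent node
        fails with probability T*V. Returns the episode's total consequence."""
        start = random.choice(list(adj))
        failed = {start}
        frontier = deque([start])
        while frontier:
            node = frontier.popleft()
            for nbr in adj[node]:
                if nbr not in failed and random.random() < TV[nbr]:
                    failed.add(nbr)
                    frontier.append(nbr)
        return sum(C[n] for n in failed)

    def exceedence_curve(adj, TV, C, episodes=10000):
        """Simulate many episodes; return (x, P(consequence >= x)) pairs."""
        results = sorted(simulate_episode(adj, TV, C) for _ in range(episodes))
        n = len(results)
        return [(x, 1.0 - i / n) for i, x in enumerate(results)]

    # Hypothetical 4-node network: adjacency lists, T*V per node, consequences.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    TV = {0: 0.20, 1: 0.10, 2: 0.30, 3: 0.15}
    C = {0: 10, 1: 5, 2: 20, 3: 8}

    for x, ep in exceedence_curve(adj, TV, C)[::2500]:
        print("P(consequence >= %s) = %.3f" % (x, ep))

Sorting the episode consequences and reading off the survivor fraction is what produces curves like those in Figure 4.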

Network risk is actually a curve – not a single number. The risk curve is obtained by multiplying exceedence probability by consequence, which yields PML risk as before. For reasonably small values of TV, the exceedence probability curves obtained by simulation are also long-tailed power laws, as Figure 6 shows.
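
In symbols, if EP(x) is the probability that total consequence equals or exceeds x, the risk curve and its long-tailed form can be written as follows, where the power-law exponent q is illustrative:

    R(x) = x \cdot EP(x), \qquad EP(x) \approx x^{-q}, \quad q > 0,

so a smaller q means a longer tail and, as the next paragraph argues, higher system risk.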

Longer-tailed exceedence means higher risk due to cascades. Shorter-tailed exceedence means the network is more resilient against cascades. More significantly, Lewis showed that the number of links, size of hubs, and betweenness of nodes and links are directly related to self-organization, which in turn manifests as longer-tailed exceedence probability curves.17 And longer-tailed exceedence curves translate into higher system risk. The topology of a network has been demonstrated, both mathematically and in practice, to accurately model cascade failure risk. Thus there is a direct relationship among self-organization, risk, resilience, and cascade failure in complex networks. Network theory provides a comprehensive and unified framework for evaluating risk – an operational foundation that analysts and policy-makers can use for risk-informed decision-making.

Imagine a simple game of “attack the block”, played by rational opponents as follows. One opponent – the defender – attempts to protect a stack of blocks with a limited budget, while the other – the attacker – spends a limited budget of his own trying to knock the blocks down. Each side must decide how to allocate scarce resources against the other's best play. This is the intuition behind Stackelberg competition.

Stackelberg competition – named for the German economist Heinrich Freiherr von Stackelberg – is very similar to the familiar 2-person attacker-defender paradigm in game theory that pits an attacker against a defender, both with limited resources to allocate. When applied to risk-informed decision-making, 2-person game theory tells us how best to allocate resources so that the defender minimizes risk while the attacker maximizes it. It is a realistic model of struggle under uncertainty and limited resource constraints. In short, it is a perfect match with network theory.

A Stackelberg game can be played on a network of nodes and links. The attacker juxtaposes threats against the nodes and links of the network to cause the most damage, while the defender hardens nodes and links to fend off actual or imagined attacks. Stackelberg 2-person game theory introduces a new idea: rather than trying to estimate T and V, why not let Stackelberg competition calculate T and V for both attacker and defender? Given a model that relates T and V to the costs of raising T and lowering V, a software program can find the optimal strategy for both sides – values of T and V that minimize risk for the defender and maximize risk for the attacker. Game theory can solve for T and V in the network formulation of risk – for every component of the system – given only C. Furthermore, an optimal Stackelberg solution can include self-organized criticality as a factor in network risk, if network properties such as hub size and betweenness are included in the analysis. This simplification reduces the data burden imposed on the analyst, because Stackelberg competition requires only consequence, C, to play the game; T and V are obtained as a byproduct of optimization. For example, given the individual risks associated with the nodes and links of the Boston MTA shown in Figure 5, MBRA (available from www.CHDS.us/resources) will calculate the optimal Stackelberg allocations for both attacker and defender, given one budget (capability) for the attacker and another for the defender. Resources are allocated optimally to each node and link according to Stackelberg competition. Using the optimal allocations to determine T and V for every node and link in the cascade simulation described above, MBRA calculates the exceedence curves shown in Figure 6.
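
Schematically, and assuming network risk is the sum of component risks TᵢVᵢCᵢ, Stackelberg competition solves a min-max problem; the allocation symbols aᵢ and dᵢ and the budget symbols below are introduced here only for illustration:

    \min_{d}\ \max_{a}\ \sum_i T_i(a_i)\, V_i(d_i)\, C_i
    \quad \text{subject to} \quad \sum_i a_i \le B_{attacker}, \quad \sum_i d_i \le B_{defender},

where attacker spending aᵢ raises Tᵢ and defender spending dᵢ lowers Vᵢ.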

The before-and-after exceedence probability curves for the MTA system of Figure 5 are shown in Figure 6. Note that the Stackelberg algorithm increases MTA system resiliency and reduces risk. The optimal attacker strategy is not shown in Figure 5, but it produces one T value for every node and link in the MTA network. MBRA assumes T and V obey a law of diminishing returns – increases in T and decreases in V diminish as more resources are applied. The first few dollars allocated to T and V are more effective than the last few. For example, building a fence and deploying CCTV systems typically return large security benefits, but additional measures such as posting human guards typically have less impact. T, V, and C follow exponential functions with respect to allocated resources.
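
The sketch below illustrates this idea under stated assumptions: exponential diminishing-returns curves for T and V (the constants v0, alpha, and beta are invented for illustration) and a naive alternating best-response loop standing in for MBRA's actual optimizer.

    import math

    def vulnerability(d, v0=0.9, alpha=0.05):
        """V falls off exponentially as defender dollars d are spent."""
        return v0 * math.exp(-alpha * d)

    def threat(a, beta=0.05):
        """T rises with diminishing returns as attacker dollars a are spent."""
        return 1.0 - math.exp(-beta * a)

    def greedy_allocate(budget, nodes, marginal):
        """Spend the budget $1 at a time on the node with the best marginal
        payoff; this works because both curves have diminishing returns."""
        alloc = {n: 0 for n in nodes}
        for _ in range(budget):
            best = max(nodes, key=lambda n: marginal(n, alloc[n]))
            alloc[best] += 1
        return alloc

    C = {0: 10, 1: 5, 2: 20, 3: 8}      # consequences (hypothetical)
    a = {n: 0 for n in C}               # attacker dollars per node
    d = {n: 0 for n in C}               # defender dollars per node

    for _ in range(5):                  # alternate best responses
        a = greedy_allocate(50, list(C),
                lambda n, x: (threat(x + 1) - threat(x)) * vulnerability(d[n]) * C[n])
        d = greedy_allocate(50, list(C),
                lambda n, x: (vulnerability(x) - vulnerability(x + 1)) * threat(a[n]) * C[n])

    risk = sum(threat(a[n]) * vulnerability(d[n]) * C[n] for n in C)
    print("attacker:", a, "defender:", d, "residual risk: %.2f" % risk)

Because both curves flatten as spending grows, the first dollars allocated to each node do most of the work – the diminishing-returns behavior described above.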

Figure 6 illustrates the effectiveness of modeling systems as self-organized networks and then using Stackelberg optimization to discover the system’s risk profile and calculate optimal resource allocation. The model and computational methods of MBRA, for example, seem to “solve the problem”. But risk assessment is not a solved problem, for a variety of reasons. Two of the most significant limitations are 1) expected utility theory and mathematical methods like Stackelberg competition assume a rational actor – that terrorists and nature intelligently seek optimal advantage – and 2) measurement is far from perfect – T, V, C, and other quantities such as prevention and response costs are difficult to estimate accurately. But perhaps the most serious limitation of risk assessment is the human decision-maker, because people do not perceive or weigh risk rationally.

What are practitioners to think of risk assessment? Is it appropriate to use risk-informed decision-making as a strategy? Does it allocate resources properly? Does it yield important insight?

Psychologist Daniel Kahneman (1934-) won the 2002 Nobel Prize in economic sciences for decades of research, conducted with fellow psychologist Amos Tversky (1937-1996), into why people can’t assess risk properly. They found that most people are risk-averse when it comes to winning prizes, but risk-seeking when it comes to losses. Risk-averse means they value a gamble at less than its expected utility; risk-seeking means they value it at more.

For example, when subjects were invited either to accept $10 on the spot or to gamble on winning $100 with probability 10%, most opted to take the $10 immediately rather than chance winning $100 (or nothing). On the other hand, when asked either to pay $10 to an experimenter or to risk losing $100 with probability 10%, most subjects opted to take the chance. Subjects avoided risk when they stood to win $100, but sought risk rather than accept a sure $10 loss.
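
The arithmetic shows why expected utility theory rates the two options in each pair as equivalent:

    0.10 \times \$100 + 0.90 \times \$0 = \$10 \quad \text{(the sure gain)},
    0.10 \times (-\$100) + 0.90 \times \$0 = -\$10 \quad \text{(the sure \$10 payment)}.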

Expected utility theory produces the same expected utility in each case – $10. But for some reason, humans interpret risk contextually. In one context the sure $10 looks better; in the other, the gamble’s chance of losing nothing looks better. In fact, the behavior of subjects is highly biased by context. When asked which is worse – “saving 100 of 300 people from a fire” versus “losing 200 of 300 people in a fire” – most subjects preferred the first framing to the second, even though the outcomes are identical.

Taleb humorously explains the role emotion plays in risk assessment. He characterizes it as a struggle between different parts of our human brain. “It is a fact that our brain tends to go for superficial clues when it comes to risk and probability, these clues being largely determined by what emotions they elicit or the ease with which they come to mind. In addition to such problems with the perceptions of risk, it is also a scientific fact, and a shocking one, that both risk detection and risk avoidance are not mediated in the ‘thinking’ part of the brain, but largely in the emotional one. The consequences are not trivial: it means that rational thinking has little, very little, to do with risk avoidance. Much of what rational thinking seems to do is rationalize one’s actions by fitting some logic to them.” |

Decision-making is motivated by emotion, but rational decision-making based on expected utility theory ignores the emotional dimension entirely. Hence the field of risk assessment, and the methods and models used to make rational decisions, are at odds with human emotion. Frankly, risk does not compute with most people. Kahneman and Tversky called this prospect theory (PT).

Context distorts decision-making even more when politics enters into risk-informed decision-making. Resources are allocated to populous areas of the country because doing so garners votes, even though the risk of tampering with the largest nuclear power plant in the US – located in a remote area – may be higher. Highly populated cities receive less-than-adequate funding because they lack Congressional clout. Critical infrastructures go unprotected because government lacks the expertise to assess true risk. Emotion runs high when money is involved, and so we lose rationality.

Prospect theory and context are important considerations when evaluating risk methods and models, because PT and context can undermine rationality and blur clear thinking. Without rational choices, we have no scientific basis for deciding the best use of limited resources; and without a scientific basis, valuable resources are squandered while high-risk assets go wanting. The presence of human emotion and its potential for corrupting decision-making underscores the importance of implementing a rational, unemotional, quantifiable method of risk assessment and corresponding resource allocation. Overcoming PT bias may be the biggest challenge of all.

Lewis, Ted G.
Bak, Per, Chao Tang, and Kurt Wiesenfeld, “Self-Organized Criticality: An Explanation of 1/f Noise”, Physical Review Letters 59 (1987): 381-384.
Bernstein, Peter L.
Boyer, Carl B.
Buchanan, Mark
Dobson, I., B.A. Carreras, V.E. Lynch, and D.E. Newman, “Complex systems analysis of series of blackouts: cascading failure, critical points, and self-organization”, Chaos 17 (2007).
Gladwell, Malcolm
Grossi, Patricia, and Howard Kunreuther
Hanson, Robin, “Catastrophe, Social Collapse, and Human Extinction”
Kuhn, Thomas
Perrow, Charles
Ramo, Joshua Cooper