A third criticism of PRA and expected utility methods is that T may be a dynamic, rather than static, property of an opportunistic rather than rational adversary. That is, a terrorist might take advantage of weaknesses as they are found, and might adapt to the defender’s moves in real time. When the available evidence changes, probabilities should change with it. For example, a terrorist might stake out and observe a target many times before deciding to attack, or might pass over one target in favor of a more attractive one during an operation. Thus, T changes with the situation.

A corollary of this criticism is that the T and V estimates used in PRA are often baseless, because they ignore evidence and the situation in play. One can argue that T and V should instead be a posteriori probabilities obtained from historical data. However, historical data can be inaccurate too, because of false positives: indications of threats where none exist. T and V, say the critics, should be conditioned on historical evidence, which, once cleansed of false positives, is a better indicator of likelihood than a static estimate based on a prior probability.

An adaptation of the conditional probability model suggested above, called a Bayesian belief network (BN), overcomes the limitations of the static models described thus far. Thomas Bayes (1701–1761) was a Presbyterian minister in England whose work on probability was published only after his death, in 1763. His 300-year-old innovation was largely ignored until recently.

Bayes defined probability as a belief rather than a frequency. A proposition such as “a terrorist attack is likely” is assigned a number indicating how certain we are of its accuracy. The number is a measure of belief: zero means we believe the proposition is false; one means we believe it is true; and any number in between expresses the degree to which we believe it. For example, 0.75 indicates a high degree of belief, while 0.25 indicates a relatively low one.

If we can somehow combine various propositions into a system, or model, of beliefs, we can test the model against actual data. This is the idea of a Bayesian network (BN). The BN becomes a reasoning system: we input initial beliefs as probabilities, run the model to see if it is predictive, and adjust the inputs as more data are acquired and uncertainty is reduced.

A BN contains propositions (nodes) and their influence (links) on one another, as shown in Figure 3.

[Figure 3: A simple Bayesian network for a terrorist bombing incident. Panel (a) shows the SURVEILLANCE, FERTILIZER, and ATTACK nodes and the links between them; panels (b)-(c) show their conditional probability tables; panels (d)-(e) show beliefs updated with new evidence.]

If proposition B influences proposition A, then a link connects B to A, and we say B is the parent and A is the child. In Figure 3(a), proposition SURVEILLANCE is the parent of FERTILIZER and ATTACK. The links define conditionality relationships between parent and child propositions: conditionality flows through a link from parent to child.

[Conditionality can work both ways, depending on the application of the Bayesian network.] Conditionality is stored in each proposition as a Conditional Probability Table (CPT), as illustrated in Figures 3(b)-(c).

A BN “executes” by “reasoning” about the propositions represented as nodes. The method of reasoning is based on Bayes’ Theorem:

$$\Pr(A \mid B) = \frac{\Pr(B \mid A)\,\Pr(A)}{\Pr(B)}$$

If Bayes’ Theorem were used in PRA, TV would be replaced with a Bayesian proposition:

$$\Pr(\text{attack} \mid \text{threat}) = \frac{\Pr(\text{threat} \mid \text{attack})\,\Pr(\text{attack})}{\Pr(\text{threat})}$$

where

Pr(attack | threat) = belief in a successful attack, given threat information;
Pr(threat | attack) = likelihood that threat information preceded an actual attack, obtained from historical data;
Pr(attack) = prior estimate of the likelihood of an attack; and
Pr(threat) = overall likelihood of threat information, whether genuine or a false positive.

The difference between PRA reasoning and Bayesian reasoning is clear when the two formulations are compared. Bayesian reasoning depends largely on historical evidence to populate the right-hand side of the proposition: as the data change, certainty in the proposition changes with them. PRA estimates, by contrast, are static and do not respond to new evidence. Bayesian reasoning also admits faulty estimates, e.g. false positives, while PRA does not. Accordingly, Bayes’ equivalent of TV can be re-written in terms of the likelihood of a false positive as:

$$\Pr(\text{attack} \mid \text{threat}) = \frac{\Pr(\text{threat} \mid \text{attack})\,\Pr(\text{attack})}{\Pr(\text{threat} \mid \text{attack})\,\Pr(\text{attack}) + \Pr(\text{false\_positive})}$$
Suppose, for example, historical data show that 75% of the time threat information preceded a successful attack, and 10% of the time it predicted a successful attack that did not happen. Then Pr(threat|attack) = 0.75 and Pr(false_positive) = 0.10. This information reduces uncertainty and adjusts our belief in a successful attack according to:

$$\Pr(\text{attack} \mid \text{threat}) = \frac{0.75\,\Pr(\text{attack})}{0.75\,\Pr(\text{attack}) + 0.10}$$
Just for the sake of completing the example, suppose early warning indicators (or subject matter experts) put the probability of an attack at Pr(attack) = 60%. Then belief in an imminent successful attack can be revised:

$$\Pr(\text{attack} \mid \text{threat}) = \frac{0.75 \times 0.60}{0.75 \times 0.60 + 0.10} = \frac{0.45}{0.55} \approx 0.82, \text{ or } 82\%.$$
If, on the other hand, indicators or expert opinion revise the estimate of an imminent attack downward, say to Pr(attack) = 40%, then belief is also revised downward: 0.75 × 0.40 / (0.75 × 0.40 + 0.10) = 0.30/0.40 = 0.75, or 75%.
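This arithmetic is easy to check in a few lines of code. The sketch below is illustrative (the function name and default arguments are mine, not from the text); it simply evaluates the false-positive form of Bayes’ Theorem given above:

```python
def belief_in_attack(pr_attack: float,
                     pr_threat_given_attack: float = 0.75,
                     pr_false_positive: float = 0.10) -> float:
    """Belief in a successful attack given threat information, using the
    false-positive form of Bayes' Theorem from the text."""
    numerator = pr_threat_given_attack * pr_attack
    return numerator / (numerator + pr_false_positive)

print(belief_in_attack(0.60))  # ~0.818: the Pr(attack) = 60% case
print(belief_in_attack(0.40))  # 0.75:   the Pr(attack) = 40% case
```

Running it reproduces the two cases above: roughly 0.82 for a 60% prior, and exactly 0.75 for a 40% prior.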

This kind of machine reasoning combines evidence-based logic with probability theory. As conditional probabilities of precursor events become known from accumulated evidence, uncertainty in the BN’s propositions declines, yielding more “belief” in each proposition. Bayes’ Theorem treats probabilities as evidence, a confusing departure from Pascal’s interpretation of probability, so a more thorough example is given here to clarify the significance of Bayes’ work.

Consider the elementary BN of Figure 3. Suppose a law enforcement agency (LEA) wants to estimate the probability of a terrorist attack given evidence obtained through suspicious activity reports (SARs). The agency begins by building a model of a typical terrorist bombing incident, as shown in Figure 3. Historical SARs provide evidence that bombers often visit their target site several times before an attack. In addition, historical evidence suggests that terrorists have made bombs from fertilizers, so the LEA also tracks fertilizer purchases. How are these “facts” used to calculate threat?

The relationship between surveillance, fertilizer purchases, and attacks is represented in Figure 3(a) as a network consisting of SURVEILLANCE, FERTILIZER, and ATTACK nodes. These nodes represent propositions (unsubstantiated claims) with associated degrees of believability; their “truth” is questionable. Initial estimates of likelihood are mere guesses, but these guesses should improve as more evidence is used to update the believability of one or more of the propositions.

Recall that conditionality is represented by links from parent to child nodes. In the simple example of Figure 3, SURVEILLANCE is the parent of both the FERTILIZER and ATTACK nodes, and FERTILIZER is the parent of ATTACK. Truth, or degree of belief, is transmitted through the network via these links; therefore, a change in one proposition propagates to the others. Truth emerges as a byproduct of this propagation.
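Concretely, belief propagates from parents to children through the law of total probability. For the network of Figure 3, the unconditional belief in ATTACK is (the notation here is mine, not the text’s):

$$\Pr(\text{ATTACK}) = \sum_{s}\sum_{f} \Pr(\text{ATTACK} \mid s, f)\,\Pr(f \mid s)\,\Pr(s)$$

where s ranges over the states of SURVEILLANCE and f over the states of FERTILIZER. When evidence pins down s or f, the sum collapses and the belief in ATTACK shifts accordingly.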

Figures 3(a)-(c) show the a priori estimates of the likelihood that a suspect will visit a target many times (5%) and the likelihood that the suspect will buy fertilizer after visiting the target (90%). Every proposition has an output value (true or false) that is conditional on its input values. Therefore, if a proposition has one input link that can be either true or false, it must have a Bayesian estimate of the output for each possible input. If a proposition has two inputs, each with the possibility of being true or false, then the output depends on four cases: TT, TF, FT, and FF, where T = true and F = false. [The combinations can also be Yes/No, On/Off, or Buy/Don’t Buy.] Figure 3(c) shows all possible combinations of input values for the ATTACK proposition, which is conditional on two links with the value pairs None, None; None, Yes; Buy, None; and Buy, Yes. Note that these probabilities must sum to 1.0 across each row of each proposition’s CPT.
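For concreteness, the ATTACK node’s CPT might look like the following. These values are hypothetical, since Figure 3’s actual tables are not reproduced in the text; they were chosen to be consistent with the posterior values quoted later (9.33%, 91.6%, and 99%):

```
FERTILIZER   SURVEILLANCE   Pr(ATTACK = Yes)   Pr(ATTACK = No)
None         None               0.0395             0.9605
None         Yes                0.25               0.75
Buy          None               0.25               0.75
Buy          Yes                0.99               0.01
```

Note that each row sums to 1.0, as required.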

Initially, the BN represents reality as users perceive it. For example, the initial likelihood of a terrorist visiting a target several times before attacking it is assumed to be 5% (see Figure 3(a)).

This is merely a belief, unsubstantiated by evidence. It is interesting to note that these initial estimates need not be especially accurate, because their impact on the final answer is revised as new evidence comes in and is incorporated into the BN. This “fuzziness” is one of the major advantages of using Bayesian belief networks in place of subject matter experts alone.

Figures 3(d)-(e) show how new evidence is used to update the BN, thereby increasing the believability of an attack. As situational awareness reports arrive and are incorporated into the network, it is updated and a new estimate of the a posteriori probability of an attack is automatically calculated. The BN applies Bayes’ Theorem to revise the affected propositions. [Typically, a BN software application is used to perform these calculations.]

Suppose a SAR indicates an unusual interest in a certain target by a suspect. The a priori probability of SURVEILLANCE can now be changed from 5% to 100% because of the new evidence. The certainty created by the new evidence increases the a posteriori threat to 91.6% (see Figure 3(d)): reducing uncertainty in one part of the BN increases our belief in an attack in another part. When a subsequent SAR indicates that the same suspect has purchased a large quantity of fertilizer, the FERTILIZER node is updated to 100%, and the BN automatically re-calculates the a posteriori attack probability (see Figure 3(e)). As more evidence is gathered, uncertainty is reduced and the a posteriori probability of an attack rises from 9.33% to 91.6%, and finally to 99%.
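This propagation can be reproduced in a few lines of code. The sketch below computes Pr(ATTACK = Yes | evidence) by brute-force enumeration over the three-node network, using the hypothetical CPT values introduced above (chosen to be consistent with the 9.33%, 91.6%, and 99% figures quoted in the text; Figure 3’s actual tables may differ):

```python
from itertools import product

# Hypothetical CPT values for the Figure 3 network (not the text's actual
# tables), chosen to reproduce the posteriors it reports: 9.33%, 91.6%, 99%.
P_S = {True: 0.05, False: 0.95}          # Pr(SURVEILLANCE = Yes / None)
P_F_GIVEN_S = {True: 0.90, False: 0.05}  # Pr(FERTILIZER = Buy | SURVEILLANCE)
P_A_GIVEN_SF = {                         # Pr(ATTACK = Yes | SURVEILLANCE, FERTILIZER)
    (True, True): 0.99,
    (True, False): 0.25,
    (False, True): 0.25,
    (False, False): 0.0395,
}

def joint(s: bool, f: bool, a: bool) -> float:
    """Joint probability of one complete assignment of the three nodes."""
    p = P_S[s]
    p *= P_F_GIVEN_S[s] if f else 1.0 - P_F_GIVEN_S[s]
    p *= P_A_GIVEN_SF[(s, f)] if a else 1.0 - P_A_GIVEN_SF[(s, f)]
    return p

def attack_posterior(s_obs=None, f_obs=None) -> float:
    """Pr(ATTACK = Yes | evidence), summing the joint over all assignments
    that agree with the observed evidence."""
    num = den = 0.0
    for s, f, a in product([True, False], repeat=3):
        if s_obs is not None and s != s_obs:
            continue
        if f_obs is not None and f != f_obs:
            continue
        p = joint(s, f, a)
        den += p
        if a:
            num += p
    return num / den

print(attack_posterior())                        # ~0.0933: no evidence
print(attack_posterior(s_obs=True))              # ~0.916:  surveillance observed
print(attack_posterior(s_obs=True, f_obs=True))  # 0.99:    both observed
```

Dedicated BN packages use far more efficient inference algorithms, but for a three-node network brute-force enumeration is exact and easy to follow.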

Bayesian network theory is a tool for calculating a posteriori probabilities: it attempts to predict the future from the (recent) past. Unlike static, rigidly determined probabilities, Bayesian probabilities are beliefs that contain uncertainty. But a BN is only a model of what we believe to be true about the real world when the data contain uncertainty. A proposition is more likely to be true if its degree of belief is high, but keep in mind that these estimates are only as good as the model and its input data.

Bayesian belief networks resonate with Laplace’s rule of succession, because both theories incorporate doubt in their model of reality. Laplace squeezes out lingering uncertainty using overwhelming historical evidence. Bayes squeezes out uncertainty using convincing evidence and conditional probability. Bayesian networks are, however, based on sound (mathematical) principles – Bayes’ Theorem provides a mechanical method of expressing the amount of uncertainty reduction that is made possible by incorporating more information.

Unfortunately, the knowledge required to build and operate a Bayesian network may exceed the capabilities of an agency or risk assessment operator. BN construction requires a combination of subject matter expertise and facility with Bayes’ Theorem and the corresponding modeling tools. Fortunately, a number of software packages exist to do the calculations once a BN is constructed, but someone must still customize each model for each situation. I used Norsys Software’s Netica to illustrate BN modeling in Figure 3, which made it possible to build the model without working through the math by hand.

[Additionally, the calculations become intertwined and grow exponentially more complicated as the BN gets bigger.]