Just as the BP spill is showing how badly a "safe" oil recovery project can go wrong, the lesson of Chernobyl is the scale a nuclear disaster can reach when one occurs.
It is clear we don't use the same technology, but to say that eliminates the possibility of an event of similar scale is absurd on its face. Here is a test of what you are basing your overconfidence on:
Experts Are Often Wrong
That expert interpretations in the area of science and technology are often questionable, and that there is no positivist rule to guarantee their complete reliability, is illustrated by a recent study by hazard assessors in the Netherlands.
They used actual empirical frequencies, obtained from a study done by Oak Ridge National Laboratory, to calibrate some of the more testable subjective probabilities used in the Rasmussen Report, WASH-1400, probably one of the most famous and most extensive risk assessments ever accomplished.14
The Oak Ridge frequencies were obtained as part of an evaluation of operating experience at nuclear installations.
These frequencies were of various types of mishaps involving reactor subsystems whose failure probabilities were calculated in WASH-1400.
The Oak Ridge study used operating experience to determine the failure probability for seven such subsystems, and the Dutch researchers then compared these probabilities with the 90 percent confidence bands for the same probabilities calculated in WASH-1400.
The subsystem failures included loss-of-coolant accidents, auxiliary feedwater-system failures, high-pressure injection failures, long-term core-cooling failures, and automatic depressurization-system failures for both pressurized and boiling water reactors.
Amazingly, all the values from operating experience fell outside the 90 percent confidence bands in the WASH-1400 study.
Yet, if a 90 percent confidence band is well calibrated, there is only a ten percent subjective probability that the true value falls outside it.
This means that, if the authors' subjective probabilities were well calibrated, we should expect roughly ten percent of the true values to lie outside their respective bands.
The fact that all the quantities fall outside them means that WASH-1400, the most famous and allegedly best risk assessment, is very poorly calibrated.
Moreover, the fact that five of the seven values fell above the upper confidence bound suggests that the WASH-1400 accident probabilities, which are subjective probabilities, are too low.
This means that, if the Oak Ridge data are correct, then WASH-1400 exhibits a number of flaws, including an overconfidence bias.
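To put a number on how badly calibrated that is, here is a quick back-of-the-envelope check in Python. It is a minimal sketch that treats the seven subsystem comparisons as independent, a simplification I am adding, not something the study itself asserts.

```python
# Quick check of how surprising the Dutch result is, assuming (as a
# simplification) that the seven subsystem comparisons are independent.

p_outside = 0.10     # a well-calibrated 90 percent band misses the true value 10% of the time
n_subsystems = 7     # subsystems compared against Oak Ridge operating experience

expected_misses = p_outside * n_subsystems   # about 0.7 misses expected
p_all_miss = p_outside ** n_subsystems       # chance that all seven bands miss

print(f"Expected misses if well calibrated: {expected_misses:.1f}")   # 0.7
print(f"P(all 7 miss) if well calibrated:   {p_all_miss:.0e}")        # 1e-07
```

In other words, a well-calibrated assessment would produce this outcome about one time in ten million.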
This direct test of the very process of risk assessment you are relying on shows that there is a very real and significant problem with the level of certainty the nuclear industry claims these assessments provide.
Kahneman and Tversky have uncovered other biases of experts. They corroborated the claim that, in the absence of an algorithm completely guaranteeing scientific rationality, experts do not necessarily or always make more correct judgments about the acceptability of technological risk than do laypersons.
Kahneman and Tversky showed that virtually everyone falls victim to a number of characteristic biases in the interpretation of statistical and probabilistic data. For example, people often follow an intuition called representativeness, according to which they believe samples to be very similar to one another and to the population from which they are drawn; they also erroneously believe that sampling is a self-correcting process.16
In subscribing to the representativeness bias, both experts and laypeople are insensitive: to the prior probability of outcomes; to sample size; to the limited predictability of the quantity being estimated; to the inaccuracy of predictions based on redundant and correlated input variables; and to regression toward the mean, even though training in elementary probability and statistics warns against all these errors.
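To make the sample-size point concrete, here is a small simulation. It is purely illustrative and not drawn from the studies cited: the fair 50/50 process, the sample sizes, and the ten-point tolerance are all arbitrary choices of mine.

```python
import random

random.seed(1)

def share_far_from_half(sample_size, trials=10_000, tolerance=0.10):
    """Share of samples of a fair 50/50 process whose observed rate
    misses the true rate of 0.5 by more than `tolerance`."""
    far = 0
    for _ in range(trials):
        successes = sum(random.random() < 0.5 for _ in range(sample_size))
        if abs(successes / sample_size - 0.5) > tolerance:
            far += 1
    return far / trials

for n in (10, 100, 1000):
    print(f"sample size {n:4d}: {share_far_from_half(n):.1%} of samples miss by more than 10 points")
```

Roughly a third of the ten-observation samples land more than ten points from the true rate, while essentially none of the thousand-observation samples do; believing small samples are "representative" of the population is exactly the error described above.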
Both risk assessors and statistics experts also typically fall victim to a bias called “availability,” assessing the frequency of a class, or the probability of an event, by the ease with which instances or occurrences can be brought to mind.
In subscribing to the availability bias, they forget that they are judging a class on the basis of the retrievability of the instances, and that imaginability is not a good criterion for probability.18
Most people also fall victim to the "anchoring" bias, making estimates by starting from an initial value and adjusting it.
In so doing, they forget:
that different starting points typically yield different estimates;
that adjustments away from the anchor are typically insufficient, skewing results toward the starting value;
and that failure probabilities of complex systems are typically underestimated (the sketch below illustrates this last point).
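A toy calculation makes that last point vivid. The component count and reliability below are numbers I have invented for illustration; they are not taken from any assessment.

```python
# Why anchoring makes complex-system failure look rarer than it is:
# people anchor on the reliability of a single component and adjust too
# little for the fact that the system fails if ANY essential part fails.

component_reliability = 0.99   # assumed: each part works 99% of the time
n_components = 100             # assumed: number of parts that must all work

system_failure_prob = 1 - component_reliability ** n_components

print(f"Single-part failure probability:  {1 - component_reliability:.2f}")  # 0.01
print(f"Whole-system failure probability: {system_failure_prob:.2f}")        # ~0.63
```

Anchor on the 0.99 and adjust a little, and you will badly underestimate the 0.63.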
Although employing each of these biases (representativeness, availability, and anchoring) is both economical and often effective, any of them can lead to systematic and predictable errors.19
These systematic and predictable errors are important because, for a complex technology:
"... risk assessment must be based on complex theoretical analyses such as fault trees, rather than on direct experience. Hence, despite an appearance of objectivity, these analyses include a large component of judgment. Someone, relying on educated intuition, must determine the structure of the problem, the consequences to be considered, and the importance of the various branches of the fault tree."
In other words, the risk assessor must make a number of unavoidable, sometimes incorrect, epistemic value judgments.
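Since the quoted passage turns on fault trees, a toy example may help show where the judgment enters: someone has to decide which basic events belong in the tree and what probabilities to assign them. The structure and numbers below are invented for illustration, a minimal sketch rather than anything resembling a real reactor model.

```python
# Toy fault tree, invented for illustration.  Top event: loss of emergency
# cooling.  It occurs if BOTH redundant pumps fail (AND gate) OR the shared
# power supply fails (OR gate).  Every probability here is an assumed,
# judgment-laden input of exactly the kind the quoted passage describes.

p_pump = 0.01     # assumed per-pump failure probability
p_power = 0.001   # assumed shared power-supply failure probability

p_both_pumps = p_pump * p_pump                  # AND gate, pumps treated as independent
p_top = 1 - (1 - p_both_pumps) * (1 - p_power)  # OR gate

print(f"P(both pumps fail) = {p_both_pumps:.1e}")  # 1.0e-04
print(f"P(top event)       = {p_top:.2e}")         # about 1.10e-03
```

Notice what is not in the tree: there is no branch for a single event, a fire for instance, that takes out both pumps and the power supply at once. Whether to add one is precisely the kind of structural judgment the passage describes, and if it is left out, the computed risk simply omits it.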
Kahneman and Tversky warned that “the same type of systematic errors,” often found in the epistemic or methodological value judgments of laypersons, “can be found in the intuitive judgments of sophisticated scientists. Apparently, acquaintance with the theory of probability does not eliminate all erroneous intuitions concerning the laws of chance.”21 The researchers even found that psychologists themselves, who should know better, used their feelings of confidence in their understanding of cases as a basis for predicting behavior and diagnosing ailments, even though there was no correlation between their feelings of confidence and the correctness of the judgments.22
Such revelations about the prevalence and causes of expert error are not totally surprising since, after all, the experts have been wrong before. They were wrong when they said that irradiating enlarged tonsils was harmless. They were wrong when they said that x-raying feet, to determine shoe size, was harmless. They were wrong when they said that irradiating women’s breasts, to alleviate mastitis, was harmless. And they were wrong when they said that witnessing A-bomb tests at close range was harmless.23
For all these reasons it should not be surprising that psychometric analysts have found, more generally, that once experts go beyond the data and rely on value judgments, they tend to be as error-prone and overconfident as laypeople.
With respect to technological risk assessment, psychometric researchers have concluded that experts systematically overlook many “pathways to disaster.”
These include:
(1) failure to consider the way human error could cause technical systems to fail, as at Three Mile Island;
(2) overconfidence in current scientific knowledge, such as that causing the 1976 collapse of the Teton Dam; and
(3) failure to appreciate how technical systems function as a whole. For example, engineers were surprised when cargo-compartment decompression destroyed control systems in some airplanes.
Experts also typically overlook:
(4) slowness to detect chronic, cumulative effects, e.g., as in the case of acid rain;
(5) the failure to anticipate inadequate human responses to safety measures, e.g., failure of Chernobyl officials to evacuate immediately; and
(6) the inability to anticipate "common-mode" failures that simultaneously afflict systems designed to be independent. A simple fire at Browns Ferry, Alabama, for example, damaged all five emergency core cooling systems for the reactor (see the sketch below).
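To see how much damage a common cause does to the arithmetic of redundancy, here is a rough sketch in the style of a beta-factor common-cause model. The train count, per-train failure probability, and beta value are all assumptions of mine, chosen only to show the orders of magnitude involved.

```python
# Common-mode ("common-cause") failure sketch with invented numbers:
# redundant trains look independent on paper, but a single shared cause,
# such as a fire, can disable all of them at once.

p_train = 1e-2   # assumed failure probability of one cooling train
n_trains = 5     # nominally independent redundant trains
beta = 0.05      # assumed fraction of failures due to a shared cause

p_independent = p_train ** n_trains  # all trains fail independently
p_with_common_cause = beta * p_train + ((1 - beta) * p_train) ** n_trains

print(f"All {n_trains} trains fail, independence assumed: {p_independent:.0e}")        # 1e-10
print(f"All {n_trains} trains fail, with a common cause:  {p_with_common_cause:.0e}")  # 5e-04
```

Treating the trains as independent makes total failure a hundred million times less likely than a single train's failure; with a modest common cause it is only about twenty times less likely, which is how one fire can reach every one of those "independent" systems.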
Source: Kristin Shrader-Frechette, "Scientific Method, Anti-Foundationalism and Public Decisionmaking" (fair use applies)