15 Oct 2022

Is Nuclear Power Safe? Part 1

Is nuclear power safe? No. There is no safety to be had in this world. End of. Get used to it. Ah, but is nuclear power safe enough?

By William Collins

That is a different, and more sensible, question. The public do not like the idea that there are degrees of safety. They beg to be infantilised by some authority figure who will pretend to provide the absolute guarantee of safety they crave. A bridge, a dam, a power station – the public seeks only to be told “they are safe”. No ifs, no buts, no “degrees of safety”.

Nuclear safety is an important issue. I will have more to say about it. But I start with an essay I wrote seven years ago in response to a visiting academic in the sociology department of the University of Bristol. You can read his paper first if you wish, but his key points are quoted in my essay so there is no need. The thrust of his article was that the nuclear accident at Fukushima in 2011 demonstrated that the nuclear industry’s claims about plant safety could not be trusted. I have only updated a few details since the 2015 text.

 

**************

 

A Personal Commentary on: Disowning Fukushima: Managing the Credibility of Nuclear Reliability Assessment In The Wake Of Disaster by John Downer

1. What is the real sociological issue?

“How can the assurances of nuclear experts remain credible in the wake of Fukushima?” asks John Downer. It’s a reasonable question. Rather dramatically he asserts,

The only fact that Fukushima demonstrates absolutely unambiguously is that devastating oversights can exist in what authoritative experts ardently claim to be rigorous, objective and conservative risk calculations.

Leaving aside the word “only”, this is a true statement.

The thrust of John Downer’s argument is that nuclear disasters like Fukushima demonstrate the inadequacy of the nuclear assessments of the failed plant, which claimed such failures would not happen. On this basis he challenges the credibility of continuing to justify nuclear facilities based on similar assessments, specifically probabilistic safety assessments (PSA).

As regards the first of these claims, he is obviously correct.

Examination of the second claim is the main purpose of this response. As regards the weaker claim that probabilistic safety assessment failed in particular instances (e.g., of core meltdowns), this may be substantiated on the basis of the contrast between the assessed and the experiential failure frequency. There have been four indisputable core destructions over the last 65 years: Windscale, Three Mile Island, Chernobyl and Fukushima. The latter was actually three reactor meltdowns. There was also an INES Level 5 core damage accident at Chalk River, Canada, in 1952.

(The Kyshtym disaster at the Mayak reprocessing plant, Russia, in 1957, involved the explosion of a storage tank, an INES Level 6 event which caused nearly 200 late cancer deaths, though it was not reactor related. There have also been five reactor accidents which cost of the order of hundreds of millions of dollars to repair and led to radiological releases, though in every case localised to the vicinity of the facility and not INES rated. Several other reactor accidents were similarly expensive to fix but had no radiological consequences.)

The frequency of meltdowns on the basis of this actual plant experience is greater than 10⁻⁴ per reactor year (pry), more than two orders of magnitude greater than design basis claims for new reactors.
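
To make the arithmetic explicit, here is a minimal sketch. The worldwide operating-experience total is an assumed round figure for illustration (published totals by the mid-2010s were of this order), and the design basis figure is a typical claimed core damage frequency for new designs, not a quote from any specific safety case.

```python
# Rough sanity check on the observed core-melt frequency quoted above.
# Both input figures are illustrative assumptions, not authoritative data.

core_melts = 6            # Windscale, TMI, Chernobyl, plus three at Fukushima
reactor_years = 16_000    # assumed worldwide operating experience to ~2015

observed = core_melts / reactor_years
claimed = 1e-6            # typical core damage frequency claimed for new designs

print(f"Observed : {observed:.1e} per reactor year")   # ~3.8e-4 pry
print(f"Claimed  : {claimed:.1e} per reactor year")
print(f"Ratio    : {observed / claimed:.0f}x")         # several hundred, i.e.
                                                       # over two orders of magnitude
```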

Downer perceives a wilful refusal by the industry to acknowledge the shortcomings of nuclear design assessments demonstrated by these catastrophes. He argues that a unique sociological phenomenon is at work here; unique, that is, to nuclear power and PSA in particular.

But this is a most odd position to adopt. In truth, every engineering failure is an example of the failure of its design basis. There is nothing unique about nuclear accidents in this respect. Nor is there anything unique about probabilistic assessments in this respect. Every bridge or dam or aeroplane ever built was claimed to be safe. And every one that failed is a failure of that assurance, be it based on probabilistic or traditional deterministic assessments.

How are we to respond to such engineering failures, whether nuclear or conventional? Are we to cease building dams and bridges and aeroplanes – and nuclear power stations – because there have been failures? Or should we seek to improve our construction of them, including the reliability of their design basis? I know the public’s answer to this question in the case of conventional structures. It is evident in the attitude implicit in society’s behaviour: an undiminished enthusiasm for conventional engineering industries and their products.

Why is the answer problematic in the case of nuclear?

The interesting sociological question is not that nuclear power stations continue to be designed and built despite the history of failures, for this is merely business as usual for engineering ventures. Failures have not stopped bridges and dams being built, and accidents have not stopped travel by air, sea, rail and road. The interesting sociological question is why nuclear is perceived as different by John Downer and much of the public. Why the accusation of “disowning Fukushima” but no accusation of “disowning Piper Alpha” or “disowning Deepwater Horizon” or “disowning Torrey Canyon” or “disowning Hyatt Regency” or “disowning Mississippi I-35W” or “disowning the Banqiao Reservoir Dam” or “disowning the Herald of Free Enterprise” or “disowning MS Estonia”… a list that could run to thousands?

Is it because nuclear accidents have a far more adverse human impact than conventional accidents? The evidence is the very opposite. Or is it because nuclear accidents are perceived as more severe by the public? And if this is a case of public perception being misaligned with reality, is this not the true sociological issue?

One might observe, correctly, that all the above conventional disasters led to improvements in engineering or operating standards. But this marks no distinction with the nuclear case because exactly the same is true in the nuclear context. John Downer appears to be unimpressed by a reaction to Fukushima which is merely to implement improvements in standards of engineering, operation and emergency preparedness, dismissing these as “the redemption defence”. Similarly, the investigative post-mortems to identify causes, a necessary precursor to improvement, are rather unfairly categorised as attempts to excuse, i.e., the “interpretive defence”, the “relevance defence” and the “compliance defence”.

Instead of accepting this process of improvement, Downer promotes the view that past nuclear failures fatally compromise the credibility of any future assurances regarding the safety of nuclear plant – on principle – a view which is inconsistent with the universal attitude towards conventional engineering structures and facilities. For example he writes,

These sociological arguments about why nuclear risk calculations must be insufficient are important, especially given that expert authorities are inherently unwilling to undermine their own credibility and self-interest in a sustained way (Wynne 2011). At the same time, however, they can obscure a more interesting sociological question about why Fukushima (and, indeed, the disaster-punctuated history of nuclear power more broadly) doesn’t speak for itself.

But we do not permit the vastly greater catalogue of conventional disasters to “speak for themselves” as regards deciding to terminate the building of dams, bridges, aeroplanes, ships, oil rigs, etc. That is, we do not refuse to remount the horse that has thrown us in the conventional context.

The interesting sociological question is not why the response to nuclear failures is to make improvements and continue the endeavour. That is not the interesting question because it is simply the same attitude that we adopt in the non-nuclear context. The interesting sociological question is why there is a widespread societal attitude, exemplified by Downer’s paper, that the reaction to nuclear accidents should be absolutist, unforgiving and condemnatory. Is this distinction in attitude towards nuclear actually justified? Is it, in fact, logical – or is it sociological – or even ideological?

2. The public perception of risk

The following extract from Downer’s paper is very telling regarding the perception of risk,

“…airplanes very occasionally, but nevertheless routinely, crash, even though any specific crash is always unexpected. We accept such accidents as an inevitable cost of the technology and we anticipate them in our plans and institutions. The same logic does not hold in the nuclear sphere, however. Reactors, in this respect, are more comparable to dams or bridges, in that public decisions about them are predicated on an understanding that the chance of them failing catastrophically is so low as to be negligible. Modern democracies, we might say, are institutionally blind to the possibility of nuclear meltdowns.”

In truth, I would argue, reactors are not perceived by the public, or by John Downer, as “comparable to dams or bridges”. Whatever might be the case for nuclear plant, it is certainly not the case for dams and bridges that “public decisions about them are predicated on an understanding that the chance of them failing catastrophically is so low as to be negligible.” It is interesting that John Downer appears to believe this. Here are some bald facts about dam and bridge failures.

In the ten years 2000 to 2009 alone, more than 200 notable dam and reservoir failures occurred worldwide. In the USA alone there have been 1,645 dam failures since records began in 1848, an average of about 10 per year. Dam failures have occurred in every US state. The worst decade on record for US dam failures was 1990 to 1999 in which there were 451 dam failures. Whilst only 4% of US dam failures lead to loss of life, in the period from 1850 to 2017 an estimated 3,495 fatalities have occurred as a result of 64 dam failure events. Even when loss of life does not occur, dam failures have frequently caused immense property and environmental damage.
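
The quoted averages are easy to verify; a quick sketch, assuming the record runs from 1848 to 2017 (the end date of the fatality figures):

```python
# Arithmetic check of the US dam-failure averages quoted above.
failures = 1645
years = 2017 - 1848                      # assumed span of the record
print(f"Average: {failures / years:.1f} failures per year")  # ~9.7, "about 10"

fatal_events = 64
print(f"Fatal fraction: {fatal_events / failures:.1%}")      # ~3.9%, "only 4%"
```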

Starting the clock from 1957, the year of the Windscale fire, worldwide there have been 12 dam failures which killed over 100 people, and 4 which killed over 1000. People still remember the Windscale fire, but who remembers the Monte Toc reservoir failure in Italy in 1963 which killed an estimated 1,917 people? Dam construction led to water pressure overloading of the valley walls which collapsed and wiped out several villages.

In 1985, the year before Chernobyl, the Stava Tailings Dam in Italy failed. Along its path the mud killed 268 people and completely destroyed 3 hotels, 53 homes and 6 industrial buildings; 8 bridges were demolished and 9 buildings were seriously damaged. Do you recall it? No? Yet the death toll was far bigger than the direct death toll at Chernobyl.

In 1979, the year of the Three Mile Island meltdown, the Machchu-2 Dam in Morbi, India, failed due to heavy rains and flooding beyond spillway capacity. The death toll is unknown, the Government’s estimate of 1,000 being almost certainly a serious under-estimate, the opposition putting it at 20,000. No one was killed at Three Mile Island, and there was no significant off-site release of radioactivity. Yet which incident is remembered? Is this a sociological phenomenon?

But the Chinese beat everyone by more than an order of magnitude. In 1975 the failure of the Banqiao and Shimantan Reservoir Dams and other dams in Henan Province, China, caused more casualties than any other dam failure in history by far. The disaster killed an estimated 230,000 people and 11 million people lost their homes. A state-controlled newspaper maintained that the dam was designed to survive a once-in-1000-years flood (300 mm of rainfall per day) but a once-in-2000-years flood occurred in August 1975, following 1,000 mm of rain in one day due to the collision of Typhoon Nina and a cold front. Sound familiar? The dam was subsequently rebuilt. The Chinese kept the disaster secret until the 1990s.

What about bridges? Are they too “as safe as houses” (i.e., not so safe they don’t fall down sometimes)?

There have been at least 214 documented bridge failures since 1950, 137 of which occurred in the last 22 years, an average of one every couple of months. Of these, 37 killed more than 10 people and 9 killed more than 100. In the USA in 1981 an overhead walkway inside a Hyatt Regency hotel in Kansas City collapsed, killing 114 and injuring 216 more.

The same year as the Windscale fire, 1957, saw a train crash at the St Johns Station railway bridge, Lewisham. The bridge collapsed, causing 90 deaths and 173 injuries. We remember the Windscale fire, which caused no direct deaths, but this bridge collapse has left no lasting impression on the public psyche; it merely joins the immeasurably long list of ‘conventional’ disasters and accidents.

Does this history tally with John Downer’s view that public decisions about dams and bridges “are predicated on an understanding that the chance of them failing catastrophically is so low as to be negligible”? If so it could only rest upon the public’s woeful ignorance of reality.

Is there some sociological phenomenon at work which leads to public acceptance of these structures as adequately safe despite their history of failures? Is the sociological response to these conventional disasters reasonable or unreasonable? And if the muted societal response to these conventional disasters is reasonable, is the societal response to nuclear accidents unreasonable? These, surely, are the interesting sociological questions.

From the litany of conventional disasters, one might ask whether the issue is not so much “denying Fukushima” as “denying thousands of dam failures and hundreds of bridge failures” – to name but two types of engineering structure. In fact, neither accusation of denial is valid. The response of the engineering industries to both conventional and nuclear failures is broadly the same, and it is the very opposite of denial.

Downer asks, “how can the assurances of nuclear experts remain credible in the wake of Fukushima?” But one might just as well ask why the public should continue to have confidence in the designers of dams, bridges, ships, aeroplanes, etc. Yet walking over the Haweswater dam, driving over the Severn Bridge or flying across the Atlantic excites no public fear. Why the difference, and which of the conflicting reactions to nuclear and non-nuclear accidents is the more rational? This is the interesting sociological question.

The engineering world is, and always will be, fallible. But the acceptability of the risks attendant upon any engineering venture can be judged only when set against the benefits. Failure to consider benefits renders any discussion of the acceptability of risk meaningless, because no risk is acceptable without benefit.

The obligation upon the engineering community is to admit failures when they occur, to be open about their causes, and to continuously learn from experience based on an honest acceptance of fallibility. The interests of society are served by ensuring the benefits outweigh the risks. Confidence in decision making requires a rationally based appraisal of both benefits and risks. The obligation upon sociology, if it is to play an active role in these issues rather than merely that of an observer, is to facilitate a truly rational appraisal of these issues.  

3. The human cost of nuclear accidents

The public perception of nuclear accidents is one of almost supernatural horror. The reality is not so extreme. If immediate deaths were the full story, the nuclear accidents to date would be unworthy of consideration. There were no direct deaths at all at three of the four nuclear disasters (Windscale, Three Mile Island and Fukushima). Even at Chernobyl the immediate death toll was ‘only’ about 30, the majority due to acute radiation sickness (ARS) or shortly following ARS.

When the 2011 Japanese earthquake and tsunami are mentioned, people immediately think “Fukushima”. And yet, of the 20,000 people killed in Japan in just a couple of days during that disaster, none were at Fukushima – not one. Even with the benefit of hindsight, the briefly entertained spectre of an evacuation of Tokyo still has the power to horrify, as Downer exploits. Yet for eight years now there have been many hundreds of staff working full time on the Fukushima Daiichi site in rebuilt offices, and dining on food grown in Fukushima. Yet again, the extreme contrast between reality and perception raises obvious, but unaddressed, sociological questions.

Fukushima was undoubtedly a huge financial catastrophe. Not only was a power station destroyed, but the clean-up operation will take many decades and cost eye-watering sums of money. But as a human disaster it was a non-event. In fact, it is somewhat distasteful to focus on Fukushima in view of the wholesale deaths elsewhere in Japan at the time.

The rest of this section will focus on Chernobyl because this is the only one of the four meltdown accidents with a human death toll. Also, Chernobyl was about as bad as a nuclear accident could be. RBMK reactors are large, and about one-third of the Chernobyl reactor’s radiological inventory went up in smoke. Consequently Chernobyl is an important benchmark as to how bad nuclear accidents can be.

Starting with people who are known to have been exposed to high doses or were involved in clean-up operations: 134 people suffered acute radiation sickness (ARS), of whom 28 died due to radiation effects within four months and another two died from injuries. A further 19 of the ARS survivors died in the twenty years after the accident (i.e., as of 2006), mostly of causes not attributable to radiation exposure. Among the ARS survivors there have been four cases of solid cancers, three cases of myelodysplastic syndrome, one case of acute myelomonoblastic leukaemia and one case of chronic myeloid leukaemia. In addition, radiation-induced cataracts are common amongst the ARS survivors. All the ARS sufferers at Chernobyl were station workers; none were members of the public.

The relatively low death rate amongst those who suffered ARS (i.e., very high doses) suggests that the hundreds of thousands of people who took part in recovery operations, and were subject to far lower doses, would be unlikely to suffer serious ill effects. A higher incidence of cataracts might be expected amongst those receiving the highest doses. The 2008 UNSCEAR report, Annex D, also notes,

Amongst adults, the most meaningful evidence (regarding leukaemia) comes from studies of the recovery operations workers. Although at this time, some evidence exists of an increase in the incidence of leukaemia among a group of recovery operation workers from the Russian Federation, this is far from conclusive.

About 600,000 people were involved in recovery operations and so potentially were exposed to larger radiation doses than the general public. The average dose to recovery workers has been conservatively estimated at 120 mSv, which compares to typical background radiation levels of ~2 mSv/year, and to the industry dose limit of 20 mSv/year. Apart from the potential increase in the incidence of leukaemia and cataracts among those who received higher doses, as noted above, there is no evidence of health effects that can be attributed to radiation exposure.

The reason for the possible increase in leukaemia amongst clean-up workers being described as “far from conclusive” is as follows. The magnitude of any leukaemia effect must be estimated from models which are intrinsically very uncertain because they involve the product of a very small probability and a large number of people. The 2006 Chernobyl Forum report estimates that there may eventually be perhaps 4,000 fatalities from all forms of cancer. Unfortunately, a figure of this magnitude, or smaller, will never be confirmable from epidemiological studies because it is a small fraction of the naturally expected number of cancer fatalities in this population (around 100,000). Thus, the 4,000 figure is only a 4% elevation in the natural rate, far smaller than the error in its estimation (i.e., consistent with zero if error bars were added).
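
To illustrate why such a figure is consistent with zero, here is a minimal sketch. The ±5% baseline uncertainty is an assumed, illustrative figure: natural cancer mortality varies by at least a few percent between populations and periods owing to lifestyle, screening and registration differences.

```python
# Why a ~4,000 excess among ~100,000 expected cancer deaths is
# epidemiologically undetectable. The 5% baseline-rate uncertainty
# is an illustrative assumption, not a measured quantity.
import math

expected = 100_000   # natural cancer deaths expected in the cohort
excess = 4_000       # modelled excess attributed to radiation

poisson_sd = math.sqrt(expected)   # ~316: counting noise alone
baseline_sd = 0.05 * expected      # ~5,000: assumed uncertainty in the natural rate

total_sd = math.hypot(poisson_sd, baseline_sd)      # combined in quadrature
print(f"Signal-to-noise: {excess / total_sd:.2f}")  # ~0.8, i.e. within the error bars
```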

Turning now to the general public, the 2006 Chernobyl Forum report tells us,

Among the 5 million persons residing in other ‘contaminated’ areas (i.e., other than the Chernobyl site itself), the doses are much lower and any projected increases are more speculative, but are expected to make a difference of less than one per cent in cancer mortality. Such increases would be very difficult to detect with available epidemiological tools, given the normal variation in cancer mortality rates. So far, epidemiological studies of residents of contaminated areas in Belarus, Russia and Ukraine have not provided clear and convincing evidence for a radiation-induced increase in general population mortality, and in particular, for fatalities caused by leukaemia, solid cancers (other than thyroid cancer), and non-cancer diseases.

The 2008 UNSCEAR report, Annex D, repeats the same message in the context of the wider public, concluding,

There appears, at present, to be no hard evidence of any measurable increased incidence of all solid cancers taken together among the populations of the Russian Federation and Ukraine.

Among those exposed in utero and as children, no persuasive evidence has been found of a measurable increase in the incidence of leukaemia attributable to radiation exposure. This is not unreasonable given that the doses involved were generally small, comparable with natural background doses, and therefore epidemiological studies lack the statistical power to confirm any radiation-related increases had they occurred.

However, there is a strong link between childhood thyroid cancer and the radiation released by Chernobyl, the incidence of this cancer amongst the under-18s being sharply peaked over the five years following the accident. 6,848 thyroid cancers have been seen among those in the three republics who were under 18 at the time of the accident, of which a substantial fraction is likely to have been due to radiation exposure. The survival rate for this type of cancer is, however, extremely high.

For children born after 1986 there was no increase in thyroid cancer rate. For adults in the general population at the time of the accident there is no evidence of increased thyroid cancer.

The 2006 Chernobyl Forum report states that there had been 15 deaths due to thyroid cancer by that time which could be attributed to childhood exposure to radioactive iodine (hence to Chernobyl).

It is worth noting, however, that even successful treatment for thyroid cancer might have an adverse health impact. It is also worth noting that it would have been easy to avoid this problem had prompt action been taken to distribute potassium iodate pills more extensively than was done.

Other than thyroid cancer, the 2008 UNSCEAR report, Annex D, concludes,

To date, there has been no persuasive evidence of any other health effect in the general population that can be attributed to radiation exposure.

In summary, the number of deaths deterministically attributable to Chernobyl is around 54. Any excess cancer deaths (due solely to leukaemia) have been estimated to be a sufficiently small percentage of the normally expected cancer death rate that they are unlikely to be discernible from natural variations in that rate. The most credible estimate is ~4,000 excess deaths amongst the recovery workers, a population in which 100,000 cancer deaths would be expected. Even this large figure makes the Chernobyl death toll merely comparable with the more severe conventional accidents.

4. PSA

I note in passing that the Downer paper states that some projected accident frequencies are claimed to be calculated “to seven decimal places”. This is a preposterous misrepresentation. I presume the author is actually referring to claims that a frequency is 10⁻⁷, which is not at all the same thing. Actually, such numbers are generally not even claimed to be accurate to one decimal place.

Nevertheless, I have considerable sympathy with the view that extremely low probabilities of failure, like 10⁻⁶ pry, are not calculable. One reason is the uncertainty in the probabilities of the individual events of which any fault sequence is composed. But the more serious problem is that it is fundamentally impossible to list every possible scenario, and hence one cannot even begin to quantify the total failure probability. This point is made forcibly in Downer’s paper and is correct in my opinion.

For probabilities which are not too small this problem does not arise, because more exotic scenarios than those actively considered may be dismissed intuitively as far less probable. Where the boundary lies I do not know, but I guess perhaps around 10⁻⁴ pry.

This does not, of course, mean that PSA is not worth doing. The best should not be made the enemy of the good. PSA is a useful discipline for weeding out sequences of events which might otherwise lead to unacceptably large failure probabilities. Such studies can, and do, have very practical engineering consequences, for example, in terms of influencing the number of pumps, valves, etc., which are required to drive the assessed probability down to an acceptable level. That the calculated probability may not really be the absolute failure probability does not detract from this.
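
A minimal sketch of the kind of calculation involved may clarify both its practical value and its sensitivity to assumptions. All the numbers below are invented for illustration; the beta-factor treatment of common-cause failure is a standard PSA device, used here in its simplest form.

```python
# Sketch of how a PSA-style calculation drives redundancy decisions,
# and how hostage the answer is to its assumptions. All numbers invented.

p_pump = 1e-2   # assumed probability that a single pump fails on demand
beta = 0.05     # assumed common-cause fraction: the chance that whatever
                # fails one pump (flooding, bad fuel, bad maintenance)
                # fails every pump at once

def system_failure(n_pumps: int, p: float, beta: float) -> float:
    """Failure probability of n nominally redundant pump trains."""
    independent = ((1 - beta) * p) ** n_pumps  # unique-cause failures multiply
    common_cause = beta * p                    # one shared cause defeats all trains
    return independent + common_cause

for n in (1, 2, 3, 4):
    print(f"{n} pump(s): {system_failure(n, p_pump, beta):.1e}")

# With independence alone the figure would fall as p**n, reaching 1e-8 at
# four trains. The common-cause term floors it near 5e-4: the calculated
# probability is only as good as the assumption that no shared cause (or
# unlisted scenario) has been overlooked.
```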

Nevertheless, Downer is right to criticise the industry for giving the impression that failure probabilities as small as 10⁻⁶ or 10⁻⁷ pry are calculable. My opinion, like his, is that they are not. But is the industry alone to blame for this convenient over-interpretation of the results of PSA? Or has this attitude been effectively forced upon it by outside influences which require the industry to express its confidence in these terms?

Downer opines,

We would be better positioned to govern the atomic age if we could institutionalise the idea that nuclear risk assessments are contestable judgments more than they are objective truths.

Again I agree, with the proviso that “contestable judgment” is not taken to mean “unfounded” or “anyone’s guess”. Both conventional engineering assessment and PSA provide high levels of confidence, far higher than would be available with mere qualitative judgment. Numerical calculation is, and will remain, crucial. The issue is only that the outcome should not be misrepresented as certainty, or as a definitive statement regarding absolute failure probabilities where these are extremely small (e.g., 10⁻⁶ pry or smaller).

Within the nuclear industry there is a widespread conception that arguments in safety cases may be of two kinds: numerical or judgmental. I have long argued that this is a false dichotomy. In truth, all arguments are judgmental – and so all safety cases are judgmental. That there appear to be two distinct types of argument results only from how, and by whom, the judgments are made. In the case of numerical calculations the judgments are made by engineers and are invisible, or less visible, to those entrusted with independent assessment. This absolves those who should rightly be responsible for confirming judgments from the necessity to do so, since they may never be exposed to the judgments which are implicit in the calculations, or may lack the expertise to identify them. Numerical assessments are, in effect, a means by which responsibility is delegated to less senior levels in an organisation.

Engineers are aware that their calculations are only as good as the set of assumptions on which they are based. But it may be convenient for others to interpret numerical assessments as absolute assurances. They are not. All calculations are also judgments – it is merely that the judgments are made by the engineer in choosing the input assumptions. This conveniently obviates any obligation upon other parties to exercise judgment, and hence can obscure that judgments are involved. Worse, it can encourage a belief that no judgments are involved. This is Downer’s point, and it is a point well made. By pushing the responsibility down to the engineer, responsibility is conveniently avoided by those higher up the salary scales.

But it is not only within the industry itself that this effect operates. A similar phenomenon allows the public to disengage from responsibility.

5. Probabilistic versus Deterministic and the Public

Every engineering structure which has failed implies the failure of its design assessment. The reader may demur on the grounds that, in some cases, the failure may be due to the structure being subject to unforeseen loads or conditions “beyond design basis”. But this is just semantics, since the design assessment may still be regarded as at fault for failing to bring those loads or conditions within the design basis. Admittedly this is a rather harsh perspective in the case of deliberate sabotage.

In the vast majority of cases the design assessments will be traditional, deterministic assessments: satisfaction of design code requirements or the demonstration of a deterministic reserve margin. It is not unique to probabilistic assessment that a structural failure implies an assessment failure; that is, a failure of the engineer’s duty to protect. But this does not, and should not, undermine the use of design codes. The correct response is to identify the shortcomings in the design assessment and improve them. This may involve improving the engineering calculations, or improving the plant itself, or rectifying its operation. But whether design calculations are deterministically based or probabilistically based is not the central issue. It is inappropriate, therefore, to regard the basic approach of probabilistic assessments as being invalidated by the core meltdowns which have occurred.

The interesting sociological issue is how numerical engineering assessments are interpreted. Misinterpretation of the meaning, or reliability, of an engineering assessment can apply to traditional deterministic assessments as much as to PSA. Both types of assessment are prone to misinterpretation, and the reason is the same. Both types of assessment are only as good as the assumptions upon which they are based. If the condition which destroys the plant was unforeseen, both types of assessment will be equally wrong.
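
The point can be made concrete with a toy sketch: a deterministic reserve-factor check and a probabilistic failure calculation, both fed the same assumed design load. All values are invented for illustration.

```python
# A deterministic margin check and a probabilistic check are equally
# hostage to the assumed load. All values are illustrative.
import math

capacity_mean = 150.0    # assumed structural capacity (arbitrary units)
capacity_sd = 10.0       # assumed scatter in that capacity
design_load = 100.0      # the load the assessment was told to consider
unforeseen_load = 180.0  # the load that actually arrives (e.g. a larger tsunami)

def reserve_factor(load: float) -> float:
    return capacity_mean / load              # deterministic: > 1 means "safe"

def failure_probability(load: float) -> float:
    # probabilistic: P(capacity < load) for a normally distributed capacity
    z = (load - capacity_mean) / capacity_sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for label, load in (("design load", design_load), ("unforeseen load", unforeseen_load)):
    print(f"{label}: margin {reserve_factor(load):.2f}, "
          f"P(fail) {failure_probability(load):.1e}")

# At the design load both verdicts are reassuring (margin 1.5, P(fail) ~3e-7).
# At the unforeseen load both are wrong in exactly the same way, because
# neither assessment was asked the right question.
```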

Just as the probability calculated by PSA may not be the absolute failure probability, so the demonstration of a deterministic reserve margin may not be the absolute guarantee of integrity it appears to be. And yet many people do make this interpretational error. So I have to ask: is this a wilful misinterpretation? This is the interesting sociological question. John Downer hints at this when he writes, “although nuclear engineers have continued to discuss the nuances of probabilism, such discussions have all but disappeared from the dialogues between those experts and the public or policymakers they serve… Assessment experts may be aware of the limitations of their calculations, in other words, but on an institutionalised level, at least, they actively occlude such limitations from non-specialists.”

I would argue that it is not engineers who “actively occlude such limitations” so much as a phenomenon which arises sociologically. The underlying sociological dynamic might be described thus,

  • We, the public, have an exaggerated fear of radioactivity,
  • We therefore insist on absolute assurances about nuclear accidents,
  • As a result, developers of nuclear power plant are placed in the position of re-assuring the public in terms defined by the public’s fear,
  • Fear is assuaged by absolute assurances, not by apparent prevarication – which is how any discussion of “limitations” would inevitably be perceived,
  • Hence only assurances which are expressed in absolute, or near-absolute, terms are acceptable to the public.

In short: the public demands an impossible absolutist assurance, this assurance is duly given because it is obligatory, and, when reality bites, the industry is held to be culpable. And, of course, it is. But so is the public. Because the public are also beneficiaries.

The interesting sociological phenomenon is that the public are permitted to consider themselves decoupled from any responsibility for endeavours from which they benefit.

The issue is not unique to nuclear power, but it is particularly acute in the nuclear case.

For a conventional instance of the same phenomenon, imagine a newly opened bridge. The designer is asked by the press whether the bridge is safe. Sociologically the only possible answer is “yes”. No prevarication, no provisos, no qualifications are acceptable. Even a second’s hesitation in delivering the answer “yes”, or a less than emphatic tone, could fatally undermine public confidence. The designer will know this and will respond in accord with the sociological context. The designer will know that the answer “ah, but there are degrees of safety, you know” is not appropriate in the public context of the question. Designers know that a 500-page exposition of fracture mechanics, fatigue and structural resonance theory is not what they are being asked for. They know that bridges do sometimes fail. But they also know that they are required, by societal pressure, to carry 100% of the responsibility for the safety of their bridge. They are happy to go along with this sociological obligation because they do, in fact, have great confidence in their bridge. But they know that they are not God. They know they are fallible. And they know that the sociological context obliges them to pretend otherwise.

Is this fair?

The benefit is to the whole of society, but the responsibility is polarised absolutely.

Do we allow the public to be infantilised?

The public is not entitled to expect absolute certainty in safety assessments for the simple reason that absolute certainty is impossible. But the public do expect this. Those responsible for major projects should be judged against stringent, but humanly achievable, standards. And they should be held accountable when there are failures. Nevertheless, there is – or should be – a shared responsibility between the public and the involved industries. That there should be a shared responsibility follows because there is a shared benefit. This is a perspective which has been lost in modern discourse.

6. In Summary

There are indeed interesting sociological issues surrounding Fukushima and nuclear plant generally. But I have argued here that our continued commitment to nuclear plant despite past failures is not one of them, since this merely mirrors the same “learn-and-progress” approach which is accepted in the conventional domain. Rather, the interesting sociological phenomenon is the differing public perception of nuclear and non-nuclear disasters, which John Downer’s paper exemplifies. Why can vast numbers of conventional disasters, with their attendant huge death toll, be accepted by the public with relative equanimity, whilst nuclear disasters, real or imagined, excite far greater public concern? The reason appears to be more sociological than logical. It would be of interest to see it addressed.

Downer raises some valid issues in respect of the over-interpretation of PSA by the nuclear industry. Whilst the industry must be held culpable for this (in my view), there is another sociological issue here which it would be interesting to see addressed: what pressures, societal or otherwise, have led to this over-interpretation becoming prevalent?

The obligation upon sociology, if it is to play an active role in these issues rather than merely being a passive observer, is to facilitate a truly rational appraisal of them. With energy shortages and unaffordable energy prices now a reality, it is past time to clear the way for public enthusiasm for a nuclear renaissance in the UK, based upon an adult engagement with the issues, not an infantilised one.

Appendix: Emergency Response

I have not attempted in the main text, above, a blow-by-blow review of the criticisms John Downer raises. My main point would always be that the industry can be improved, and hence criticism should be heard constructively. However, some points in respect of emergency preparedness require a response because they misrepresent the factual position.

John Downer opines that there is “an institutionally deep-rooted confidence that contingency planning is unnecessary for nuclear disasters”, though he also notes, “that is not to say that disaster planning is entirely absent in the nuclear sphere, but rather that it is routinely insincere and insufficient”.

The accusation of insincerity is certainly false. The attitude within the industry is the opposite of complacent. The responsibility for nuclear safety invested in the staff is felt at all levels in the organisation. I can testify to this from first-hand experience over a full working lifetime.

However, the accusation of “insufficiency” is another matter. Significant enhancements to emergency arrangements have been made post-Fukushima, which is a tacit admission that the previous position was not as robust as it should have been. But emergency planning arrangements have always been treated with the utmost seriousness.

In fact all EDF Energy’s nuclear stations have a nuclear emergency exercise annually. Shutdown and decommissioned stations continue to have such exercises, as do MOD nuclear facilities. In total there are at least 35 nuclear emergency exercises annually in mainland UK. I myself took part in such exercises annually for about 16 years. They always involve remote support from the Central Emergency Support Centre (CESC) as well as the affected station, and, depending on the level of the exercise, also involve the local police force, government departments, and other external bodies who would be involved in a real event.

John Downer is critical of operator training for unanticipated accident sequences, claiming that operators in the USA were not required (in 2012) to demonstrate knowledge of relevant guidance in these circumstances. The civil nuclear industry in the UK has had Severe Accident Guidelines (SAGs) and Symptom Based Emergency Response Guidance (SBERGs) for decades. Much of the functionality of these is incorporated into the control room Tech Specs which are used as routine by operators in the UK.

So, I believe the existing arrangements in respect of emergency nuclear response in the UK are considerably more robust than John Downer portrays.

Nevertheless he again has a point as regards resilience. As a response to Fukushima, so-called stress tests were carried out across Europe, and in the UK in particular. Had arrangements been perfect, this would not have resulted in any action being necessary. That was not the case. Substantial improvements have been made to the facilities available to respond to nuclear emergencies within the UK, including permanently available, mobile resources such as pumps, generators, etc.

Is nuclear power safe?

No.

There is no safety to be had in this world. End of. Get used to it.

Ah, but is nuclear power safe enough?

That is a different, and more sensible question. The public do not like the idea that there are degrees of safety. They beg to be infantilised by some authority figure who will pretend to provide the absolute guarantee of safety they crave. A bridge, a dam, a power station – the public seeks only to be told “they are safe”. No ifs, no buts, no “degrees of safety”.

Nuclear safety is an important issue. I will have more to say about it. But I start with an essay I wrote seven years ago in response to a visiting academic in the sociology department of the University of Bristol. You can read his paper first if you wish, but his key points are quoted in my essay so there is no need. The thrust of his article was that the nuclear accident at Fukushima in 2011 demonstrated that the nuclear industry’s claims about plant safety could not be trusted. I have only updated a few details since the 2015 text.

**************

A Personal Commentary on: Disowning Fukushima: Managing the Credibility of Nuclear Reliability Assessment In The Wake Of Disaster by John Downer

1. What is the real sociological issue?

How can the assurances of nuclear experts remain credible in the wake of Fukushima?” asks John Downer. It’s a reasonable question. Rather dramatically he asserts,

The only fact that Fukushima demonstrates absolutely unambiguously is that devastating oversights can exist in what authoritative experts ardently claim to be rigorous, objective and conservative risk calculations.

Leaving aside the word “only”, this is a true statement.

The thrust of John Downer’s argument is that nuclear disasters like Fukushima demonstrate the inadequacy of the nuclear assessments of the failed plant, which claimed such failures would not happen. On this basis he challenges the credibility of continuing to justify nuclear facilities based on similar assessments, specifically probabilistic safety assessments (PSA).

As regards the first of these claims, he is obviously correct.

Examination of the second claim is the main purpose of this response. As regards the weaker claim that probabilistic safety assessment failed in particular instances (e.g., of core meltdowns), this may be substantiated on the basis of the contrast between the assessed and the experiential failure frequency. There have been four indisputable core destructions over the last 65 years: Windscale, Three Mile Island, Chernobyl and Fukushima. The latter was actually three reactor meltdowns. There was also an INES Level 5 core damage accident at Chalk River, Canada, in 1952.

(The Kyshtym disaster in the Mayak reprocessing plant, Russia, in 1957, involved the explosion of a storage tank, an INES Level 6 event which caused nearly 200 late cancer deaths, though this was not reactor related. There have also been five reactor accidents which cost the order of hundreds of millions of dollars to repair and led to radiological releases, but invariably localised to near the facility and not INES rated. Several other reactor accidents were similarly expensive to fix but without radiological consequences).

The frequency of meltdowns on the basis of this actual plant experience is greater than 10-4 per reactor year (pry), more than two orders of magnitude greater than design basis claims for new reactors.  

Downer perceives a wilful refusal by the industry to acknowledge the shortcomings of nuclear design assessments demonstrated by these catastrophes. He argues that a unique sociological phenomenon is at work here; unique, that is, to nuclear power and PSA in particular.

But this is a most odd position to adopt. In truth, every engineering failure is an example of the failure of its design basis. There is nothing unique about nuclear accidents in this respect. Nor is there anything unique about probabilistic assessments in this respect. Every bridge or dam or aeroplane ever built was claimed to be safe. And every one that failed is a failure of that assurance, be it based on probabilistics or traditional deterministic assessments.

How are we to respond to such engineering failures, whether nuclear or conventional? Are we to cease building dams and bridges and aeroplanes – and nuclear power stations – because there have been failures? Or should we seek to improve our construction of them, including the reliability of their design basis? I know the public’s answer to this question in the case of conventional structures. It is evident in society’s attitude which is implicit in their behaviour: an undiminished enthusiasm for conventional engineering industries and their products.

Why is the answer problematic in the case of nuclear?

The interesting sociological question is not that nuclear power stations continue to be designed and built despite the history of failures, for this is merely business as usual for engineering ventures. Failures have not stopped bridges and dams being built, and accidents have not stopped travel by air, sea, rail and road. The interesting sociological question is why nuclear is perceived as different by John Downer and much of the public. Why the accusation of “disowning Fukushima” but no accusation of “disowning Piper Alpha” or “disowning Deepwater Horizon” or “disowning Torrey Canyon” or “disowning Hyatt Regency” or “disowning Mississippi I-35W” or “disowning the Banqiao Reservoir Dam” or “disowning the Herald of Free Enterprise” or “disowning MS Estonia“….a list that could run to thousands?

Is it because nuclear accidents have a far more adverse human impact than conventional accidents? The evidence is the very opposite. Or is it because nuclear accidents are perceived as more severe by the public? And if this is a case of public perception being misaligned with reality, is this not the true sociological issue?

One might observe, correctly, that all the above conventional disasters led to improvements in engineering or operating standards. But this marks no distinction with the nuclear case because exactly the same is true in the nuclear context. John Downer appears to be unimpressed by a reaction to Fukushima which is merely to implement improvements in standards of engineering, operation and emergency preparedness, dismissing these as “the redemption defence”. Similarly, the investigative post-mortems to identify causes, a necessary precursor to improvement, are rather unfairly categorised as attempts to excuse, i.e., the “interpretive defence”, the “relevance defence” and the “compliance defence”.

Instead of accepting this process of improvement, Downer promotes the view that past nuclear failures fatally compromise the credibility of any future assurances regarding the safety of nuclear plant – on principle – a view which is inconsistent with the universal attitude towards conventional engineering structures and facilities. For example he writes,

These sociological arguments about why nuclear risk calculations must be insufficient are important, especially given that expert authorities are inherently unwilling to undermine their own credibility and self-interest in a sustained way (Wynne 2011). At the same time, however, they can obscure a more interesting sociological question about why Fukushima (and, indeed, the disaster-punctuated history of nuclear power more broadly) doesn’t speak for itself.

But we do not permit the vastly greater catalogue of conventional disasters to “speak for themselves” as regards deciding to terminate the building of dams, bridges, aeroplanes, ships, oil rigs, etc. That is, we do not refuse to remount the horse that has thrown us in the conventional context.

The interesting sociological question is not why the response to nuclear failures is to make improvements and continue the endeavour. That is not the interesting question because it is simply the same attitude that we adopt in the non-nuclear context. The interesting sociological question is why is there a widespread societal attitude, exemplified by Downer’s paper, that the reaction to nuclear accidents should be absolutist, unforgiving and condemnatory. Is this distinction in attitude to the nuclear actually justified? Is it, in fact, logical – or is it sociological – or even ideological?

2. The public perception of risk

The following extract from Downer’s paper is very telling regarding the perception of risk,

“..airplanes very occasionally, but nevertheless routinely, crash, even though any specific crash is always unexpected. We accept such accidents as an inevitable cost of the technology and we anticipate them in our plans and institutions. The same logic does not hold in the nuclear sphere, however. Reactors, in this respect, are more comparable to dams or bridges, in that public decisions about them are predicated on an understanding that the chance of them failing catastrophically is so low as to be negligible. Modern democracies, we might say, are institutionally blind to the possibility of nuclear meltdowns.”

In truth, I would argue, reactors are not perceived by the public, or by John Downer, as “comparable to dams or bridges“. Whatever might be the case for nuclear plant, it is certainly not the case for dams and bridges that “public decisions about them are predicated on an understanding that the chance of them failing catastrophically is so low as to be negligible.” It is interesting that John Downer appears to believe this. Here are some bald facts about dam and bridge failures.

In the ten years 2000 to 2009 alone, more than 200 notable dam and reservoir failures occurred worldwide. In the USA alone there have been 1,645 dam failures since records began in 1848, an average of about 10 per year. Dam failures have occurred in every US state. The worst decade on record for US dam failures was 1990 to 1999 in which there were 451 dam failures. Whilst only 4% of US dam failures lead to loss of life, in the period from 1850 to 2017 an estimated 3,495 fatalities have occurred as a result of 64 dam failure events. Even when loss of life does not occur, dam failures have frequently caused immense property and environmental damage.

Starting the clock from 1957, the year of the Windscale fire, worldwide there have been 12 dam failures which killed over 100 people, and 4 which killed over 1000. People still remember the Windscale fire, but who remembers the Monte Toc reservoir failure in Italy in 1963 which killed an estimated 1,917 people? Dam construction led to water pressure overloading of the valley walls which collapsed and wiped out several villages.

In 1985, the year before Chernobyl, the Stava Tailings Dam in Italy failed. Along its path the mud killed 268 people and completely destroyed 3 hotels, 53 homes, and six industrial buildings; 8 bridges were demolished and 9 buildings were seriously damaged. Do you recall it? No? Yet the death toll was far bigger than the direct death toll at Chernobyl.

In 1979, the year of the Three Mile Island meltdown, the Machchu-2 Dam in Morbi, India, failed due to heavy rains and flooding beyond spillway capacity. The death toll is unknown, the Government’s estimate of 1,000 being almost certainly a serious under-estimate, the opposition putting it at 20,000. No one was killed at Three Mile Island, and there was no significant off-site release of radioactivity. Yet which incident is remembered? Is this a sociological phenomenon?

But the Chinese beat everyone by more than an order of magnitude. In 1975 the failure of the Banqiao and Shimantan Reservoir Dams and other dams in Henan Province, China caused more casualties than any other dam failure in history by far. The disaster killed an estimated 230,000 people and 11 million people lost their homes. A state-controlled newspaper maintained that the dam was designed to survive a once-in-1000-years flood (300 mm of rainfall per day) but a once-in-2000-years flood occurred in August 1975, following 1000mm of rain in one day due to the collision of Typhoon Nina and a cold front. Sound familiar? The dam was subsequently rebuilt. The Chinese kept the disaster secret until the 1990s.

What about bridges? Are they too “as safe as houses” (i.e., not so safe they don’t fall down sometimes)?

There have been at least 214 documented bridge failures since 1950, 137 of which occurred in the last 22 years, an average of one every couple of months. 37 killed more than 10 people, 9 killed more than 100 people. In the USA in 1981 an overhead walkway inside a Hyatt Regency hotel in Kansas City collapsed killing 114 and injuring 216 more.

The same year as the Windscale fire, 1957, saw a train crash on the St Johns Station railway bridge, Lewisham. The bridge collapsed causing 90 deaths and 173 injuries. We remember the Windscale fire, which caused no direct deaths, but this bridge collapse has left no lasting impression on the public psyche, it merely joins the immeasurably long list of ‘conventional’ disasters and accidents.

Does this history tally with John Downer’s view that public decisions about dams and bridges “are predicated on an understanding that the chance of them failing catastrophically is so low as to be negligible”? If so it could only rest upon the public’s woeful ignorance of reality.

Is there some sociological phenomenon at work which leads to public acceptance of these structures as adequately safe despite their history of failures? Is the sociological response to these conventional disasters reasonable or unreasonable? And if the muted societal response to these conventional disasters is reasonable, is the societal response to nuclear accidents unreasonable? These, surely, are the interesting sociological questions.

From the litany of conventional disasters, one might ask whether the issue is not so much “denying Fukushima” as “denying thousands of dam failures and hundreds of bridge failures” – to name but two engineering structures. In fact, neither accusation of denial is valid. The response of the engineering industries to both, conventional and nuclear, is broadly the same and the very opposite of denial.

Downer asks, “how can the assurances of nuclear experts remain credible in the wake of Fukushima?” But one might just as well ask why the public should continue to have confidence in the designers of dams, bridges, ships, aeroplanes, etc.? Yet walking over the Haweswater dam or driving over the Severn Bridge or flying across the Atlantic excite no public fear. Why the difference, and which of the conflicting reactions to nuclear and non-nuclear accidents is the more rational? This is the interesting sociological question.

The engineering world is, and always will be, fallible. But the acceptability of the risks attendant upon any engineering venture can be judged only when set against the benefits. Failure to consider benefits renders any discussion of the acceptability of risk meaningless, because no risk is acceptable without benefit.

The obligation upon the engineering community is to admit failures when they occur, to be open about their causes, and to continuously learn from experience based on an honest acceptance of fallibility. The interests of society are served by ensuring the benefits outweigh the risks. Confidence in decision making requires a rationally based appraisal of both benefits and risks. The obligation upon sociology, if it is to play an active role in these issues rather than merely that of an observer, is to facilitate a truly rational appraisal of these issues.  

3. The human cost of nuclear accidents

The public perception of nuclear accidents is one of almost supernatural horror. The reality is not so extreme. If immediate deaths were the full story, the nuclear accidents to-date would be unworthy of consideration. There were no direct deaths at all at three of the four nuclear disasters (Windscale, Three Mile Island and Fukushima). Even at Chernobyl the immediate death toll was ‘only’ about 30, the majority due to acute radiation sickness (ARS) or shortly following ARS.

When the 2011 Japanese earthquake and tsunami are mentioned, people immediately think “Fukushima”. And yet, of the 20,000 people killed in Japan in just a couple of days during that disaster, none were at Fukushima – not one. Even with the benefit of hindsight, the briefly entertained spectre of an evacuation of Tokyo still has the power to horrify, as Downer exploits. Yet for eight years now there have been many hundreds of staff working full time on the Fukushima Daiichi site in rebuilt offices, and dining on food grown in Fukushima. Yet again, the extreme contrast between reality and perception raises obvious, but unaddressed, sociological questions.

Fukushima was undoubtedly a huge financial catastrophe. Not only was a power station destroyed, but the clean-up operation will take many decades and cost eye watering sums of money. But as a human disaster it was a non-event. In fact, it is somewhat distasteful to focus on Fukushima in view of the wholesale deaths elsewhere in Japan at the time.

The rest of this section will focus on Chernobyl because this is the only one of the four meltdown accidents with a human death toll. Also, Chernobyl was about as bad as a nuclear accident could be. RBMK reactors are large, and about one-third of its radiological inventory went up in smoke. Consequently Chernobyl is an important benchmark as to how bad nuclear accidents can be.

Starting with people who are known to have been exposed to high doses or were involved in clean-up operations: 134 people suffered acute radiation sickness (ARS), of whom 28 died of radiation effects within four months and another two died from injuries. A further 19 of the ARS survivors died in the twenty years after the accident (i.e., as of 2006), mostly of causes not attributable to radiation exposure. Among the ARS survivors there have been four cases of solid cancers, three cases of myelodysplastic syndrome, one case of acute myelomonoblastic leukaemia and one case of chronic myeloid leukaemia. In addition, radiation-induced cataracts are common amongst the ARS survivors. All the ARS sufferers at Chernobyl were station workers; none were members of the public.

The relatively low death rate amongst those who suffered ARS (i.e., very high doses) suggests that the hundreds of thousands of people who took part in recovery operations, and who were subject to far lower doses, would be unlikely to suffer serious ill effects. A higher incidence of cataracts might be expected amongst those receiving the highest doses. The 2008 UNSCEAR report, Annex D, also notes,

Amongst adults, the most meaningful evidence (regarding leukaemia) comes from studies of the recovery operations workers. Although at this time, some evidence exists of an increase in the incidence of leukaemia among a group of recovery operation workers from the Russian Federation, this is far from conclusive.

About 600,000 people were involved in recovery operations and so potentially were exposed to larger radiation doses than the general public. The average dose to recovery workers has been conservatively estimated at 120 mSv, which compares to typical background radiation levels of ~2 mSv/year, and to the industry dose limit of 20 mSv/year. Apart from the potential increase in the incidence of leukaemia and cataracts among those who received higher doses, as noted above, there is no evidence of health effects that can be attributed to radiation exposure.
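To put those figures in perspective, 120 mSv is the equivalent of about 60 years of natural background (120 ÷ 2), or six years at the occupational limit (120 ÷ 20), accumulated over the period of the recovery work.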

The reason the possible increase in leukaemia amongst clean-up workers is described as “far from conclusive” is as follows. The magnitude of any leukaemia effect must be estimated from models which are intrinsically very uncertain because they involve the product of a very small probability and a large number of people. The 2006 Chernobyl Forum report estimates perhaps 4000 eventual fatalities from all forms of cancer. Unfortunately, a figure of this magnitude, or smaller, will never be confirmable from epidemiological studies because it is a small fraction of the naturally expected number of cancer fatalities in this population (around 100,000). Thus, the 4000 figure is only a 4% elevation in the natural rate, far smaller than the error in its estimation (i.e., consistent with zero if error bars were added).
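The arithmetic (my own back-of-envelope illustration, not a figure from the Forum report) runs as follows: 4,000 out of ~100,000 is a relative excess of 4%. To resolve a 4% shift epidemiologically, the baseline cancer mortality of this specific cohort would need to be known to substantially better than 4%, and decades of changing demographics, screening practice and lifestyle make that impossible. The predicted signal is smaller than the noise within which it must be sought.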

Turning now to the general public. The 2006 Chernobyl Forum report tells us,

Among the 5 million persons residing in other ‘contaminated’ areas (i.e., other than the Chernobyl site itself), the doses are much lower and any projected increases are more speculative, but are expected to make a difference of less than one per cent in cancer mortality. Such increases would be very difficult to detect with available epidemiological tools, given the normal variation in cancer mortality rates. So far, epidemiological studies of residents of contaminated areas in Belarus, Russia and Ukraine have not provided clear and convincing evidence for a radiation-induced increase in general population mortality, and in particular, for fatalities caused by leukaemia, solid cancers (other than thyroid cancer), and non-cancer diseases.

The 2008 UNSCEAR report, Annex D, repeats the same message in the context of the wider public, concluding,

There appears, at present, to be no hard evidence of any measurable increased incidence of all solid cancers taken together among the populations of the Russian Federation and Ukraine.

Among those exposed in utero and as children, no persuasive evidence has been found of a measurable increase in the incidence of leukaemia attributable to radiation exposure. This is not unreasonable given that the doses involved were generally small, comparable with natural background doses, and therefore epidemiological studies lack the statistical power to confirm any radiation-related increases had they occurred.

However, there is a strong link between childhood thyroid cancer and the radiation released by Chernobyl, the incidence of this cancer amongst the under-18s being sharply peaked over the five years following the accident. 6,848 thyroid cancers have been seen among those in the three republics who were under 18 at the time of the accident, of which a substantial fraction is likely to have been due to radiation exposure. Fortunately, the survival rate for this type of cancer is extremely high.

For children born after 1986 there was no increase in thyroid cancer rate. For adults in the general population at the time of the accident there is no evidence of increased thyroid cancer.

The 2006 Chernobyl Forum report states that there had been, by that time, 15 deaths due to thyroid cancer which could be attributed to childhood exposure to radioactive iodine (hence to Chernobyl).

It is worth noting, however, that even successful treatment for thyroid cancer may have adverse health impacts. It is also worth noting that this problem could largely have been avoided had prompt action been taken to distribute potassium iodate pills more widely than was done.

Other than thyroid cancer, the 2008 UNSCEAR report, Annex D, concludes,

To date, there has been no persuasive evidence of any other health effect in the general population that can be attributed to radiation exposure.

In summary, the number of deaths deterministically attributable to Chernobyl is around 54. Any excess cancer deaths have been estimated to be a sufficiently small percentage of the normally expected cancer death rate that they are unlikely to be discernible from natural variations in that rate. The most credible estimate is ~4000 excess deaths from all forms of cancer amongst the recovery workers, a population in which around 100,000 cancer deaths would be expected in any case. Even this larger figure leaves the Chernobyl death toll merely comparable with the more severe conventional accidents.

4. PSA

I note in passing that the Downer paper states that some projected accident frequencies are claimed to be calculated “to seven decimal places”. This is a preposterous misrepresentation. I presume the author is actually referring to claims that a frequency is 10⁻⁷, which is not at all the same thing: written out, 10⁻⁷ is 0.0000001, and the seven decimal places reflect the smallness of the number, not any claim of precision. Indeed, such numbers are generally not even claimed to be accurate to one significant figure.

Nevertheless, I have considerable sympathy with the view that extremely low probabilities of failure, like 10⁻⁶ pry, are not calculable. One reason is the uncertainty in the probabilities of the individual events of which any fault sequence is composed. But the more serious problem is that it is fundamentally impossible to list every possible scenario, and hence one cannot even begin to quantify the total failure probability. This point is made forcefully in Downer’s paper and is correct in my opinion.
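The first of these problems, the uncertainty in the individual event probabilities, can be made concrete with a minimal sketch. The figures below are invented for illustration, not taken from any real PSA: a fault sequence of four independent events whose best-estimate probabilities multiply to 10⁻⁷ pry, with each estimate trusted only to within a (lognormal) factor of three.

```python
import math
import random

random.seed(1)

# Hypothetical fault sequence: four independent events whose best-estimate
# probabilities multiply to give the sequence frequency (illustrative values).
best_estimates = [1e-2, 1e-2, 1e-1, 1e-2]   # product = 1e-7 per reactor year

# Assume each estimate carries a lognormal "error factor" of 3, i.e. 90% of
# plausible true values lie within a factor of 3 of the best estimate.
ERROR_FACTOR = 3.0
sigma = math.log(ERROR_FACTOR) / 1.645

def sample_sequence_frequency():
    freq = 1.0
    for p in best_estimates:
        freq *= p * math.exp(random.gauss(0.0, sigma))
    return freq

samples = sorted(sample_sequence_frequency() for _ in range(100_000))
print(f"5th percentile:  {samples[5_000]:.1e} pry")
print(f"median:          {samples[50_000]:.1e} pry")
print(f"95th percentile: {samples[95_000]:.1e} pry")
```

The 90% band spans roughly two orders of magnitude, from around 10⁻⁸ to nearly 10⁻⁶. And this reflects only the first problem; scenarios never listed in the first place contribute nothing to the calculation at all.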

For probabilities which are not too small this problem does not arise, because scenarios more exotic than those actively considered may be dismissed intuitively as far less probable. Where the boundary lies I do not know, but I would guess around 10⁻⁴ pry.

This does not, of course, mean that PSA is not worth doing. The best should not be made the enemy of the good. PSA is a useful discipline for weeding out sequences of events which might otherwise lead to unacceptably large failure probabilities. Such studies can, and do, have very practical engineering consequences, for example, in terms of influencing the number of pumps, valves, etc., which are required to drive the assessed probability down to an acceptable level. That the calculated probability may not really be the absolute failure probability does not detract from this.
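As a concrete illustration of that practical influence (the figures are hypothetical, chosen for arithmetic clarity), consider how redundancy decisions follow from exactly this kind of calculation:

```python
# Illustrative only: how PSA-style arithmetic drives redundancy decisions.
# Suppose each pump independently fails to start on demand with p = 0.01
# (a hypothetical figure).
p_single = 0.01

for n_pumps in range(1, 5):
    p_all_fail = p_single ** n_pumps
    print(f"{n_pumps} pump(s), independent failures: P = {p_all_fail:.0e}")

# Independence is optimistic: a common-cause failure (a shared power supply,
# a common maintenance error) can disable every pump at once. Adding a
# beta-factor common-cause term -- here 5%, a rough conventional figure --
# puts a floor under the achievable probability however many pumps are added.
beta = 0.05
for n_pumps in range(1, 5):
    p_total = p_single ** n_pumps + beta * p_single
    print(f"{n_pumps} pump(s), with common cause:   P = {p_total:.1e}")
```

On these numbers a third pump buys essentially nothing: the common-cause floor of 5 × 10⁻⁴ dominates. It is precisely this sort of finding, not the headline bottom-line probability, that gives PSA its engineering value.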

Nevertheless, Downer is right to criticise the industry for giving the impression that failure probabilities as small as 10⁻⁶ or 10⁻⁷ pry are calculable. My opinion, like his, is that they are not. But is the industry alone to blame for this convenient over-interpretation of the results of PSA? Or has this attitude been effectively forced upon it by outside influences which require the industry to express its confidence in these terms?

Downer opines,

We would be better positioned to govern the atomic age if we could institutionalise the idea that nuclear risk assessments are contestable judgments more than they are objective truths.

Again I agree, with the proviso that “contestable judgment” is not taken to mean “unfounded” or “anyone’s guess”. Both conventional engineering assessment and PSA provide high levels of confidence, far higher than would be available from mere qualitative judgment. Numerical calculation is, and will remain, crucial. The issue is only that the outcome should not be misrepresented as certainty, or as a definitive statement regarding absolute failure probabilities where these are extremely small (e.g., 10⁻⁶ pry or smaller).

Within the nuclear industry there is a widespread conception that arguments in safety cases may be of two kinds: numerical or judgmental. I have long argued that this is a false dichotomy. In truth, all arguments are judgmental – and so all safety cases are judgmental. That there appear to be two distinct types of argument results only from how, and by whom, the judgments are made. In the case of numerical calculations the judgments are made by engineers and are invisible, or less visible, to those entrusted with independent assessment. This absolves those who should rightly be responsible for confirming judgments from the necessity of doing so, since they may never be exposed to the judgments implicit in the calculations, or may lack the expertise to identify them. Numerical assessments are, in effect, a means by which responsibility is delegated to less senior levels in an organisation.

Engineers are aware that their calculations are only as good as the set of assumptions on which they are based. But it may be convenient for others to interpret numerical assessments as absolute assurances. They are not. All calculations are also judgments – it is merely that the judgments are made by the engineer in choosing the input assumptions. This conveniently obviates any obligation upon other parties to exercise judgment, and hence can obscure the fact that judgments are involved at all. Worse, it can encourage a belief that no judgments are involved. This is Downer’s point, and it is a point well made. By pushing responsibility down to the engineer, those higher up the salary scale conveniently avoid it.

But it is not only within the industry itself that this effect operates. A similar phenomenon allows the public to disengage from responsibility.

5. Probabilistic versus Deterministic and the Public

Every engineering structure which has failed implies the failure of its design assessment. The reader may demur on the grounds that, in some cases, the failure may be due to the structure being subjected to unforeseen loads or conditions “beyond design basis”. But this is just semantics, since the design assessment may still be regarded as at fault for failing to bring those loads or conditions within the design basis. Admittedly this is a rather harsh perspective in the case of deliberate sabotage.

In the vast majority of cases the design assessments will be traditional, deterministic assessments: satisfaction of design code requirements or the demonstration of a deterministic reserve margin. It is not unique to probabilistic assessment that a structural failure implies an assessment failure; that is, a failure of the engineer’s duty to protect. But this does not, and should not, undermine the use of design codes. The correct response is to identify the shortcomings in the design assessment and remedy them. This may involve improving the engineering calculations, or improving the plant itself, or rectifying its operation. But whether design calculations are deterministically or probabilistically based is not the central issue. It is inappropriate, therefore, to regard the basic approach of probabilistic assessments as being invalidated by the core meltdowns which have occurred.

The interesting sociological issue is how numerical engineering assessments are interpreted. Misinterpretation of the meaning, or reliability, of an engineering assessment can apply to traditional deterministic assessments as much as to PSA. Both types of assessment are prone to misinterpretation, and the reason is the same. Both types of assessment are only as good as the assumptions upon which they are based. If the condition which destroys the plant was unforeseen, both types of assessment will be equally wrong.
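A toy example (my own construction, with purely invented numbers) shows how the two kinds of assessment stand or fall together on the same assumption:

```python
# Toy illustration: a deterministic margin check and a probabilistic
# exceedance estimate built on the SAME assumed flood statistics.

wall_height_m = 8.0
assumed_100yr_flood_m = 5.0    # assumed 1-in-100-year flood level
assumed_decay_m = 0.5          # assumed tail: +0.5 m per factor of 10 in rarity

# Deterministic assessment: demonstrate a reserve margin against the
# design-basis flood.
margin = wall_height_m - assumed_100yr_flood_m
print(f"Deterministic reserve margin: {margin:.1f} m -> 'adequate'")

# Probabilistic assessment: annual frequency of overtopping, extrapolating
# the assumed tail beyond the design basis.
freq = 1e-2 * 10 ** (-(wall_height_m - assumed_100yr_flood_m) / assumed_decay_m)
print(f"Assessed overtopping frequency: {freq:.0e} per year")

# If the true tail decays more slowly -- say +1.5 m per factor of 10 -- the
# same wall overtops at 1e-4 per year, ten thousand times the assessed
# figure. Margin and probability alike were artefacts of the tail assumption.
true_decay_m = 1.5
true_freq = 1e-2 * 10 ** (-(wall_height_m - assumed_100yr_flood_m) / true_decay_m)
print(f"Actual overtopping frequency:   {true_freq:.0e} per year")
```

Both the comfortable 3 m margin and the comfortable 10⁻⁸ per year derive from the same assumed flood statistics; if that assumption is wrong, both fail together, and in the same direction.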

Just as the probability calculated by PSA may not be the absolute failure probability, so the demonstration of a deterministic reserve margin may not be the absolute guarantee of integrity it appears to be. And yet many people do make this interpretational error. So I have to ask: is this a wilful misinterpretation? This is the interesting sociological question. John Downer hints at this when he writes, “although nuclear engineers have continued to discuss the nuances of probabilism, such discussions have all but disappeared from the dialogues between those experts and the public or policymakers they serve… Assessment experts may be aware of the limitations of their calculations, in other words, but on an institutionalised level, at least, they actively occlude such limitations from non-specialists.”

I would argue that it is not engineers who “actively occlude such limitations” so much as a phenomenon which arises sociologically. The underlying sociological dynamic might be described thus,

  • We, the public, have an exaggerated fear of radioactivity,
  • We therefore insist on absolute assurances about nuclear accidents,
  • As a result, developers of nuclear power plants are placed in the position of reassuring the public in terms defined by the public’s fear,
  • Fear is assuaged by absolute assurances, not by apparent prevarication – which is how any discussion of “limitations” would inevitably be perceived,
  • Hence only assurances which are expressed in absolute, or near-absolute, terms are acceptable to the public.

In short: the public demands an impossible absolutist assurance, this assurance is duly given because it is obligatory, and, when reality bites, the industry is held to be culpable. And, of course, it is. But so is the public. Because the public are also beneficiaries.

The interesting sociological phenomenon is that the public are permitted to consider themselves decoupled from any responsibility for endeavours from which they benefit.

The issue is not unique to nuclear power, but this phenomenon is particularly acute in the case of nuclear power.

For a conventional instance of the same phenomenon, imagine a newly opened bridge. The designer is asked by the press whether the bridge is safe. Sociologically the only possible answer is “yes”. No prevarication, no provisos, no qualifications are acceptable. Even a second’s hesitation in delivering the answer “yes”, or a less than emphatic tone, could fatally undermine public confidence. The designer will know this and will respond in accord with the sociological context. The designer will know that the answer “ah, but there are degrees of safety, you know” is not appropriate in the public context of the question. Designers know that a 500-page exposition of fracture mechanics, fatigue and structural resonance theory is not what they are being asked for. They know that bridges do sometimes fail. But they also know that they are required, by societal pressure, to carry 100% of the responsibility for the safety of their bridge. They are happy to go along with this sociological obligation because they do, in fact, have great confidence in their bridge. But they know that they are not God. They know they are fallible. And they know that the sociological context obliges them to pretend otherwise.

Is this fair?

The benefit is to the whole of society, but the responsibility is polarised absolutely.

Do we allow the public to be infantilised?

The public is not entitled to expect absolute certainty in safety assessments for the simple reason that absolute certainty is impossible. But the public do expect this. Those responsible for major projects should be judged against stringent, but humanly achievable, standards. And they should be held accountable when there are failures. Nevertheless, there is – or should be – a shared responsibility between the public and the involved industries. That there should be a shared responsibility follows because there is a shared benefit. This is a perspective which has been lost in modern discourse.

6. In Summary

There are indeed interesting sociological issues surrounding Fukushima and nuclear plant generally. But I have argued here that our continued commitment to nuclear plant despite past failures is not one of them, since this merely mirrors the same “learn-and-progress” approach which is accepted in the conventional domain. Rather, the interesting sociological phenomenon is the differing public perception of nuclear and non-nuclear disasters, which John Downer’s paper exemplifies. Why can vast numbers of conventional disasters, with their attendant huge death tolls, be accepted by the public with relative equanimity, whilst nuclear disasters, real or imagined, excite far greater public concern? The reason appears to be more sociological than logical. It would be of interest to see it addressed.

Downer raises some valid issues in respect of the over-interpretation of PSA by the nuclear industry. Whilst the industry must be held culpable for this (in my view), there is another sociological issue here which it would be interesting to see addressed: what pressures, societal or otherwise, have led to this over-interpretation becoming prevalent?

The obligation upon sociology, if it is to play an active role in these issues rather than merely being a passive observer, is to facilitate a truly rational appraisal of them. With energy shortages and unaffordable energy prices now a present reality, the time is overdue to clear the way for public enthusiasm for a nuclear renaissance in the UK, based upon an adult engagement with the issues, not an infantilised one.

Appendix: Emergency Response

I have not attempted in the main text, above, a blow-by-blow review of the criticisms John Downer raises. My main point would always be that the industry can be improved and hence criticism should be heard constructively. However, some points in respect of emergency preparedness require a response because they misrepresent the factual position.

John Downer opines that there is “an institutionally deep-rooted confidence that contingency planning is unnecessary for nuclear disasters”, though he also notes “that is not to say that disaster planning is entirely absent in the nuclear sphere, but rather that it is routinely insincere and insufficient”.

The accusation of insincerity is certainly false. The attitude within the industry is the opposite of complacent. The responsibility for nuclear safety invested in the staff is felt at all levels in the organisation. I can testify to this from first-hand experience over a full working lifetime.

However, the accusation of “insufficiency” is another matter. Significant enhancements to emergency arrangements have been made post-Fukushima, which is a tacit admission that the previous position was not as robust as it should have been. But emergency planning arrangements have always been treated with the utmost seriousness.

In fact, all of EDF Energy’s nuclear stations have a nuclear emergency exercise annually. Shutdown and decommissioned stations continue to have such exercises, as do MOD nuclear facilities. In total there are at least 35 nuclear emergency exercises annually in mainland UK. I myself took part in such exercises annually for about 16 years. They always involve remote support from the Central Emergency Support Centre (CESC) as well as the affected station and, depending on the level of the exercise, may also involve the local police force, government departments, and other external bodies who would be involved in a real event.

John Downer is critical of operator training for unanticipated accident sequences, claiming that operators in the USA were not required (in 2012) to demonstrate knowledge of relevant guidance in these circumstances. The civil nuclear industry in the UK has had Severe Accident Guidelines (SAGs) and Symptom Based Emergency Response Guidance (SBERGs) for decades. Much of the functionality of these is incorporated into the control room Tech Specs which are used as routine by operators in the UK.

So, I believe the existing arrangements in respect of emergency nuclear response in the UK are considerably more robust than John Downer portrays.

Nevertheless, he again has a point as regards resilience. In response to Fukushima, so-called stress tests were carried out across Europe, and in the UK in particular. Had the arrangements been perfect, these would not have resulted in any action being necessary. That was not the case. Substantial improvements have been made to the facilities available for responding to nuclear emergencies within the UK, including permanently available mobile resources such as pumps, generators, etc.
