Chernobyl Disaster: The Cultivation of a Flawed Safety Culture – Almir Hodzic
Question
Chernobyl Disaster: The Cultivation of a Flawed Safety Culture – Almir Hodzic
(https://www.linkedin.com/pulse/chernobyl-disaster-cultivation-flawed-safety-culture-almir-hod%C5%BEi%C4%87/ )
Published on December 25, 2015
Executive Summary
When considering events that result in devastating casualties, technological disasters are not the ones people immediately think of. Experts who research humanitarian disasters tend to analyze these phenomena as consequences of conflict. Yet technological disasters are sometimes less predictable and can produce massive death tolls. The unpredictability of technological disasters, coupled with the dangers of nuclear energy, can create a situation that affects vast areas and generations of families exposed to radiation. But not all technological disasters should be treated as unprecedented events; the human component of any technological mishap should be considered. The Chernobyl disaster is a case in which further investigation has shown that the nuclear accident was not as inevitable as originally thought. This paper explains how the convergence of numerous factors led to this catastrophic technological disaster.
Introduction
The Chernobyl Disaster in Pripyat, Ukraine (then part of the Soviet Union) had devastating short-term and long-term effects not only on the nuclear power plant's workers but on much of Europe. On April 26, 1986, a routine systems test was scheduled to be completed. Instead, a sequence of events led to explosions in the power plant's Reactor 4 core. The explosion spread large quantities of radioactive particles across Europe. Ukraine, Belarus, and western Russia were exposed to the strongest concentrations of radioactive contamination.
The disaster was rated a Level 7 event on the International Nuclear Event Scale, one of only two incidents ever classified at the maximum rating. The immediate explosion and fire killed 31 individuals (World Nuclear Association 2015), and an additional 4,000 people are estimated to have died from the radiation exposure (World Health Organization 2005). Beyond those who died of radioactive contamination, citizens were internally displaced: some 350,000 people were relocated in what the World Health Organization (WHO) characterized as a "deeply traumatic experience" (World Health Organization 2005). The response to neutralize the threat was costly in human and financial terms. It is reported that "The battle to contain the contamination and avert a greater catastrophe ultimately involved over 500,000 workers, known as liquidators, and cost an estimated 18 billion Rubles" (Chernobyl Gallery, 2014).
Yet labeling this as an inevitable technological disaster would be an injustice to the individuals who lost their lives that day in 1986 and to those who continue to be plagued by the side effects of radioactive contamination. Original reports identified operator error as the leading cause of the accident. The disabling of safety mechanisms and the improvised response to stabilize the reactor provide solid evidence for assigning the operators responsibility for the accident. But the operators' actions stemmed from a number of factors that were pervasive in the Soviet nuclear industry at the time. The convergence of a culture of ignoring safety, the incompetence of the plant's operators, and a lack of accountability led to the Chernobyl disaster. Analyzing these factors will demonstrate that there were indeed warnings that a nuclear disaster was within the realm of possibility.
Culture of Ignoring Safety
The nuclear industry was one of lax regulation. This was especially true in the Soviet Union, where centralized planning emphasized production over all other concerns. As explained in a World Nuclear Association report, "It [Chernobyl Disaster] was a direct consequence of Cold War isolation and the resulting lack of any safety culture," and only after the disaster were drastic changes made to the nuclear industry worldwide (World Nuclear Association 2015). Although the Three Mile Island accident of 1979 should have exposed the dangers of ignoring safety, the West-East split of the Cold War isolated experts in the nuclear industry.
The RBMK reactor, used only in the Soviet Union, was a product of the Soviet emphasis on output over safety. No Western nation used RBMK reactors at its nuclear power plants, and for good reason. The RBMK had no containment structure, which would have minimized the amount of radioactivity released into the air. Additionally, "One peculiar feature of the RBMK design was that coolant failure could lead to a strong increase in power output from the fission process (positive void coefficient)" (World Nuclear Association 2015). The reactors were intentionally built with this feature. As explained in The Chernobyl Accident and its Implications for the United Kingdom, "The reactor design was inherently unstable under certain conditions," and the "designers' decision to optimize the core design for fuel economy, allowed the possibility of a sudden uncontrollable power surge" (Worley 1988, 23). This safety risk was viewed as an acceptable trade-off in order to increase output.
The designers were not the only individuals in the industry who tested the boundaries of safety. Operators in the nuclear power plants also tended to violate safety procedures. The workers at Chernobyl exhibited the pattern of repeating an unsafe practice until it produced catastrophic consequences. The first miscalculation by the operators of the systems test was the lack of attention given to the reactor. The test was meant to ensure that the steam turbine generators would maintain power for the reactors in case the main electrical supply was lost. In addition to not giving enough attention to the reactor during the test, the operators also violated operational procedures at multiple steps. This deviation from standard industry procedure is highlighted in the International Nuclear Safety Advisory Group (INSAG) report, which emphasizes that "When the reactor power could not be restored to the intended level of 700 MW(th), the operating staff did not stop and think, but on the spot they modified the test conditions to match their view at that moment of the prevailing conditions" (INSAG 1992, 19).
The lack of automatic safety mechanisms associated with the reactors exacerbated the consequences of the operators' poor safety habits. Once the operators disabled all of the safety mechanisms to proceed with the experiment, no mechanisms were left to shut down the reactor. Worley confirms this missing function: "It is now recognized as a design weakness by the Russians that positive provision was not made, for example, to shut the reactor down automatically under these conditions" (Worley 1988, 23). Only after the Chernobyl disaster were the RBMK reactors modified with automatic trip mechanisms.
The final evidence of a culture of safety ignorance within the industry is the disregard for previous nuclear malfunctions. Not only did the Three Mile Island accident happen seven years before Chernobyl, but there were documented RBMK design problems at two different Soviet power plants. As the INSAG findings point out, "Two earlier accidents at RBMK reactors, one at Leningrad (Unit 1 in 1975) and a fuel failure at Chernobyl (Unit 1 in 1982), had already indicated major weaknesses in the characteristics and operation of RBMK units" (INSAG 1992, 23). But the report further explains, "However, lessons learned from these accidents prompted at most only very limited design modifications or improvements in operating practices" (INSAG 1992, 23). This pattern of continuing unsafe practices until something went wrong resulted in a catastrophe that, if it could not have been avoided entirely, could at least have been minimized.
Incompetence of Operators
The role of the operators distinguishes this accident from other instances in which a large-scale crisis was averted. It is very possible that the operators' incompetence allowed the situation to become worse than it otherwise would have been. The ad hoc decisions made during the most critical moments of the crisis had devastating consequences.
The actions taken by the operators put the reactor in an unstable condition. This is not to say that every action was against procedure, but the combination of these actions with other factors guaranteed that the core would explode. The World Nuclear Association confirms this assessment: "The 1991 report by the State Committee on the Supervision of Safety in Industry and Nuclear Power on the root cause of the accident looked past the operator actions," yet "It was certainly true the operators placed their reactor in a dangerously unstable condition (in fact in a condition which virtually guaranteed an accident)" (World Nuclear Association 2015). The operators lacked the training needed to diagnose the seriousness of the situation and to take appropriate measures to prevent further damage.
Although the INSAG report of 1992 shifts the blame for the accident away from the operators and toward the design of the RBMK reactors, there is no question that the operators' actions contributed to the reactor reaching unstable levels. This is reiterated in the report: "It is not disputed that the test was initiated at a power level (200 MW(th)), well below that prescribed in the test procedures" (INSAG 1992, 19).
Yet once the operators stabilized the reactor after this initial phase of instability, they continued with the test rather than rescheduling it. Assessing the situation, the INSAG report states that "Where in the process it is found that the initial procedures are defective or they will not work as planned, tests should cease while a carefully preplanned process is followed to evaluate any changes contemplated" (INSAG 1992, 19). Instead, the operators proceeded and allowed the reactor to reach a volatile level by "removing too many of the control rods, resulting in the lowering of the reactor's operating reactivity margin. However, the operating procedures did not emphasize the vital safety significance of the ORM but rather treated the ORM as a way of controlling reactor power" (World Nuclear Association 2015). This sequence of operator errors brought the core to a level of irreversible instability.
The operators' actions were ill-advised in hindsight, but other factors led to the risky decisions practiced at the Chernobyl nuclear power plant. One theory holds that "the actions of the operators were more a symptom of the prevailing safety culture of the Soviet era rather than the result of recklessness or a lack of competence on the part of the operators" (World Nuclear Association 2015). This safety culture pervaded the nuclear industry because of the Soviet planned economy. Valery Legasov, who had led the Soviet delegation to the International Atomic Energy Agency (IAEA) Post-Accident Review Meeting, claimed, "After I had visited Chernobyl NPP I came to the conclusion that the accident was the inevitable apotheosis of the economic system which had been developed in the USSR over many decades" (World Nuclear Association 2015). The Soviet structure encouraged the absence of a safety culture, which led to operator errors not only at Chernobyl but across the nuclear industry as a whole.
Lack of Accountability – The Soviet Structure
The structure of the Soviet Union's state institutions and planned economy created an environment in which technological disasters could transpire. As with many entities within the Soviet Union, there was a lack of independence between regulatory bodies and the central government. The Politburo's structure prevented any sort of multiple advocacy that might have voiced discontent about the state of the nuclear industry's practices, although there were nuclear regulatory bodies responsible for enforcing industry guidelines, as shown in Appendix A.
Yet these institutions were wholly ineffective, as the Soviet policy apparatus prioritized output over all other concerns. As INSAG assessed, "The assurance of safety in the face of the inevitable pressures to meet production goals requires a dedicated operating organization and a strong and independent regulatory regime, properly resourced, backed at Government level and with all necessary enforcement powers. This sort of regime did not exist in the USSR at the time of the accident" (INSAG 1992, 21). The report further describes the deficiencies fostered by the lack of independent nuclear regulatory institutions within the Soviet system, concluding that "the regulatory regime was ineffective in many important areas, such as analyzing the safety of the design and operation of plants, in requirements for training and for the introduction and promotion of safety culture, and in the enforcement of regulations. It did not function as an independent component in ensuring safety" (INSAG 1992, 21).
The highest levels of decision-making in the Soviet Union were obsessed with production instead of creating the functions necessary to ensure safe practices. The KGB report produced after the 1982 Chernobyl reactor malfunction downplayed the significance of the incident, recommending only that the staff "monitor the situation every day and inform" (Vakuylenko 1982). This, along with the memoirs of Valery Legasov, First Deputy Director of the Kurchatov Institute of Atomic Energy at the time of the accident, exposes the Soviet regime's culpability in the years before the Chernobyl Disaster. On the second anniversary of the disaster, Legasov committed suicide, leaving a memoir that revealed previously undisclosed facts about flawed reactor designs that had long been known to several individuals.
Legasov’s testament, My Duty is to Tell About this, explained that from the very beginning scientists "were very worried about the quality of the equipment supplied to NPP’s [Nuclear Power Plants] and also the lack of proper training and education of the person designing, constructing and operating NPPs" (Mould 2000, 296). Thus, from the very beginning, the Soviet regime’s desire to accelerate the capabilities of the nuclear industry set everyone up to fail. The Soviet tendency to foster an environment that dissuaded dissent meant scientists were unwilling to speak up about their concerns. As Legasov noted, "Declarations of the problems were viewed negatively by authorities and considered to be unprofessional" (Mould 2000, 297). The scientists and Soviet authorities were well aware of the consequences of flawed designs and knew that operators would be put in an impossible position when faced with unstable reactors. Legasov explains: "I considered that the design of the reactor was unusual and inefficient from the point view of protective systems which should come into operation under extreme conditions…I have heard specialist made proposal to the designers about changes in these protective systems but these were either rejected or developed very slowly" (Mould 2000, 298).
The lack of accountability among the different regulatory bodies led to a deficiency of responsibility for the quality of work and actions among the numerous actors involved. In his memoir, Legasov explains that "the division of responsibility was justified at the beginning of the development of atomic energy…Multiple Councils existed both within a single establishment and between establishments…This situation is not correct and leads to confusion and a complete absence of personal responsibility which in turn, as shown at the Chernobyl NPP, can lead to very significant irresponsibility" (Mould 2000, 298). The structure of control practiced by the Soviet Union’s elites created an environment for flawed practices in a number of areas, including nuclear regulation.
In the concluding section of his memoirs, Legasov does not single out a specific reason for the nuclear accident; rather, he claims, "The chain of events leading up to the Chernobyl accident, why one person behaved in such a way and why another person behaved in another etc, it is impossible to find a single culprit, a single indicator of events, because it was like a closed circle" (Mould 2000, 300). While Legasov’s arguments are relevant, it is still important to consider the context that created the "closed circle." The Soviet planned economy created the components and behaviors that would inevitably lead to a nuclear disaster.
Conclusion
Relating the Chernobyl Disaster to Marc Gerstein’s work on the causes of accidents, the disaster can be classified as having both warm and hot causes. The nuclear industry itself, the operators and designers of the reactors, was exposed to warm causes. There was an obvious, long-standing lack of a safety culture. Every component, from the building of the reactors to their operation, involved unsafe practices. Even when the industry experienced earlier indicators of flawed designs in the 1975 Leningrad and 1982 Chernobyl reactor incidents, these glaring signals of a need to address design problems were ignored.
Simultaneously, the Soviet apparatus provided an environment for the industry to go down a dangerous road. The lack of independence for nuclear entities and the trickle-down pressure to keep up with centrally planned production constituted a hot cause. Legasov’s memoirs reveal the extent to which the Soviet regime was willing to ignore safety, despite obvious warning signs, in favor of achieving political and economic goals. The Soviet organizational structure, which produced miscalculations in the areas of technology, intelligence dissemination, and policy-making, weakened the elite’s legitimacy as a competent governing apparatus.
Based on the article above, please answer this question:
a) How can the government be blamed for the Chernobyl disaster?
Explanation / Answer
The Soviet government can be blamed for the Chernobyl Disaster for the following reasons: