
3 - The Nuclear World: Atoms for Peace

from Part I - Out with the Old

Published online by Cambridge University Press:  01 February 2024

Summary

The history of nuclear power is examined through the work of a number of pioneering physicists, chemists, and engineers, including Marie Curie in Paris (radiation), Ernest Rutherford, James Chadwick, and John Cockcroft in Cambridge (model of the nucleus), and Enrico Fermi in Rome, New York, and Chicago (the first nuclear reactor, CP-1). Albert Einstein and Leo Szilard's cautionary letter to Franklin Roosevelt, the Manhattan Project at Los Alamos that oversaw the making of the first nuclear bomb, US Admiral Hyman Rickover's nuclear fleet, and the transition to electricity-generating fission power by the US, UK, and Soviet Union are also explored.

The 1970s growth of "too cheap to meter" nuclear power is shown to be expensive, dangerous, and incapable of treating its own waste. Examples of the failure of the nuclear industry are given, in particular the accidents at Mayak, Cumbria, Three Mile Island, Chernobyl, and Fukushima, as well as the failure of numerous deep geologic repositories. The state of the art of nuclear power, so-called small modular reactors, and the current slate of existing and under-development power plants are discussed. The history, design specifications, and potential for success of nuclear fusion are included, with examples from JET, ITER, NIF, and others.

Type: Chapter
Information: The Truth About Energy: Our Fossil-Fuel Addiction and the Transition to Renewables, pp. 193–296
Publisher: Cambridge University Press
Print publication year: 2024

3.1 Nuclear Fission: Turning Mass into Energy

I did my first university work term at Atomic Energy of Canada Limited, calculating the radioactivity released to the atmosphere after a LOCA, a seemingly innocuous acronym for "loss of coolant accident." I was a second-year physics student at the time, in awe of the process of splitting uranium in a collision with a neutron that in turn released more neutrons to split more uranium, resulting in an ongoing chain reaction that generated lots of energy. My work entailed inputting data into computer models to simulate possible LOCA scenarios. Although a LOCA was a highly unlikely event given the independent backup safety systems in a nuclear reactor – gravity-assisted cobalt shutoff rods, cadmium injection, moderator dump – all of which would immediately stop the reactor by keeping more neutrons from splitting more uranium, the government licensing agency needed to know, just in case.

A nuclear reaction is a fairly straightforward process: a slowed-down neutron strikes a U-235 atom, which splits the uranium apart (fission) and releases energy because the resultant parts have less mass than the original atom. The energy of the mass difference is substantial, as shown by Einstein's equation, E = mc². Indeed, very little mass (m) can make a lot of energy (E), because c² is so large. For example, 1 gram of mass is equivalent to 9 × 10¹³ joules, that is, a 9 followed by 13 zeros or 90 trillion joules! By comparison, recall that a watt is 1 joule/second and thus a standard 60-watt incandescent light bulb consumes 60 joules every second. The hard part is to keep the process going in a controlled way and to turn the heat into electricity: a piped-in, heat-transfer cooling system carries the fission heat away to the same electrical generation system used when burning fossil fuels, where heat converts water to steam to run a turbine that creates electricity as in a conventional power plant.
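As a rough check of that arithmetic – a minimal sketch, with the speed of light rounded to 3 × 10⁸ m/s and the 60-watt bulb taken from the comparison above:

```python
# Mass-energy equivalence, E = m * c^2, for 1 gram of matter
c = 3.0e8                 # speed of light, m/s (rounded)
m = 0.001                 # 1 gram, in kilograms
E = m * c**2              # energy in joules
print(f"E = {E:.1e} J")   # ~9.0e13 J, i.e. 90 trillion joules

# How long could that much energy run a 60-watt incandescent bulb?
seconds = E / 60.0        # a watt is 1 joule per second
years = seconds / (3600 * 24 * 365)
print(f"about {years:,.0f} years")   # roughly 47,500 years
```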

Not all uranium is “burnable.” The most common uranium isotope, U-238, absorbs neutrons instead of fissioning, so we have to use the much less plentiful isotope U-235 for our nuclear fuel, making the fission business that much harder. U-238 accounts for 99.3% of natural uranium as found in the ground, while the fissionable U-235 makes up only 0.7%, roughly one part in 140.1 Separating U-235 atoms from U-238 atoms (a.k.a. enriching) isn’t easy either, so neutron-absorbing uranium (U-238) is always present with fissioning uranium (U-235) in the reactor fuel.

Fortunately, a few tricks are available, such as enriching U-235 to around 3–5% or slowing down the neutrons even more to use the less-plentiful U-235 as is. Note that U-238 and U-235 have the same number of protons (92), but a different number of neutrons (146 and 143), and are known as “isotopes.” The “mass number” (for example, 238 or 235) indicates the number of protons and neutrons, collectively known as nucleons.

The general nuclear fission equation is n + X → X* → X1 + X2 + Nn + E, where a neutron (n) is “captured” by a fissionable atom X (for example, U-235), the X atom becomes unstable X*, breaks into two “daughter nuclei” fission products (X1 and X2), ejects a number of neutrons (N = 1, 2, or 3 depending on the fission products), and releases energy (E). One possible reaction for U-235 is shown in Figure 3.1, producing the fission products barium (Ba) and krypton (Kr). Numerous other possible reactions produce different fission-product pairs, but this one has all the ingredients of a typical fission reaction.

Figure 3.1 Nuclear fission: (a) U-235 fission-product yield versus mass number A (source: England, T. R. and Rider, B. F., “LA-UR-94–3106, ENDF-349, Evaluation and Compilation of Fission Product Yields 1993” (table 7, Set A, Mass Chain Yields, u235t), Los Alamos National Laboratory, October 1994) and (b) fission process started by a captured neutron. One possible reaction yields the fission products krypton (A = 92) and barium (A = 141).

₀n¹ + ₉₂U²³⁵ → ₉₂U²³⁶* → ₅₆Ba¹⁴¹ + ₃₆Kr⁹² + 3 ₀n¹ + E (170 MeV)   (Eq. 3.1)

For an element ZXA, the atomic number (subscript Z) and mass number (superscript A) are always conserved in a nuclear reaction. So, in this example, the atomic number (number of protons), Z = 92 = 56 + 36, and the mass number (number of nucleons), A = 1 + 235 = 236 = 141 + 92 + 3, are the same before and after the reaction. Conservation means we get out what we put in.

The energy is worked out by simple atomic accounting, adding up the difference in mass of the original atom and the resultant parts. Here, the U-235 fission reaction produces 170 MeV, the mass–energy difference between a U-235 atom (plus one captured neutron) and the two fission products, in this case barium (Ba-141) and krypton (Kr-92) (plus three liberated neutrons). As a single atom is so small, the released energy is typically given in MeV (mega-electron-volts), which is equivalent to 1.6 × 10⁻¹³ joules (0.00000000000016 J), although in a reactor there are gazillions of atoms to make the energy add up.
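The same accounting can be sketched numerically for the reaction in Eq. 3.1; the atomic masses below are rounded values from standard nuclear data tables (assumed for illustration, not figures quoted in the text):

```python
# Energy released in one U-235 fission (Eq. 3.1), from the difference in mass
# between the reactants and the products. Masses are in unified atomic mass
# units (u), rounded from standard tables (assumed values, not from the text).
m_n     = 1.008665     # neutron
m_U235  = 235.043930   # uranium-235
m_Ba141 = 140.914411   # barium-141
m_Kr92  = 91.926156    # krypton-92
u_to_MeV = 931.494     # energy equivalent of one mass unit

mass_before = m_U235 + m_n                 # target atom plus captured neutron
mass_after  = m_Ba141 + m_Kr92 + 3 * m_n   # two fission products plus 3 neutrons
delta_m = mass_before - mass_after         # the "missing" mass, about 0.19 u

print(f"Q = {delta_m * u_to_MeV:.0f} MeV") # ~173 MeV, in line with ~170 MeV
```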

One might think uranium would split into two equal parts of atomic number Z = 46 (92/2) or mass number A = 118 (236/2), but in fact the split is uneven (roughly 3:2). Typically, we get one large fission product near barium (Z = 56, A = 141) and another smaller fission product near krypton (Z = 36, A = 92), although there are more than 100 different possible daughter nuclei, as shown in Figure 3.1a. Most are highly unstable (that is, radioactive) because each resultant fission product is neutron heavy and must realign itself to become more stable. For example, barium (₅₆Ba¹⁴¹) turns into stable praseodymium (₅₉Pr¹⁴¹) after about a month and krypton (₃₆Kr⁹²) turns into stable zirconium (₄₀Zr⁹²) in about 6 hours, both via a series of "beta" decays where a neutron turns into a proton and an electron, increasing the atomic number of the decaying atom by one. Highly energetic and potentially very dangerous, alpha particles (doubly charged helium nuclei, ₂He⁴⁺⁺) and beta particles (nuclear electrons created when a neutron turns into a proton, ₋₁β⁰) are both ejected during the ongoing radioactive decay of fission products, as are highly penetrating gamma rays (high-energy photons).

Nuclear fission is a statistical process with many different possible reactions, producing different radioactive fission products, a range of energies, and from one to three neutrons. Importantly, the average number of neutrons is about 2.5, and thus capable of creating a chain reaction in the uranium, while the average energy is about 200 MeV. In each case, the stored energy in the U-235 fuel is converted to the kinetic energy of the fission-product pairs, which heats the coolant to create the steam, while delayed neutrons from the decaying fission products help keep the reaction going. Astonishingly, 1 g of U-235 fuel per day in a reactor can generate almost 1 MW of power from about 3 × 10¹⁶ fissions per second.2
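A quick back-of-the-envelope calculation, assuming the textbook average of 200 MeV per fission, bears out the 1 gram-per-day figure:

```python
# Check: fissioning 1 gram of U-235 per day yields roughly 1 MW of heat.
# Avogadro's number and ~200 MeV per fission are standard figures.
N_A       = 6.022e23            # atoms per mole
M_U235    = 235.0               # grams per mole of U-235
E_fission = 200e6 * 1.602e-19   # ~200 MeV per fission, in joules (~3.2e-11 J)

atoms_per_gram = N_A / M_U235            # ~2.6e21 atoms in 1 gram
fissions_per_s = atoms_per_gram / 86400  # spread over one day: ~3e16 per second
power_watts    = fissions_per_s * E_fission

print(f"{fissions_per_s:.1e} fissions/s -> {power_watts/1e6:.2f} MW")
```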

What’s not to like? Uranium ore (UO2) is fairly abundant in the Earth’s crust, mined primarily in Kazakhstan (~40%), Canada (~20%), and Australia (~10%), while very little fuel is needed to generate lots of energy, although the uranium must first be refined, for example, milled, enriched, and packaged as rods in a fuel assembly (or bundle) in a “light-water” reactor. Fortunately, as nuclear advocates and some environmentalists are keen to note, there are no nasty carbon-containing by-products or greenhouse gases as with fossil fuels, and thus running a nuclear plant reduces carbon emissions that would otherwise be emitted in an equivalent-rated coal-, oil-, or natural-gas-burning plant.

Indeed, in those green university days, I thought nuclear power was the answer to all our energy needs, buoyed by company literature citing the impressive performance and safety record of a CANDU (CANada Deuterium Uranium), Canada’s pressurized “heavy-water” reactor that uses natural uranium as fuel. As co-op students, we were sent to regular industry talks, the last slide in a presentation typically showing a picture of the nuclear power plant life cycle, the 40 years or so from green pasture to working 600-MW reactor back to green pasture again after being decommissioned, with accompanying family bike-riding pictures through the pretty, now-restored, green pasture. Nothing much was said about the waste material – what it was, how long it lasted, how much there was, where it would go – but none of us asked too many questions other than about the physics, little wondering if the talks were more PR than reality. No mention either about carbon-intensive and dangerous mining.

There are two main nuclear reactor types, depending on the concentration of U-235 and the absorbing material employed to slow down the neutrons (called a “moderator”). A typical light-water reactor uses “enriched” uranium and a light-water moderator (mostly regular H2O water), while a heavy-water reactor uses “natural” uranium and a heavy-water moderator (mostly “heavy” D2O water). The moderator also doubles as a coolant to take away the fission heat, the whole point of a reactor.

In the more common light-water reactor, found in the United States and most nuclear-power-producing countries, the U-235 content is increased roughly six times to between 3% and 5% to make up for the high neutron absorption of U-238 because the amount of U-235 in natural uranium is not enough to maintain a chain reaction. In a heavy-water reactor, fewer neutrons are absorbed by the U-238 atoms and instead bounce around inside the reactor, improving the likelihood of capture by U-235 atoms. The neutron “cross-section” (fissionability3) in heavy water is 30 times that of light water, making up for the lower percentage of U-235 in the fuel. In short, we enrich the U-235 content to use light water (in a more common, light-water reactor) or we thermalize more neutrons with heavy water to use the lower-percentage U-235 found in naturally occurring uranium (in the niche-market, heavy-water reactor).
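The difference between moderators can be put in rough numbers with the standard elastic-scattering result for how much energy a neutron loses per collision – a sketch only, with textbook fission and thermal energies assumed:

```python
import math

# How many collisions does a fission neutron (~2 MeV) need to slow to thermal
# energy (~0.025 eV)? Uses the standard average logarithmic energy loss per
# elastic collision, xi, for a nucleus of mass number A (illustrative only).
def xi(A):
    if A == 1:
        return 1.0
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1 + alpha * math.log(alpha) / (1 - alpha)

E0, E_thermal = 2.0e6, 0.025   # eV
for name, A in [("hydrogen (light water)", 1),
                ("deuterium (heavy water)", 2),
                ("carbon (graphite)", 12)]:
    n = math.log(E0 / E_thermal) / xi(A)
    print(f"{name:24s} ~{n:4.0f} collisions")
# Roughly 18 for hydrogen, 25 for deuterium, and 115 for carbon. Hydrogen slows
# neutrons fastest but also absorbs more of them, which is why light-water
# reactors need enriched fuel while heavy-water reactors can burn natural uranium.
```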

Interestingly, water in a stream, lake, or household tap is not all regular H2O, as not all the hydrogen is "protium" (₁H¹), but includes two naturally occurring hydrogen isotopes: heavy hydrogen or deuterium (₁H² or D, ~0.015%) and tritium (₁H³ or T, <10⁻¹⁸%). Still, it takes time and money to turn regular light water into heavy water, separating out the roughly 1-in-7,000 deuterium atoms from the far more plentiful protium to make deuterium oxide (D2O) by isotope separation.

There are pros and cons to both designs. The advantage of a heavy-water reactor is that we don't need to enrich the fuel, and can use the uranium more or less as dug out from the ground, separated from its impure UO2 ore (uraninite, a.k.a. pitchblende4), saving money on expensive fuel-refining costs. The disadvantage is that we need to spend money making heavy water, separating the rare deuterium-bearing molecules from regular light water by isotope-separation processes such as chemical exchange, distillation, or electrolysis. Although heavy water is mostly used as a neutron moderator in CANDU reactors, other applications include tracer compounds to label hydrogen in organic reactions, nuclear magnetic resonance (NMR) spectroscopy, and neutrino detectors.

Conversely, we can use regular H2O water in a light-water reactor, but must pay to enrich the U-235, a difficult and expensive process involving gaseous UF6 diffusion or centrifuges, the same methods employed to enrich uranium to make a nuclear bomb. Note that weapons-grade uranium contains 90% U-235, while reactor-grade uranium contains either 3–5% U-235 for light-water reactors or 0.7% U-235 for heavy-water reactors. Depleted uranium (DU) has less than 0.3% U-235, that is, almost 100% U-238, still dangerous but no good as is for power. Depleted uranium is what’s left over after U-235 has been removed for enriching (Figure 3.2).

Figure 3.2 Uranium enrichment: (a) uranium ore (source: Geomartin CC BY-SA 3.0), (b) uranium hexafluoride (source: Argonne National Laboratory), and (c) natural uranium to weapons-grade uranium (90% U-235) or reactor-grade uranium (3–5% U-235).

Enriching natural uranium into reactor-grade U-235 (from 0.7% to 3–5%) is done by isotope separation, based on the differences in the atomic weights of U-235/U-238 (with three more neutrons, U-238 is heavier). After milling to separate the uranium from its ore, the uranium is ground into a condensed powder known as yellowcake (mostly U3O8) – so-called because the separated uranium crystals are yellow – and turned into purified UO2 by smelting, which is then fabricated into pellets for use in heavy-water reactors (thus no enriching needed) or converted into UF6 (uranium hexafluoride or “hex”) for enriching in light-water reactors. Vaporous UF6 is better for enrichment because U-235 is more easily separated from U-238 by gaseous diffusion (hex has a low boiling point), where the lighter U-235 atoms move faster across a barrier.5 Thousands of cascading units are needed because the U-235 yield at each stage is minimal.
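A simple mass balance shows why the cascades chew through so much natural uranium; the feed assay is the natural 0.711%, while the 0.25% "tails" and the product assays below are typical assumed values, not figures from the text:

```python
# Enrichment mass balance: kilograms of natural uranium feed needed per
# kilogram of enriched product, for a given U-235 fraction left behind in the
# depleted "tails". Assay values are typical assumptions, not from the text.
def feed_per_kg_product(x_product, x_feed=0.00711, x_tails=0.0025):
    return (x_product - x_tails) / (x_feed - x_tails)

print(f"reactor grade (4% U-235):  {feed_per_kg_product(0.04):6.1f} kg feed per kg")
print(f"weapons grade (90% U-235): {feed_per_kg_product(0.90):6.1f} kg feed per kg")
# ~8 kg of natural uranium per kg of 4% reactor fuel,
# ~195 kg per kg of 90% weapons-grade uranium.
```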

Centrifuges require much less electricity and are more common today than gaseous diffusion, separating out the U-235 by high-speed spinning until the desired percentage is reached (the heavier U-238 atoms move to the outside), the same process used for making today's weapons-grade uranium and hence an important indicator of the existence of an illegal weapons program. For use in a light-water reactor, the high-grade UF6 is then remade into solid UO2, before being sintered and shaped into 1-inch-long cylindrical pellets, packed into 12-foot-long, 1-inch-diameter zirconium tubes to make fuel rods (think of a Pez dispenser) and arranged in a multi-rod assembly to be lowered into a reactor for irradiation.

The fuel rods are left in the reactor core for up to 4 years as the fission heat is transferred to the circulating coolant. During burning, the U-235 fuel is replaced by neutron-absorbing, fission-product “waste” such as krypton and barium, before being removed and stored in onsite “spent fuel” pools. All fuel handling is done by machine, while the reactor is sealed in a metal pressure vessel and enclosed in a thick-walled concrete containment building. Some of the U-235 remains unburned, as much as 2% depending on the time spent in the reactor and the fueling scheme, becoming unusable along with the highly radioactive, reaction-killing, fission-product waste. The life of a CANDU fuel bundle is similar as it passes horizontally through the reactor core (the middle is the hottest), before being ejected out the other end. CANDU refueling is continuous without the need for a reactor shutdown.

Atom smashing is now big business, generating almost 5% of global electrical power in 440 nuclear power plants worldwide. One hundred are in the United States, providing almost 20% of electrical power to the US national grid. At the beginning of the Atomic Age, after the physics and engineering of harnessing U-235 atoms had been worked out, some believed nuclear power would be the answer to all our energy needs and become “too cheap to meter.”

Note that "atomic" and "nuclear" physics seem interchangeable, the fundamental difference muddied by the history of nuclear power. An atom consists of a nucleus (protons and neutrons) and orbital electrons, but although "atomic" often means nuclear as in weapons and power, atomic physics technically refers to the electronic structure of an atom and the energy levels of the electrons that orbit the nucleus, while nuclear physics is what goes on inside the nucleus. Furthermore, chemical reactions involve only the orbital electrons, while nuclear reactions break apart the core of an atom and release much more energy (~200 MeV per fission versus a few eV per chemical reaction). Alas, we are stuck with the muddied distinction as defined in the 1954 US Atomic Energy Act, which states "The term 'atomic energy' means all forms of energy released in the course of nuclear fission or nuclear transformation."

3.2 Nuclear Beginnings: Atoms for War

To mark a beginning to nuclear physics, one typically starts with Ernest Rutherford, a New Zealander who did pioneering work at the universities of Manchester, McGill, and Cambridge. He was the first to recognize during a series of experiments from 1908 to 1913 that an atom was made of a dense central "nucleus" (which he named from the Latin for "little nut"), when alpha particles fired at a gold-foil target bounced back toward the source rather than passing through with only slight deflections, as assumed in an earlier theoretical model. In the first practical model of the material world, the hydrogen nucleus (H+), which Rutherford called a "proton," became the building block of all elements. In the academic home of Isaac Newton, and where in 1897 J. J. Thomson discovered the first subatomic particle, the electron,6 Rutherford built a different kind of laboratory to explore the inner workings of matter, generating new atomic stones to fling at increasingly higher velocities. Smashing stuff had become acceptable science.

Building on Rutherford's work at the Cavendish Lab in Cambridge, his research partner James Chadwick discovered the neutron in 1932, similar in mass to a proton but without an electric charge. Chadwick noted that atoms would have too much positive charge if only protons accounted for the atomic mass of an element (hence the atomic number is not equal to the atomic mass). That same year, John Cockcroft and Ernest Walton became the first to break apart an atom, smashing a lithium target with a stream of protons accelerated by a high voltage,7 the original particle accelerator and "first direct quantitative check of Einstein's equation, E = mc²."8 The atomic zoo was expanding with every curious new discovery.

Elsewhere, the Joliot-Curies in Paris employed alpha particles to break apart other light elements, building on the work of Marie Curie – the other great pioneer of nuclear science who earlier isolated two highly radioactive unknown elements in uranium ore, which she named polonium and radium – before Enrico Fermi in Rome tried neutrons as his atomic bullets, differentially slowed by his marble and wooden benches (hydrogenous wood slowed more neutrons). Although they had lots of fun smashing elements with their high-velocity protons, alpha particles, and neutrons, the vast amount of energy stored in the nucleus wasn’t fully understood by any of the pioneering nuclear alchemists, either physicists or chemists.

A massive array of capacitors and rectifiers accelerated the atom-smashing particles to ever higher speeds, while oscilloscopes, scintillation screens, and Geiger counters measured what came out after the smashing.9 Nothing much was expected from the exploratory work other than a better understanding of the properties of matter and more insight into a previously unknown nuclear realm. As Rutherford noted, "Anyone who expects a source of power from the transformation of these atoms is talking moonshine."10 Einstein himself was also doubtful at first, stating in 1934, "splitting the atom by bombardment is something akin to shooting birds in the dark in a place where there are only a few birds."11

What came next is caught up in the intricacies of World War II, ultimately pushing the United States to build the first nuclear bomb as part of the Manhattan Project, fearful that Nazi Germany was developing its own nuclear-bomb program after scientists in Berlin showed that the nuclear fission of uranium could in fact release an enormous amount of energy. Generating usable nuclear power was a by-product of the bomb.

Systematically working his way through the elements from hydrogen (Z = 1) on up, Fermi had created the first "transuranium" elements (Z > 92) in 1934 by bombarding uranium with slow neutrons, which spurred on Otto Hahn, a chemist, and Lise Meitner, a physicist, to do their own uranium smashing at the Kaiser Wilhelm Institute in Berlin. Although the German chemist Ida Noddack was the first to suggest the theory of fission – a cell biology term used by American biologist William Arnold while working with Niels Bohr in Copenhagen – no one was quite sure what was going on in the bombardment. Fermi believed he had either created neptunium (Z = 93) and plutonium (Z = 94),12 a nuclear isomer, or various complex radioactive decay products, none of which made sense until Hahn and his young assistant Fritz Strassmann calculated that the mass number of the target uranium atoms was equal to the sum of the mass numbers of the measured reaction products.

Meitner, a Jewish-born Austrian exile who had fled Germany after the 1938 Anschluss to work in Stockholm, and her nephew Otto Frisch determined that the uranium atoms had been split apart, basing their thinking on the liquid-drop models of Bohr, George Gamow, and Carl Friedrich von Weizsäcker. As Frisch noted, "gradually the idea took shape that this was no chipping or cracking of the nucleus but rather a process to be explained by Bohr's idea that the nucleus was like a liquid drop; such a liquid drop might elongate and divide itself."13 Imagine a water-filled balloon squeezed somewhere near the middle to form a dumbbell that then snaps in two. Indeed, when a nucleus expands after capturing a neutron, the long-range, proton–proton, repulsive forces exceed the short-range, nucleon–nucleon, attractive forces that bind the atom together, causing "fission." The resultant kinetic energy would be considerable, as much as 200 MeV, as Meitner and Frisch wrote in a Nature letter on January 16, 1939.

Meitner’s news spread fast, prompting Fermi to continue his neutron-induced radioactivity and uranium-smashing experiments at Columbia University in New York, where he had relocated with his Jewish wife Laura by way of Sweden after receiving the 1938 Nobel Prize “for his demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons.” Adding to the sense of urgency, the results of Hahn and Strassmann were verified by Frisch in Copenhagen, who recorded electric pulses of the fission products with an oscilloscope, and in a cloud-chamber photograph by two Berkeley physicists.14

The importance of the “secondary” neutrons produced in the reaction was immediately understood as a way to create a chain reaction, but whether any such reaction could be controlled was still unknown. In a now better-understood nuclear splitting, Fermi also proposed the existence of a new particle, which he called a neutrino for “little neutral one,” earlier hypothesized by the Austrian physicist Wolfgang Pauli as part of the process of beta decay. The neutrino satisfied the law of conservation of energy that some had thought to abandon to explain the mysterious hidden energy of the nucleus, becoming yet another part of the growing subatomic catalogue of particles, all of which had to be carefully accounted for in any reaction.

As the atomic scientists surmised from a now credibly established theory, a “controlled” nuclear reaction via “primary” neutrons would almost certainly be able to detonate a bomb.15 Adding to the worry of such raw power, Germany’s fission research was expanding, led by Nobel physicist Werner Heisenberg, who was building an experimental reactor (Uranmaschine) at the Kaiser Wilhelm Institute in Berlin, fueled with uranium from Europe’s only source, the Ore Mountains south of the Czech–German border.

The American quest to make the bomb would begin in earnest after the Hungarian émigré Leo Szilard – another European scientist caught in the crossfire of war, who had fled to the United States like so many others – commissioned Einstein to write a letter to President Franklin Roosevelt, warning of the likely German progress on nuclear fission. Dated August 2, 1939, the letter called for concerted government action: “In the course of the last four months it has been made probable – through the work of Joliot in France and Fermi and Szilard in America – that it may become possible to set up a nuclear chain reaction in a large mass of uranium, by which vast amounts of power and large quantities of radium-like elements would be generated.” The letter goes on to list possible uranium sources in Canada, Czechoslovakia, and the Belgian Congo, and ends with an ominous declaration that Germany had stopped the sale of uranium ore from Czechoslovakian mines upon annexing the Sudetenland. One month later, the Germans would invade Poland.16

On September 1, 1939, prior to a ban on reporting nuclear results, Niels Bohr and his former student John Wheeler published their seminal paper “The Mechanism of Nuclear Fission” in the American journal Physical Review, calculating that the odd-numbered, least-abundant uranium isotope U-235 was more likely to fission than U-238, and highlighting for all the practicality of generating large amounts of energy with uranium. There was no return once the nuclear genie was loosed from its atomic bottle. That same day, the world was at war.

The $2 billion Manhattan Project was the largest research project ever, employing more than 125,000 scientists, technicians, office workers, laborers, and military personnel at its peak at various locations across the United States, including the main think tank at Los Alamos in New Mexico and two production centers, one in Oak Ridge, Tennessee, and another in Hanford, Washington (codenamed Project Y, K-25/Y-12, and Site W). Most of the heavy thinking took place in Los Alamos (a.k.a. "the Hill"), where the top physicists of the day had been corralled by the army, such as Hans Bethe, Felix Bloch, Emilio Segrè, Edward Teller, Eugene Wigner, and a young Richard Feynman, who wowed everyone with his talent, levity, and safecracking ability, and would soon head the Theoretical Computations Group in charge of the IBM calculating machines.

After a daring escape in an open boat from occupied Denmark to England, Niels Bohr, the Danish theoretician responsible for the concept of atomic shells, made several trips to Los Alamos, having calculated that 1 kg of purified U-235 would be sufficient to make a bomb,17 while others acted as visiting consultants from university laboratories across the USA, such as Enrico Fermi, Ernest Lawrence, and Isidor Isaac Rabi. The British national James Chadwick, who had first discovered the neutron only a decade earlier, fittingly became a part of the team after the American and British efforts were combined. Einstein, however, was considered a security risk, mostly because of his misunderstood pacifist views, and was not asked to join the project (nor was he, in any case, a nuclear physicist), although he did do some isolated work from Princeton on isotope separation and ordnance capabilities.18

The thousands of dedicated workers at Los Alamos were all led by J. Robert Oppenheimer, known as Oppie, a hyper-intelligent, philosophically minded theoretical physicist from Berkeley and colleague of the pioneering American atom-smasher Ernest Lawrence, the 1939 Nobel Prize winner for the invention of the cyclotron. Having visited the Pecos Wilderness near Los Alamos as a child on family vacations, Oppie reckoned that the secluded New Mexico desert was perfect for a “secret complex of atomic-weapons laboratories.”19

Given the uncertain outcome of working with novel materials, two bomb designs were devised to improve the chances of a successful detonation. One employed enriched uranium (U-235) and the other plutonium (Pu-239), an element found only in trace amounts in nature. Both materials were produced with great difficulty: U-235 via electromagnetic separation coupled with gaseous diffusion at Oak Ridge and Pu-239 via uranium neutron capture in the B Reactor at Hanford, based on Fermi’s test pile at the University of Chicago, to where he had moved from Columbia to support the war effort. Feynman’s as-yet-unfinished PhD thesis was on the separation of U-235 from U-238, although the method eventually employed to create weapons-grade U-235 was invented by Lawrence at Berkeley, while the Pu-239 was collected from inside the Hanford reactor after the transmutation of U-239.

In the “uranium-gun” design, two subcritical-mass pieces of enriched U-235 would be kept separate until triggering, while the “plutonium-implosion” bomb would be triggered by setting off plastic explosives around a subcritical spherical mass of Pu-239, condensing the core to twice its density via “shaped charges” that evenly focused the implosion like a lens. Plutonium was much easier to produce, but spontaneous fission in Pu-240 was too high for a gun design, solved by imploding the plutonium to a supercritical mass.20 The “lens” was made of layered plastic explosives that had the consistency of taffy and could be shaped to produce a massive, evenly applied shockwave, symmetrically compressing the core. The implosion had to be perfectly uniform to avoid a dud, a.k.a. a “mangled grapefruit.” The fast-to-slow detonation layers were designed by another European émigré, John von Neumann.21

After almost 4 years of development, the first test was prepared at Alamogordo in the New Mexico desert, about 150 miles south of Albuquerque. Codenamed Trinity, the test left some wondering whether the atmosphere would catch fire via a nitrogen fusion chain and ignite the world.22 Others took bets on the size of the explosion. Although the world survived, the blast heat turned the desert sand to green glass – now known as "trinitite" – in all directions for 800 yards.

The Manhattan Project (codenamed S-1) detonated three atomic bombs, the original Trinity "gadget" test, July 16, 1945, and two more on live targets less than 3 weeks later, even though Germany had already been defeated by then and was no longer pursuing an atomic weapons program, negating the project's original purpose. The second (Little Boy) was dropped by the B-29 bomber Enola Gay over the city of Hiroshima on August 6, 1945, and the third (Fat Man) by another B-29, Bock's Car, on Nagasaki three days later, effectively ending the war.

The Hiroshima bomb was the uranium-gun design with an explosive power of 12.5 kilotons TNT (~ 50 TJ), killing an estimated 105,000 people and destroying 54,000 buildings, while the Nagasaki bomb employed the plutonium-implosion design – as at the proof-firing Trinity test – with the explosive power of 22 kilotons TNT (~90 TJ), killing 65,000 people and destroying 14,000 buildings.23 Almost 10,000 people per square kilometer were killed by the two roughly 1,800-feet-high “air-bursts,” the explosive power of the Hiroshima bomb equal to 16,000 of the largest conventional bombs dropped from a B-17. Twenty years later, Oppenheimer would capture the magnitude of worry most of the scientists felt upon witnessing the destructive power of the world’s deadliest creation that had lit up the sky like a second Sun at Alamogordo:

We knew the world would not be the same. A few people laughed, a few people cried, most people were silent. I remembered the line from the Hindu scripture, the Bhagavad-Gita; Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, “Now I am become Death, the destroyer of worlds.” I suppose we all thought that, one way or another.24

The ethics of dropping a nuclear bomb on live targets has been debated ever since. Some said two bombs were detonated to demonstrate the ease of deployment and that further Japanese resistance would be futile, saving countless Allied lives in a prolonged invasion of Japan (both bombs were stored together at the assembly site on Tinian Island). Some believed the Americans had to beat the Soviets into Japan to control reconstruction and post-war markets, while sending a message against communist expansion into Western Europe and Asia.

Others thought more sinister goals were at play, two billion dollars too much to spend on an untested concept. The English physicist and novelist C. P. Snow wrote in The New Men, “It had to be dropped in a hurry because the war will be over and there won’t be another chance,” while Cornell physicist Freeman Dyson noted “the whole machinery was ready.” Still others thought the world should know the bomb’s full potential as the United Nations was being formed. Already, the scientists and government overlords were debating the future deployment policy of the most deadly weapon ever constructed.

Niels Bohr argued that hard-won nuclear secrets should be shared with the Soviets to avoid an unnecessary and costly arms race as well as future proliferation, which has indeed transpired, but the politics of the day triumphed over scientific logic. Bohr even met with Roosevelt to discuss the possibility of “atomic diplomacy” with the Soviets, nixed by British prime minister Winston Churchill, who refused to see beyond the existential challenges of the current war.25 Bohr’s “open world” policy was akin to his complementarity principle, where wave-particle duality was compared to stockpiling atomic bombs that made the world both safer (as a permanently unusable deterrent) yet more dangerous (in the event the unthinkable transpired).26 The usually apolitical Fermi imagined an honest agreement with effective control measures and that “perhaps the new dangers may lead to an understanding between nations.”27 The future of warfare and life had become both freed and enslaved by science.

Expressing concern about the lack of contact between scientists and government officials, Einstein also sent a second letter to Roosevelt. Dated March 25, 1945, the letter remained unopened on the president’s desk, Roosevelt dying before he could read it. Often credited as the Father of the Bomb because of his equation and first letter, despite having done nothing to build it, Einstein lamented, “Had I known the Germans would not succeed in producing an atomic bomb, I never would have lifted a finger.”28 In fact, Germany’s nuclear program had been primitive, unable even to build a working reactor.29 Einstein would spend the rest of his life campaigning for arms control, reduced militaries, and a “supernational” security authority. Later, he noted that signing the letter in 1939, ostensibly starting the world on the road to an unstoppable nuclear-armed future, was the greatest mistake of his life.

***

After the war was over and the physics of exploding nuclear bombs better understood, other countries built their own kiloton and then megaton weapons using the Earth as a giant test lab. The Soviet Union detonated its first atomic bomb in 1949 at the Polygon test site in northeast Kazakhstan, a plutonium fission bomb codenamed “First Lightning” (called “Joe-1” by the USA after Joseph Stalin). Soviet development was helped by German refugee and spy Klaus Fuchs, who worked with the British contingent at Los Alamos, a little-known American physicist turned spy Ted Hall, who worked on explosion experiments and was unhappy that the USA hadn’t shared nuclear information with its Allies, and 300 tons of uranium dioxide found after Germany’s surrender, including a small amount at the Kaiser Wilhelm Institute in Dahlem where Hahn and Strassmann had first split the uranium atom a decade earlier.

The Soviets were keen to catch up to the Americans after the fall of Berlin, igniting a race for nuclear supremacy and the Cold War, which would divide the world into rival spheres of influence and may have averted a possible American first strike on a war-weary USSR. In 1952, the British tested their first nuclear bomb in the Montebello Islands off the northwest coast of Australia. After the detonation of the first nuclear bomb, the power and pace of destruction increased more in a decade than ever before in human history.

Under the tutelage of the Hungarian émigré and Manhattan Project alumnus Edward Teller, nuclear weapons today are thermonuclear bombs, where a plutonium-fission "A-bomb" is detonated to generate the million-degree temperature needed to trigger a fusion "H-bomb," fusing hydrogen nuclei to release energy rather than fissioning U-235 or Pu-239. Much more powerful than an A-bomb, the "Super" was opposed by some of the leading Los Alamos scientists because of its limitless destructive power, yet advocated by others to strengthen US military capability. The first Super, codenamed "Mike," was successfully tested by the United States on November 1, 1952, on the remote Pacific Ocean coral atoll of Eniwetok in the Marshall Islands, halfway between Hawaii and the Philippines. Redesigned to be deployable by plane, the first H-bomb that could literally wipe out an entire country was detonated on March 1, 1954, at nearby Bikini Atoll. At 15 megatons TNT, "Bravo" was more than 1,000 times as powerful as the original Hiroshima blast, creating a 250-foot deep, mile-wide crater on the ocean floor and "fallout across more than 7,000 square miles of the Pacific Ocean."30 In the midst of a widening Cold War chasm, few questions were asked.

Between 1946 and 1958, more than 1,200 nuclear bombs were detonated, equivalent to over one Hiroshima per day, leaving a distinctive, still detectable radiation signature across the globe. More atomic muscle-flexing followed when the Soviet Union detonated a 58-megaton-TNT H-bomb in 1961 in the Russian Arctic. At almost 5,000 times the power of the original Hiroshima blast, RDS-220 or “Tsar Bomba” was the largest bomb ever exploded. The “destroyer of worlds” now contains enough destructive power to kill us all many times over.

For most of the scientists in the Manhattan Project, weapons development was morally permissible during war, but not in peacetime. As noted in the 1949 General Advisory Committee report, signed off by Oppenheimer before he washed his hands of any further development, there was “no inherent limit in the destructive power” to the H-bomb and that such a “weapon of genocide … should never be produced.”31 But wiser heads did not prevail, the $2 billion wartime project spiraling out of control into half a century of Cold War gamesmanship. The USA brazenly tested bombs without any military need and the Soviets marched through eastern Europe as if daring their former allies to stop them. The US price tag for testing and stockpiling their new military toys was a staggering $5.5 trillion.32

Presently, nine countries possess verifiable nuclear weapons – the USA, Russia, the UK, France, China, India, Pakistan, Israel, and North Korea – with an estimated 13,500 nuclear warheads either deployed on base with operational forces in a state of “launch on warning” or in reserve with some assembly required (down from a peak of 85,000 at the height of the Cold War). More than 90% are held by the USA (5,800) and Russia (6,375), totaling 20 EJ of destructive energy or almost half a million Hiroshimas.33 Unofficially, there may be more, the wonder not that so many countries possess nuclear weapons, but that so few do or that none have been used again in anger since the end of World War II.

Furthermore, despite reducing the possibility of direct conflict between the United States and the Soviet Union cum Russia in any number of hotspots around the world, the arms race has diverted – and continues to divert – a substantial amount of spending to the military along with “a sharp increase in the influence of the military upon foreign policy.”34 It is unfathomable that trillions of dollars have and are still being spent on something that can never be used. Even if they could, how many times can one obliterate existence?

Nonetheless, thousands of unusable doomsday machines stand poised, ready to destroy, despite their essential impotence, the risk of accident ever present – operational, machine-triggered, or software-assisted. When a nuclear-armed American B-52 collided in midair with a refueling plane during a routine Chrome Dome airborne-alert mission in 1966, seven of 11 crew members on the two planes died and four nuclear bombs were inadvertently dropped, landing near the fishing village of Palomares in southern Spain. Although the 70-kiloton-plus H-bombs didn't detonate, plutonium was released in the resultant, non-nuclear TNT explosions of two of the bombs, spreading radiation over an 800-km radius and contaminating a 2-km² area with highly radioactive debris. Initially denied and then downplayed by the American and Spanish governments, one of the errant bombs was retrieved 10 weeks later from the sea, a flotilla of ships sent to find the missing "broken arrow."35 If the nuclear bombs had exploded, a large part of southern Spain would have been destroyed and rendered completely uninhabitable. More than 50 years on, the cleanup around Palomares continues, 1.6 million tons of soil removed to the United States at a cost of $2 billion, while an estimated 50,000 m³ of contaminated soil still remains.36

To be sure, atomic bookkeeping comes with its own unique concerns, too chilly to comprehend. At least two of the 32 broken arrows reported since 1957 have never been found, one accidentally jettisoned mid-flight off the coast of Georgia in 1958 after another midair collision involving a hydrogen-bomb-carrying B-47, while another rolled off the side of an aircraft carrier in 1965.37 Both are still unaccounted for, buried somewhere under miles of murky ocean waters. Others have also likely gone missing from the secret arsenals of nuclear-armed states.

By the start of the 1960s, the “strategic” deployment of a nuclear deterrent was the raison d’être of the Cold War, a.k.a. “Balance of Terror,” fueled by meaningless claims of a “missile gap.” But the question soon changed from how a bomb is built to whether more should be built. The expense is enormous, a 1998 source estimating that the US nuclear-bomb program had cost $5 trillion to develop and maintain since 1940, while at least 5% of all commercial energy consumed in the United States and Soviet Union from 1950 to 1990 was spent on developing, stockpiling, and creating launch systems for a growing nuclear arsenal.38 No new American bombs have been added since the 1990s, although a recently enacted US nuclear makeover is expected to cost over $1 trillion (including $56 billion for expected cost overruns!), while over $50 billion is spent each year on maintenance.

Mutually assured destruction (MAD) is the agreed outcome in a nuclear confrontation, with billions dead in minutes and billions more in the resultant radioactive fallout, followed by years of "nuclear winter," the term coined by American astrophysicist Carl Sagan and others in a 1983 paper using models previously based on the effects of volcanic eruptions. Oppenheimer likened the arms race "to two scorpions in a bottle, each capable of killing the other, but only at the risk of his own life."39 Regardless of the unthinkable outcome, there is little value in maintaining an oversized arsenal as a deterrent for future aggression, either militarily or financially. At least, for now, there are no nuclear weapons in space.

Although a horrific possible future has kept rival powers from initiating an unwinnable war, Vaclav Smil noted in Energy and Civilization that “the magnitude of the nuclear stockpiles amassed by the two adversaries, and hence their embedded energy cost, has gone far beyond any rationally defensible deterrent level.”40 In fact, stockpiling unusable nuclear weapons undermines spending on conventional weapons that may be needed in the event of a real war and for important industrial and social spending. And yet, undeterred by the cost and horror of total annihilation, a silent sentinel stands guard on a future no one can endure.

In the changing times that immediately followed World War II, however, the powers that be sought to implement a new policy to attempt to make amends for the devastation at Hiroshima and Nagasaki, turning to the prospect of peaceful nuclear power. Thousands of scientists and engineers retooled their abilities to tame the raw energy within the atom as power moved from the work benches of Los Alamos to the halls of government. Former bomb-making factories were converted to national research laboratories, while academic and civilian research facilities grew along with the ever-expanding military program, tasked with preventing another world war by maintaining an unthinkable, always-ready deterrent. Whether any of the modern alchemists could reorient their thinking was still unknown.

In From Faust to Strangelove, a book that explores the changing public perception and image of the scientist, Roslynn Haynes notes that after Hiroshima and Nagasaki physicists were no longer perceived as innocent boffins, while “it became progressively more difficult to believe in the moral superiority of scientists and even more difficult to believe in their ability to initiate a new, peaceful society.”41 Nonetheless, despite its clandestine origins, moral ambiguity (“technical arrogance” to use Freeman Dyson’s characterization), and the intellectual chasm of classifying the mysterious workings of a compact “little nut” at the core of an uncertain new Atomic Age, nuclear fission would become fully vested in locomotion and everyday electrical power. How we got there is as fascinating a tale as there is in the history of science and engineering, from which the promise of endless energy rolls on.

3.3 The Origins of Nuclear Power: Atoms for War and Peace

On a brisk, wintery morning, December 2, 1942, Enrico Fermi made his way to work at the University of Chicago, where he had been enlisted in the war effort to engineer a way to make plutonium, known only as element number 94 until earlier that March. Waiting for him at a campus doubles squash court underneath the west stands of the disused Stagg Field football stadium was a team of scientific workers from the “Metallurgical Lab” (Met Lab), who under Fermi’s supervision had assembled a nuclear reactor as part of the top-secret Manhattan Project. Called a “pile” for its numerous uranium-filled layers of graphite, the goal was to prove that a geometrically assembled array of lumped uranium could go “critical,” that is, produce a self-sustaining fission chain reaction, by which plutonium could be produced.42

Assembled around the clock in two 12-hour shifts over a period of a month, the Chicago Pile One (CP-1) consisted of 45,000 highly purified graphite blocks stacked in 57 layers in a 30-by-32-foot lattice, some of the blocks hollowed out to hold 40 tons of hockey-puck-shaped uranium-metal discs. Graphite (a carbon allotrope) was a readily available moderator, known from Fermi’s earlier work at Columbia, which according to his precise calculations should sufficiently slow down enough fast-fission neutrons to kick-start the fission process that would produce the secondary fission-product neutrons needed for the growing pile to go critical in an ongoing chain reaction. As each layer was added, Fermi measured the neutron count, confident of his numbers.43

Containing enough graphite to provide a pencil for “each inhabitant of the earth, man, woman, and child,”44 as his wife Laura would later write, Fermi called the 385 tons of graphite, 6 tons of pure uranium metal, and 34 tons of uranium oxide, all held together in a wooden cradle standing 22 feet high, “a crude pile of black bricks and wooden timber.” To keep the pile from going critical on its own, 14 extractable, neutron-absorbing, cadmium control rods were horizontally inserted into the middle of the pile, while three brave scientists – called the “liquid control” or “suicide” squad – stood overhead at the ready with buckets of cadmium salt solution in case the massive construction caught fire.

Inch by inch and one by one, the control rods were pulled out, Fermi noting the increasing neutron levels on a Geiger counter. The “neutron economy” was everything as more neutrons were released by more U-235 fissions within the pile, sufficiently slowed down by the graphite moderator to facilitate even more. After a break for lunch, the final rod was removed and at 2:20 pm the reaction went critical, the reproduction factor, k, greater than one (1.0006). The neutron count was literally off the chart. In a phone call to Washington to report on the success, the project leader Arthur Compton, who had recruited Fermi to Chicago, declared, “The Italian navigator has just landed in the new world.”45
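The significance of that 1.0006 can be sketched with a little arithmetic; the figure for k is from the account above, while the neutron generation time (set largely by the delayed neutrons from decaying fission products) is an assumed illustrative value:

```python
import math

# With reproduction factor k, the neutron population multiplies by k every
# generation: N(t) = N0 * k**(t / t_gen). For CP-1, k = 1.0006; the effective
# generation time of ~0.1 s (dominated by delayed neutrons) is an assumption.
k = 1.0006
t_gen = 0.1   # seconds per neutron generation (illustrative)

doubling_time = t_gen * math.log(2) / math.log(k)
print(f"power doubles roughly every {doubling_time:.0f} s (~{doubling_time/60:.1f} minutes)")
# At k = 1 exactly the reaction holds steady, and below 1 it dies away -- which
# is why nudging the control rods in or out steers the pile so gently.
```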

The CP-1 prototype was a great success, the ever-cautious Fermi not only happy for having built the world’s first nuclear reactor, but for the ease in which he could control the ongoing reaction, exclaiming, “To operate a pile is as easy to keep a car running on a straight road by adjusting the steering wheel when the car tends to shift right or left.”46 Although the uranium was not enriched, hence the large size for the minimal power generated (under 1 watt47), criticality had been proven, essential for the Manhattan Project’s plutonium production and later to show that peaceful nuclear power was doable after the horrors of war.

“Atoms for Peace,” as President Eisenhower styled his proposed safer world in a 1953 UN speech, would absolve “atoms for war” that saw 170,000 people die instantly or in the immediate aftermath of the Hiroshima and Nagasaki blasts, followed by hundreds of thousands more from cellular degradation and radiation-induced cancers in the ensuing years. As Fermi noted in 1952, “It was our hope during the war years that with the end of the war, the emphasis would be shifted from weapons to the development of these peaceful aims. Unfortunately, it appears that the end of the war really has not brought peace. We all hope as time goes on that it may become possible to devote more and more activity to peaceful purposes and less and less to the production of weapons.”48

Throughout the war, nuclear scientists working on the Manhattan Project had been sidetracked in the rush to make the bomb before Germany. In the aftermath, the idea to produce peaceful power for the masses, “outside of the shadows of war,” renewed their hopes for a saner future, while aiding the rehabilitation of their reputations as creators of death. The basics entailed using water or some other coolant to remove heat continuously from inside the pile instead of exploding a highly enriched core. Military research money was earmarked to remake death and destruction into energy and light, starting at the newly reformed weapons facilities cum laboratories in the USA, USSR, and UK.

CP-1 had already moved during the war away from public scrutiny and potential urban catastrophe to Argonne Woods southwest of Chicago, reassembled as CP-2 and then CP-3, the site eventually renamed the Argonne National Laboratory before moving again to nearby Lemont. Part atomic facility and part University of Chicago research lab, Fermi oversaw the research at Argonne during and after the war, building on a growing nuclear expertise there and at Los Alamos, Oak Ridge, Livermore (Lawrence's reformed Berkeley lab), and especially Hanford. The central tenets of reactor physics were worked out in the early post-war years, nuclear science transmuting from a source of curiosity about the nature of matter and the raw power within to the engineering challenge of turning mass into utility for all. No longer beyond comprehension, the atom could do what no alchemist had ever imagined:

The basic theory of a nuclear reactor was developed during the war by Fermi, Wigner, and others and tested in the military reactors. After 1945, a large amount of work was spent in completing the understanding of fission processes and developing a detailed theory of the nuclear reactor. By 1958, with the publication of Alvin Weinberg’s and Eugene Wigner’s authoritative The Physical Theory of Neutron Chain Reactors, the work of the physicists was over in this area.49

Capitalizing on a functionally superior, lightweight, cleaner fuel, the first operational nuclear submarine, the USS Nautilus – christened by Eisenhower's wife Mamie – put to sea in 1955, overseen by Admiral Hyman Rickover, while the USS Enterprise, the first nuclear-powered aircraft carrier and longest naval vessel ever built, was commissioned in 1961. Although keen to generate nuclear power for naval propulsion, the USA still had plenty of energy from its vast oil and coal reserves, and was less inclined to instigate a national energy makeover. The navy was also eager to get in on the atomic act and not be excluded from the nuclear club by the army or air force.

Called the "Father of the Nuclear Navy," Rickover was a highly intelligent, hard-working electrical engineer who had worked his way up the ranks to admiral in 1953. During World War II, he was in charge of the electrical section of the Bureau of Ships, overseeing the change from the visible-light ship-to-ship messaging "that made American ships targets for enemy bombs and torpedoes"50 to infrared signal communication. Assigned to Oak Ridge after the war, Rickover wanted to build "the ultimate fighting platform," the world's first nuclear-powered submarine that could remain undetected underwater for prolonged periods of time.51 Importantly, he chose water as the reactor moderator, subsequently employed in all future American commercial designs.

But despite their head start on nuclear-power technology, the Americans were more interested in making bigger bombs and designing advanced ship propulsion for a nuclear fleet unmatched in history. Built by the British near the coastal village of Seascale in northwest England, the first civilian nuclear reactor was officially opened on October 17, 1956. Calder Hall was a 92-MW, graphite-moderated, CO2-gas-cooled reactor, based on wartime research at Chalk River on the Ottawa River and later at Harwell in Oxfordshire, both directed by the original atom-smasher John Cockcroft. The reactor was dual-purpose, designed to generate electrical power and make plutonium for the British bomb-making program, hence the oddest of nuclear acronyms, PIPPA (pressurized pile producing power and plutonium).

Adapted from the propulsion reactor in Rickover's nuclear navy, the first US civilian reactor was a 60-MW pressurized light-water reactor, successfully tested – "cold-critical" without power hook-up – on December 2, 1957, the fifteenth anniversary of Fermi's original CP-1, and "grid-tied" on December 18. Constructed on the shores of the Ohio River, 25 miles northwest of Pittsburgh, the Shippingport Atomic Power Station reactor was built by Westinghouse and the Duquesne Light Company under the direction of Rickover's Naval Research Group, and proclaimed the "world's first full-scale atomic electric plant devoted exclusively to peacetime uses."52 Triumphantly fulfilling Eisenhower's earlier Atoms for Peace declaration to the UN despite the simultaneous building up of nuclear arms, Shippingport was the first commercial American nuclear power plant, following 14 previous reactors built to manufacture weapons-grade plutonium cores.53 Clean air was also cited in contrast to a proposed coal plant on the Allegheny River. As historian Richard Rhodes noted, "no expensive precipitators for smoke control, no expensive scrubbers for sulfur-oxide control, 60 megawatts of peak-load power, and a leg up on nuclear-power technology."54

Rickover also made a strategic design change, employing uranium dioxide ceramic fuel clad in zirconium instead of uranium metal fuel, making the uranium more difficult to repurpose in a bomb. The United States was ready to take on the world with an unrivaled nuclear arsenal, deterrent to any and all potential belligerents, as well as a space-age atomic-power program to keep the home fires burning brightest. Shippingport operated for 20 years before being re-engineered as a light-water “breeder” reactor, in which a fissile uranium-233 “seed” is surrounded by a thorium “blanket” that breeds more U-233 fuel (another fissile isotope of uranium), and was finally shut down in 1982.55

***

The pace of construction increased thereafter with two scaled-up commercial reactors in operation in the United States by the end of the 1960s, the 570-MW Connecticut Yankee “pressurized-water” reactor built by Westinghouse south of Hartford on the shores of the Connecticut River (1968) and the 640-MW Oyster Creek “boiling-water” reactor built by General Electric on the New Jersey shore (1969). In the following decade, 100 reactors were built by the two pioneering electrical companies, Westinghouse (pressurized) and GE (boiling), competing again as if electric power had come full circle to its origins. In other countries, reactor designs evolved from their own engineering and technological practices. The dominant design in the USA employed enriched U-235 fuel with light water as a moderator and coolant (pressurized or boiling), known as a light-water reactor (LWR), while graphite moderators became prevalent in the UK (air- or gas-cooled) and in the Soviet Union (light-water cooled).

Essential experience was gained in the early years of nuclear power, such as calculating the optimal fuel scheme to get the most out of every gram of U-235 and the safe operating procedures of the world’s newest energy source. In the United Kingdom, nine more reactors were built in the decade after Calder Hall, supplying almost 25% of British electrical power, while in the United States 20 went online.56 More reactors were built in the secretive and closed-off Soviet Union, starting in the town of Obninsk near Moscow with a small, dual-purpose, high-power channel reactor (reaktor bolshoy moshchnosty kanalny or RBMK) that went critical in 1954. By 1985, there were 25 Soviet reactors.57

Today, the most common reactor employs a light-water moderator/coolant, either pressurized (PWR) or boiling (BWR). In a PWR, a light-water coolant is kept under high pressure to remain liquid (2000 psi and 300°C58) as it passes through the core before heating a secondary coolant in a heat-exchange system to generate steam to turn the turbine. In a BWR, the light-water coolant is boiled and the steam used to turn the turbine before being condensed and returned to the reactor. A PWR is safer because the irradiated primary coolant stays within the containment structure, while the coolant in a BWR also turns the turbines and thus radioactive matter can escape more easily to the atmosphere in the event of an accident (Figure 3.3).

Figure 3.3 The basics of a pressurized light-water nuclear reactor (PWR).

The other main reactor types employ either a pressurized, heavy-water moderator/coolant (PHWR) as in a CANDU, found mostly in Canada and India, a graphite moderator/gas coolant (Magnox and AGR), found exclusively in the UK, or a graphite moderator/light-water coolant (RBMK), made in Russia, while some experimental breeder reactors (a.k.a. catalytic nuclear burners) are still being tested. Before being discontinued, the Magnox reactor used natural uranium clad in magnesium oxide (hence the name), while an advanced gas-cooled reactor (AGR) uses 2%-enriched U-235 fuel. The RBMK-1000 is the same reactor as in Chernobyl, but now comes with a containment structure to reduce radioactive release in the event of a core breach as occurred in 1986 (which we’ll look at later).

In a “breeder” reactor, fissionable fuel is created inside the reactor core. Breeding is a two-stage process as the fuel can’t fission on its own, but turns into a fuel that can fission via neutron absorption while in operation. Typically, the uranium isotope U-233 is bred from thorium and is thus called a thorium reactor (n + ₉₀Th²³² → ₉₀Th²³³ → ₉₁Pa²³³ + β⁻ → ₉₂U²³³ + β⁻). Thorium reactors are cooled with liquid sodium and need a boost to start. All reactors “breed” some nuclear fuel, providing a kick or “plutonium peak” in normal operation as U-238 transmutes into U-239 that decays into neptunium and then plutonium before fissioning, which provides about one-third the reactor power (also a way to make weapons-grade bomb material, believed to be how India made its first nuclear bomb in 1974).

Breeder reactors convert “fertile” thorium-232 or natural uranium-238 into fissionable fuel, so the fuel doesn’t need to be enriched, saving time, money, and the environmental stress of extensive refining, but they have not become economically viable as originally supposed. In his 1956 address to the American Petroleum Institute, where he famously introduced the theory of Peak Oil, M. King Hubbert was overly optimistic, stating “it will be assumed that complete breeding will have become the standard practice within the comparatively near future.”59

Whatever the reactor design, the essential neutron economy is managed by raising and lowering neutron-absorbing “control rods” to keep the reaction steady, relying on both the “prompt” primary neutrons emitted during fission and the “delayed” secondary neutrons released by fission products on average about 14 seconds later, which help maintain a constant criticality. As plenty of water is needed to cool the core and produce steam, nuclear power plants are constructed near rivers, lakes, or oceans, while all nuclear reactors produce a cornucopia of radioactive waste stored in onsite spent fuel pools (a.k.a. bays).
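
To see why those delayed neutrons matter, here is a back-of-the-envelope sketch in Python using textbook-style round numbers – the lifetimes, delayed fraction, and multiplication factor below are illustrative assumptions, not the parameters of any particular reactor:

```python
# Back-of-the-envelope look at why delayed neutrons make a reactor controllable.
# All parameters are illustrative, textbook-style round numbers.

PROMPT_LIFETIME = 1e-4     # seconds between fission generations (prompt neutrons only)
DELAYED_FRACTION = 0.0065  # roughly 0.65% of fission neutrons are delayed
MEAN_DELAY = 13.0          # average extra delay of those neutrons, seconds

# Effective generation time once the delayed neutrons are folded in
effective_lifetime = (1 - DELAYED_FRACTION) * PROMPT_LIFETIME + \
                     DELAYED_FRACTION * (PROMPT_LIFETIME + MEAN_DELAY)

k = 1.0005  # multiplication factor just above criticality

# e-folding period of the neutron population, T = lifetime / (k - 1),
# a rough approximation valid for small reactivity insertions
period_prompt_only = PROMPT_LIFETIME / (k - 1)
period_with_delayed = effective_lifetime / (k - 1)

print(f"Effective generation time: {effective_lifetime:.3f} s")
print(f"Period, prompt neutrons only: {period_prompt_only:.2f} s")
print(f"Period, with delayed neutrons: {period_with_delayed:.1f} s")
```

On these numbers the neutron population would e-fold in a fraction of a second on prompt neutrons alone, but takes minutes once the delayed neutrons are included – the margin that makes mechanical control rods a workable way to steer the chain reaction.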

Today, there are 437 commercial nuclear plants operating worldwide in 30 countries – 92 of them in the USA – providing roughly 400 GW of power (see Table 3.1). The American reactors are all light-water (LWR), either PWR (~67%) or BWR (~33%), while almost 85% of global nuclear reactors are LWR. The others are either PHWR (such as the CANDU, ~10%), graphite-moderated/gas-cooled (~2.5%), graphite-moderated/water-cooled (~2.5%), or fast-breeder reactors (<0.5%).

Table 3.1 Number of nuclear power plants by reactor type, total power output, and fuel

| Reactor | # | GW | Fuel | Moderator | Coolant |
|---|---|---|---|---|---|
| Pressurized water (PWR) | 307 | 293 | Enriched UO₂ | Water | Water |
| Boiling water (BWR) | 60 | 61 | Enriched UO₂ | Water | Water |
| Pressurized heavy water (PHWR) | 47 | 24 | Natural UO₂ | Heavy water | Heavy water |
| Light water graphite (RBMK) | 11 | 7 | Enriched UO₂ | Graphite | Water |
| Gas-cooled (Magnox, AGR) | 8 | 5 | Natural U metal, enriched UO₂ | Graphite | CO₂ |
| Fast neutron (FBR) | 2 | 1 | PuO₂ and UO₂ | None | Liquid sodium |
Source: “Nuclear Power Reactors,” World Nuclear Association, May 2023. www.world-nuclear.org/information-library/nuclear-fuel-cycle/nuclear-power-reactors/nuclear-power-reactors.aspx.

The World Nuclear Association (WNA) maintains a global nuclear power database, citing 437 working nuclear power plants in 2022 rated at a total peak capacity of 390 GW that annually requires 68,000 tonnes of uranium fuel to keep running.60 Two countries, France (65%) and Ukraine (53%), generate more than half their electric power with nuclear (Table 3.2).

Table 3.2 Nuclear power, 2022 (country, number, power, global and national percentage)

| Country | Number of plants | Power (GW) | Global percentage | National percentage |
|---|---|---|---|---|
| USA | 92 | 95 | 24 | 18 |
| France | 56 | 61 | 16 | 65 |
| China | 54 | 52 | 13 | 5 |
| Japan | 33 | 32 | 8 | 6 |
| Russia | 37 | 28 | 7 | 19 |
| South Korea | 25 | 24 | 6 | 26 |
| Canada | 19 | 14 | 3 | 14 |
| Ukraine | 15 | 13 | 3 | 53 |
| Spain | 7 | 7 | 2 | 21 |
| Sweden | 6 | 7 | 2 | 32 |
| Rest | 90 | 58 | 15 | – |
| Total | 437 | 394 | 100 | – |
Source: “World Nuclear Power Reactors & Uranium Requirements,” World Nuclear Association, August 2023. www.world-nuclear.org/information-library/facts-and-figures/world-nuclear-power-reactors-and-uranium-requireme.aspx.

The largest nuclear power station in the world is the 8-GW Kashiwazaki-Kariwa plant on the west coast of Japan (five BWRs, two ABWRs), while the largest in Europe is the 6-GW Zaporizhzhia plant on the Dnieper River in southeast Ukraine, comprising six water-cooled, water-moderated PWRs, a.k.a. water–water energetic reactors (VVERs). During the 2022 Russian invasion of Ukraine, the site was overrun by military forces as was the Chernobyl plant, raising alarms about possible radiation contamination in a worst-case scenario. Following an anxiety-filled visit by a team of nuclear inspectors from the International Atomic Energy Agency (IAEA), by which time only one of the six reactors remained online, two inspectors stayed onsite as the IAEA called for a demilitarized perimeter and nuclear safety protection zone in the embattled region, underscoring the challenge of keeping citizens safe from radiation leaks in wartime.

***

Despite many engineering firsts, dizzying technical achievements, and a seemingly simpler supply of fuel, today providing almost 5% of global electric power, atomic energy is under fire on numerous fronts because of high construction costs, safety concerns, and the ongoing problem of radioactive waste, all a far cry from the early 1960s when everyone wanted to get into the nuclear biz. The army had its bigger and better bombs, the navy its submarines that could stay submerged for years, while the air force even wanted to build a nuclear plane, spending almost $5 billion on a prototype before admitting that flying something as bulky as a nuclear reactor was not practical, nor would the damage be easy to contain if a radioactive reactor fell from the sky.

Edward Teller wanted to use atomic bombs for small-scale engineering to clear land for a deepwater Alaskan port (codenamed Project Chariot), build a Sacramento Valley canal to move water to San Francisco, and extract oil and gas from the heavy bituminous Albertan oil sands in an ill-conceived 1958 nuclear fracking plan (codenamed Project Oilsands).61 One wonders how the father of the H-bomb, a trained physicist who intimately knew the secrets of the atom, could be so ignorant of the realities of radioactive contamination. Project Plowshare was similarly designed to use “nuclear excavation technology” for large-scale landscaping, including creating a new Panama Canal (dubbed the Pan-Atomic Canal), but was abandoned for obvious safety reasons. Project Gasbuggy, however, was approved to detonate an atomic bomb to extract natural gas about a mile underground in northern New Mexico in another fracking experiment to explore peaceful nuclear uses. The site is now a no-go area. After a similar test detonation in Colorado, the gas was “too heavily contaminated with radioactive elements to be marketable.”62

Even NASA wanted to go nuclear, hoping to power future satellites with plutonium. That is, until a plutonium-powered navigation satellite failed to achieve orbit in a 1964 launch and fell back to Earth, disintegrating upon re-entry (also why launching nuclear waste into the Sun or space is not such a smart idea). A subsequent soil-sampling program found, “embarrassingly, that radioactive debris from the satellite was present in ‘all continents and at all latitudes’.”63

NASA also had heady plans to power a spaceship with “nuclear bomblets” that would carry 10 times the payload of a Saturn V moon rocket, but Project Orion literally didn’t make it off the ground and was ditched in 1965 after a decade of fruitless work.64 Initially seeded with $1 million in funding to a private contractor, the idea was to focus the massive propulsive power of 100 nuclear bombs detonated at half-second intervals to reach space, although no one was quite sure how to keep the ship from blowing itself apart.65 Project A119 was another top-secret US plan to detonate a lunar atomic bomb as a show of force after Sputnik, also discarded in favor of the PR-winning Apollo moon landing. The lunar detonation was the brainchild of another Manhattan Project luminary, Harold Urey, who thought the explosion would shower the Earth with moon rocks or that the resulting debris collected by a following missile could then be analyzed.

There was even a plan to nuke the recently discovered Van Allen radiation belts in an unforgettable fireworks display to show the world – in particular the Soviets – the full power of the American atomic arsenal. Unfortunately, this demented scheme – dubbed the Rainbow Bomb – went ahead, and on July 8, 1962, a 1.4-megaton H-bomb was exploded 250 miles above a Pacific Ocean launch site on Johnston Island. In his catalogue of bizarre experiments conducted in the name of science, Alex Boese noted that “Huge amounts of high-energy particles flew in all directions. On Hawaii, people first saw a brilliant white flash that burned through the clouds. There was no sound, just the light. Then as the particles descended into the atmosphere, glowing streaks of green and red appeared.”66

The man-made atomic light show lasted 7 minutes, but also knocked out seven of 21 satellites in orbit at the time and temporarily damaged the Earth’s magnetic field. Launched just days later, AT&T’s Telstar I also became a casualty of the EM pulse months afterwards, while radiation levels around the Van Allen belts took 2 years to settle. On the plus side, one of the most irresponsible displays of military hubris brought an end to atmospheric, underwater, and space testing of atomic bombs as the United States and the Soviet Union signed the Partial Test Ban Treaty the next year. Apparently, filling the skies with nuclear radiation was a step too bold even for science.

In the early heyday of reactor building, many thought nuclear power was as simple as igniting an atomic-powered Batmobile. Fortunately, calmer heads prevailed and we aren’t now cleaning up a permanent radioactive mess created by atomic planes, nuclear blasters, or plutonium-powered satellites and rockets. Nonetheless, we are still dealing with the radiation fallout from decades of weapons testing, a grim reminder of the madness of unchecked technological warfare. There are also restrictions on where to build a nuclear power plant, such as not on seismically active ground, above water tables, or near large populations. Reactor containment buildings must also be able to withstand a jet plane crash or a large-scale earthquake. Alas, not all of the world’s 400 or so reactors fit the bill.

Decommissioning is an especially sticky issue: after 60 years of operation, radiation-degraded materials are no longer fit for service. The Nuclear Regulatory Commission (the US atomic licensing agency) grants a 40-year operating period that can be renewed for up to a further 20 years. The oldest operating US commercial nuclear power plant – Oyster Creek Nuclear Generating Station near the Jersey shore – was shut down in 2018 after almost 50 years, while the Connecticut Yankee Nuclear Generating Station built in 1968 was taken offline in 1996 and decommissioned 8 years later. In the absence of a permanent long-term nuclear waste-disposal plan, the spent fuel rods remain onsite, stored in a so-called Independent Spent Fuel Storage Installation (ISFSI).

The USS Enterprise was deactivated in 2012, although its reactors are still intact with no decision yet on full decommissioning, while the USS Nautilus was retired in 1980 after 25 years of service and is now a museum ship in Groton, Connecticut. The first vessel to reach the North Pole – in 1958 on its third attempt – Nautilus logged over 500,000 miles, more than seven times the 20,000 leagues of its Jules Verne-inspired namesake.

Having ignored the realities of nuclear aging by continuously kicking the decommissioning can down the road, today’s fleet of 40- to 60-year-old reactors is showing its age. The WNA estimates that at least 100 plants will be shut down by 2040,67 although in early 2021 the US Nuclear Regulatory Commission called for American reactor lifetimes to be extended to 100 years (“Life Beyond Eighty”), increasing the threat to public safety and the likelihood of an accident. Despite ongoing concerns about cost, safety, waste disposal, and what to do when a reactor is no longer fit for service, however, 60 new reactors are under construction in 16 countries (half in China and India), roughly 100 are on order or planned (with a total capacity over 100 GW), and more than 300 are in the works.68

Large capital costs are also a deterrent for would-be operators, particularly when amortized over decades or subjected to expensive cost overruns from delayed site licensing that can take years depending on local opposition. Nuclear NIMBY (“not in my backyard”) is a powerful force against building a nuclear plant, especially after the headline-grabbing accidents of Three Mile Island (1979), Chernobyl (1986), and Fukushima (2011). No new nuclear power plants have been ordered in the USA since 1979, although two reactors under construction in Georgia are expected online soon according to the latest revised estimates (Vogtle 3 and Vogtle 4).

To make up for the shortfall as old units go offline and no new replacement reactors are built, more energy is being squeezed out of existing reactors at a rating higher than the original design (called “uprating”), of which 149 of 150 requests in the USA have been approved.69 Uprating and less downtime have kept the overall percentage of nuclear power in the United States at around 20% over the last 40 years despite an ever-growing population and no new reactors being built on American soil in four decades.

***

Although the physics is basically the same, new reactor designs have been employed to improve safety. Gen-III or “advanced” nuclear power reactors are fitted with “passive” safety features (for example, natural water convection) and designed to work without electricity in the event of a failure, features since retrofitted to older Gen-I and Gen-II reactors to avoid a Fukushima-type loss of power and coolant. EDF’s EPR, Hitachi’s ABWR, Westinghouse’s AP1000, and KEPCO’s APR1400 are essentially Gen-III upgrades of older Gen-II reactors.

Experimental Gen-IV reactors are also in various stages of development, such as the very-high-temperature reactor (VHTR) and molten salt reactor (MSR) designs. A VHTR is a helium-gas-cooled, graphite-moderated reactor with an exceedingly high coolant temperature over 1,000°C that uses tri-isotropic (TRISO)-coated particle fuel – kernels of uranium, carbon, and oxygen embedded in billiard-ball-sized graphite spheres, hence the name “pebble-bed reactor” (PBR). The reaction heat can also be used to generate the high temperatures required in the chemical, oil, and iron industries or to produce hydrogen. MSRs circulate a molten-salt coolant at near-atmospheric pressure, allowing a simpler, less-expensive reactor design. Trials are starting up in China and the USA. Advanced fuel types such as thorium, plutonium, and mixed oxide (MOx) have also been proposed.

Small-scale nuclear reactors are also being designed to limit construction costs that can run to at least $10 billion per gigawatt, not including overruns. The so-called SMRs – small modular reactors – have lower power ratings of up to 350 MW, require less upfront financing, are built in sections in a factory and shipped on site for easy assembly (nuclear IKEA!), and need fewer operators and less maintenance. Those cooled with sodium don’t need water and thus aren’t restricted to coastal or riverside sites. Just add more to expand, although the average cost is still high without any obvious economy of scale.

One proposed “micro-reactor” was the Toshiba 4S (Super-Safe, Small, and Simple), a 10-MW cigar-shaped, liquid-sodium-cooled design, run on highly enriched fuel (19.9% U-235) and a fast-neutron graphite reflector to control the reaction (fission stops upon opening the reflector). The coolant circulates via electromagnetic pumps, while more fuel is burnt than in a conventional nuclear reactor, producing less waste and less plutonium. The underground setup also increases safety in the event of an accident, and is meant to run for 30 years with minimal supervision, although weapons-grade uranium can more easily be made from the higher enriched uranium. Especially suited to remote areas, a 4S mini-reactor was planned for Galena, Alaska, a 500-strong town on the Yukon River with only 4 hours of daylight in the depths of winter, where customers pay well above the going price for diesel. Alas, the Galena project was cancelled and no commercial mini-nukes have yet been built as the search continues to make nuclear power safe and affordable.

Funded by Microsoft co-founder Bill Gates, TerraPower is investing in small-scale nuclear, such as the traveling wave reactor (TWR), a prototype, liquid-sodium-cooled, fast-breeder “Natrium” reactor that uses fertile depleted uranium for fuel. Fertile material is not fissionable on its own, but can be converted into fissionable fuel by absorbing neutrons within the reactor; for example, U-238 first absorbs a neutron to become U-239 that beta-decays to Np-239 and then fissionable Pu-239. A small amount of concentrated fissile U-235 “initiator” fuel is needed to start a breeder reactor as in a traditional nuclear reactor before the fertile U-238 atoms absorb enough neutrons to begin producing Pu-239. Other fertile/fissile fuel combinations are possible besides the U-238/Pu-239 cycle, such as thorium-232 and uranium-233 in a Th-232/U-233 cycle. A steady “breed-burn” wave is maintained by moving the fuel within the reactor to ensure a constant neutron flux.
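
Written out in standard isotope notation, the two fertile-to-fissile chains described above are (a notational summary only, not the specification of any particular reactor design):

```latex
% U-238/Pu-239 cycle: fertile U-238 captures a neutron, then beta-decays twice
\[
^{238}_{92}\mathrm{U} + n \;\rightarrow\; {}^{239}_{92}\mathrm{U}
\;\rightarrow\; {}^{239}_{93}\mathrm{Np} + \beta^-
\;\rightarrow\; {}^{239}_{94}\mathrm{Pu} + \beta^- \quad (\text{fissile})
\]
% Th-232/U-233 cycle: the analogous chain starting from fertile thorium
\[
^{232}_{90}\mathrm{Th} + n \;\rightarrow\; {}^{233}_{90}\mathrm{Th}
\;\rightarrow\; {}^{233}_{91}\mathrm{Pa} + \beta^-
\;\rightarrow\; {}^{233}_{92}\mathrm{U} + \beta^- \quad (\text{fissile})
\]
```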

Such low-pressure, non-light-water reactors (NLWRs) have never been proven, however, dismissed by one economist as “PowerPoint reactors – it looks nice on the slide but they’re far from an operating pilot plant. We are more than a decade away from anything on the ground.”70 Safety and maintenance are also concerns. Hyman Rickover avoided sodium-cooled reactors for the US navy because of high volatility, leaks, radiation exposure, and excess repair time, reporting to Congress in 1957 that “Sodium becomes 30,000 times as radioactive as water. Furthermore, sodium has a half-life of 14.7 hours, while water has a half-life of about 8 seconds.”71

Nonetheless, in 2021, a Chinese research group in Wuwei, Gansu, completed construction on the first molten-salt reactor since Oak Ridge National Laboratory’s 7-MW, U-233-fueled test device was shut down in 1969 after 5 years.72 Expected to produce only 2 MW and fueled for the first time with thorium, the Chinese MSR could take a decade to commercialize to a working 300-MW reactor. TerraPower is also hoping to build a $4 billion, 345-MW, depleted-uranium demonstration reactor in the coal-mining town of Kemmerer, Wyoming (population 2,656), which could be up and running by 2030 subject to the testing process.73 Half of the seed money is coming from the US Department of Energy, angering critics of the unproven technology, while no long-term, waste-disposal solutions were included in the plan.

Today, the only operational SMR is a floating reactor that brings nuclear power to the consumer. In 2018, Russia’s state nuclear power company, Rosatom, started up the Akademik Lomonosov, a $480 million, 70-MW, barge-mounted, pressurized LWR, first fueled and tested in Kola Bay near the northern city of Murmansk and then towed to the Arctic port of Pevek, where it is held in place by tether to a wharf behind a storm- and tsunami-safe breakwater. Factory-assembled and mass-produced in a shipyard, costs were reduced by about one-third and construction time by more than half. The plant provides district heating to the town of Pevek via steam-outlet heat transfer; asked about a possible radiation leak or accident, one resident replied, “We try not think about it, honestly.”74

Resembling a container ship more than a nuclear plant, a floating reactor can be towed where needed to provide short-term power. Although an emergency cooling system is not required at sea, the design has sparked obvious concerns – tsunamis, typhoons, collisions, and pirates offer new challenges to the usual worries about nuclear power. Despite the added operational responsibility, China plans to power artificial islands in the South China Sea with 20 floating reactors rated up to 200 MW each. There isn’t much scope for new designs in the high-priced nuclear market, but after a number of small-scale demonstrations of the novel, low-temperature, “pool type” Yanlong reactor, China also plans to build a 400-MW unit to heat 200,000 homes via district heating with water as a moderator, coolant, and radiation shield. Underwater and seabed reactors have also been proposed.

Despite the increased interest in small-scale nuclear reactors, SMRs aren’t cheap, primarily because of the uncertain, first-of-a-kind technology. A 2017 report predicted that SMRs have a “low likelihood of eventual take-up, and will have a minimal impact when they do arrive,” while a 2018 report noted that extra costs are always expected with new plant designs.75 Just as Admiral Rickover noted in 1957, when a congressional committee questioned the efficiency of his first civilian reactor in Shippingport: “Any plant you haven’t built is always more efficient than the one you have built.”76 Getting new technology right is never easy, doubly so with nuclear technology. Neither has the nuclear industry delivered on its promise of cheap power, instead saddling the world with an expensive, uncertain design, fraught with peril from weapons proliferation to the ongoing risk of a radioactive release or accident.

The attraction to nuclear science has always been connected to a belief that we are masters of our material world and that the power of the atom is the solution to all our energy needs. In his 1914 novel The World Set Free, H. G. Wells championed the brave new world of artificial radioactivity, signaling the coming era of radium as an elixir for cancer treatment and nuclear power as a way to break free of our earthly bonds and embrace a more modern future, while at the outset of civilian nuclear power in 1956 M. King Hubbert confidently stated, “we may have at least found an energy supply adequate for our needs for at least the next few centuries of the ‘foreseeable future’.”77 It would take a new understanding of the dangers of what we can’t see to slow that dream.

3.4 Radiation: What You Don’t See Is What You Get

Uranium (Z = 92) is the last element of the periodic table that naturally occurs in large-scale amounts. Transuranic elements (Z > 92) need to be artificially produced, for example, in a cyclotron where the elements from Z = 93 to 106 (neptunium to seaborgium) were first created. They are also highly unstable, as are their “daughter nuclei.” In fact, all elements above lead are unstable to varying degrees, different isotopes radioactively decaying in a chain of cascading transmutations, emitting alpha (α) and/or beta (β) particles in an ongoing quest for nuclear stability. The end goal is the stable nucleus of lead (Z = 82).

The reason is simple geometry and the ongoing competition between protons and neutrons in such a small space. As C. P. Snow noted, “If an atom were expanded to the size of the dome of St. Paul’s Cathedral, virtually all its mass would lie within a central nucleus no larger than an orange.”78 Inside that tiny core, nuclear packing is at a premium as positively charged protons repel each other, mitigated by neutrally charged neutrons to maintain stability. It is a delicate three-dimensional balance to keep the long-range, repulsive, proton–proton Coulomb force (1/r²) in check with the short-range, attractive, strong nuclear force (<10⁻¹⁵ m). To keep the balance in higher-Z atoms, more neutrons are needed. Indeed, for Z = 2 (helium, ₂He⁴) there is an equal number of protons and neutrons – a trend that continues for low-Z atoms – while by Z = 92 (uranium, ₉₂U²³⁸), 146 neutrons are needed to balance just 92 protons. If three is a crowd in a relationship, 238 is way too much for an atomic party.

Radioactive decay redresses the ongoing imbalance by ejecting nuclear matter from the core in an atomic tantrum: either an alpha or beta particle as well as the elusive neutrino that Pauli and Fermi first proposed. For example, U-238 eventually decays to stable lead after a series of alpha- and beta-particle emissions, accompanied by high-energy electromagnetic radiation called gamma radiation (γ), essentially high-energy x-rays.

The time for half of any material to radioactively decay is called a “half-life” (t½), and the decay is an exponentially decreasing function, that is, after each t½ period of time only half of what remained still survives: 1, 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128 (less than 1% after seven half-lives), …, etc., before nothing is left.79 We can plot the course by simple nucleon subtraction (α-particle emission) or proton addition (β-particle emission). For example, U-238 decays to Th-234 via alpha-particle emission (₉₂U²³⁸ → ₉₀Th²³⁴ + ₂He⁴), taking about 4.5 billion years for half of the uranium to turn into thorium. The by-products also have corresponding half-lives, thorium-234 turning into protactinium-234 via beta decay after 24.1 days (₉₀Th²³⁴ → ₉₁Pa²³⁴ + β⁻). More alpha and beta emissions follow in a continuing series of decays before reaching the end of the line at stable lead (Pb-206), 14 stages in all (Figure 3.4).
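
In equation form, the standard decay law behind that halving sequence is:

```latex
\[
N(t) \;=\; N_0 \left(1/2\right)^{t/t_{1/2}} \;=\; N_0\, e^{-\lambda t},
\qquad \lambda = \frac{\ln 2}{t_{1/2}}
\]
% After seven half-lives, (1/2)^7 = 1/128, i.e., less than 1% of the original material remains.
```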

Figure 3.4 The 14 decay stages of uranium-238 to stable lead-206.

Radioactivity is a random physical process originating in all unstable nuclei and occurs everywhere in the natural background of our material world: in the rocks, soil, and atmosphere (enhanced since the 1950s by nuclear-test fallout). Defined by Marie Curie as “spontaneous emission of radiation” as opposed to neutron-induced emission of radiation following nuclear fission, natural “low-level” radioactivity goes unnoticed in our daily lives and is relatively harmless because our cells can constantly repair themselves. Enriching or refining radioactive material and neutron-induced radioactivity emitted by fission products in nuclear fuel and bomb detonation (including a “dirty” bomb that disperses radioactive material in a conventional explosion) are not at all harmless, our bodies’ cell repair mechanisms unable to overcome the damage as DNA is destroyed.

Ionizing radiation can break apart atoms in genes by high-frequency “jiggling,” so both the radiation type – referred to as “ionizing” for its chemical effect on absorbing material80 – and the half-life are important for health and safety, for example, “An α-emitter with a very long half-life (for example, the common isotope of uranium, ₉₂U²³⁸) is relatively safe. A γ-emitter with a short half-life is very dangerous.”81 The potential biological damage depends on the absorbing material: skin, tissue (especially lymphoid), bone (and bone marrow), blood, and cells. However, if alpha particles penetrate human tissue or an alpha emitter is ingested, they are 10 times as dangerous as gamma radiation. Note that a short half-life indicates higher activity, measured in decays per second (becquerels), although the amount of radioactive material is important (one atom of Pu-239 is not as dangerous as 1 kg of U-235). Curies are also used: 1 curie (Ci) = 3.7 × 10¹⁰ becquerels (Bq), equal to the activity of 1 g of radium.

Over 100 different fission products are made in a nuclear reactor, all producing α, β, and γ emission (Figure 3.5): alpha particles (highly ionizing though weakly penetrating), beta particles (less ionizing but more penetrating), and gamma rays (weakly ionizing but highly penetrating). The radiation is contained within the reactor building by thick shielding, but after being removed from the core the spent fuel rods are still highly radioactive – as seen onsite in lead-lined, concrete fuel-storage pools where the water glows an eerie deep blue as emitted beta particles travel faster than light does in water (called Cherenkov radiation) – and must be permanently stored or reprocessed. All fission products are either highly radioactive or very long-lived, not to mention the deadly release of radioactive debris to the environment in the event of a reactor breach or the fallout from a nuclear bomb blast.

Figure 3.5 Penetrating power of ionizing radiation (α, β, x/γ, n).

Somatic effects (direct exposure) and hereditary effects (gamete damage passed on to a descendant through sperm and/or eggs) are both possible and can be acute (large single dose) or chronic (small repeated doses). The acute effects are reasonably well understood; long-term, low-level exposure less so. Radiation is always nicking away at our DNA and proteins, even sunlight, but the amount of damage depends on the rate of exposure versus the rate of our cells’ repair mechanism (DNA) and/or replacement mechanism (proteins/lipids). If the damage exceeds the ability to repair, we have a problem. Radiation sickness is at one end of the spectrum, while keratosis (a.k.a. age spots) is closer to the other. Furthermore, most cancers tend to come later in life because our repair mechanisms aren’t as efficient as we age and because damage that never healed accumulates over time. But just because one doesn’t get “sick” after radiation exposure or even develop cancer within a few years doesn’t mean no damage was done. One must always be wary, including about childhood summer sunburns, a good reason to always use sufficient sunscreen.

Radiation dangers were learned on the job by early experimentalists and other industry workers. Henri Becquerel was burned by a sealed vial of radium salt he carried in his pocket. Marie Curie, who purified radium in the pitchblende residues brought from mines in St. Joachimsthal, Bohemia, and her daughter, Irène Joliot-Curie, both died of leukemia, a cancer of the white blood cells produced in bone marrow that limits the body’s ability to fight infection and may have resulted from the “long exposure to radiation that both of them experienced during their scientific careers,”82 while Pierre Curie may have been dying of radiation poisoning when he was killed in a horrific carriage accident crossing a street in Paris. He noted in his 1905 Nobel lecture that after a 2-week exposure to even a small amount of radium, “redness will appear on the epidermis, and then a sore which will be very difficult to heal. A more prolonged action could lead to paralysis and death.”83

Four of the five workers who started the study of radium under the Curies died of radium poisoning,84 while luminous dial painters who shaped the tips of their radium-laced brushes with their tongues to paint thin lines and the smallest of dots onto fancy, glow-in-the-dark watch faces (called “lip pointing”) died of cancer, anemia, and necrosis of the jaw, before the dangers were understood and the practice curtailed. Early uranium miners suffered increased levels of lung cancer, likely from inhaling radon gas (a uranium and radium decay-chain nuclide), whose alpha-emitting “radon daughters” radioactively decayed inside their lungs, or from breathing in alpha-emitting dust in poorly ventilated areas. For miners in the Elliot Lake region between Sudbury and Sault Ste. Marie in northern Ontario, where uranium has been mined since the 1950s, the incidence of lung cancer was found to be twice as high as in the general population. Elsewhere, a young uranium refinery worker exposed to alpha-particle dust died within 2 years from reticulum cell sarcoma (bone lymphoma), as did many Japanese bomb victims.85

Soon after World War II ended, two scientists at Los Alamos died of acute radiation poisoning in separate incidents when the same plutonium test core accidentally went supercritical – Louis Slotin died after 9 days and Harry Daghlian within a month. A number of Manhattan Project scientists died of cancer, including Oppenheimer, Fermi, and Herbert Anderson, who developed beryllium disease from inhaling radioactive dust particles created during the sawing, grinding, and cutting of beryllium (an early reactor moderator and neutron trigger). Over 40% of the 220 cast and crew of the 1956 epic The Conqueror produced by Howard Hughes and starring John Wayne contracted cancer, 47 of whom died. The Utah film location was downwind of a nuclear test site at Yucca Flat, Nevada, while 60 tons of radioactive dust was transported to Hollywood to finish the picture on a studio lot.86 After a reactor coolant leak aboard the first-generation Soviet nuclear submarine K-19, which was patrolling the north Atlantic on its maiden voyage in 1961 in the midst of heightened Cold War tensions, eight engineering crew members died within a few days and another 20 within a few years. (Perhaps the first ever LOCA, the event was depicted in the film K-19: The Widowmaker starring Harrison Ford and Liam Neeson.)

Fortunately, alpha particles don’t easily penetrate the skin, but alpha-emitting material can still be inhaled or ingested, becoming especially dangerous, for example, the poisoning in a London hotel of the Russian émigré Alexander Litvinenko, who was given tea laced with polonium-210, a high-intensity alpha emitter. As Penny Sanger notes in Blind Faith: The nuclear industry in one small town (p. 46), “Alpha particles carry an electrical charge and cling to dust particles that may be swallowed, inhaled, or even … work their way into the body through the pores of the skin. They in particular are the source of the lung dose that may cause the lung cancer associated with radon gas.” Whatever the type of radiation (α, β, or γ), one must be ever cautious with nuclear material.

3.5 Radiation: Long-Term Effects

As for the long term, the lingering effects are well known from the hundreds of thousands of surviving hibakusha (bomb-affected people) in Hiroshima and Nagasaki in the years and decades after the first atomic bomb detonations, at the time called “radiation sickness.” The total number of deaths from the blasts and immediate aftermath are estimated to be between 110,000 and 210,000,87 but only 5% of the radiation exposure came from the initial blast and 10% from residual radiation, while the majority came from fallout, brought to the earth by “black rain” and responsible for the most acute radiation symptoms, including outside the bomb area.88 Despite a post-war, US Senate commission report by General Leslie Groves, the ever-pragmatic military head of the Manhattan Project, stating that high-dose radiation was “without undue suffering” and “a very pleasant way to die,”89 numerous hibakusha died of painful leukemia and other cancers after the fact, while many others were severely disfigured and suffered for decades, afraid to show themselves in public or marry and have children because of a fear of offspring birth defects.

Today, two of the most worrisome radioisotopes are strontium-90 (Sr-90) and iodine-131 (I-131), both biologically active and relatively long-lived. Sr-90 has a half-life of 28.8 years (decaying via yttrium-90 to stable Zr-90) and can replace calcium in bones, causing leukemia (as group II elements, strontium and calcium are chemically similar).90 I-131 has a half-life of 8 days (decaying to Xe-131), but can accumulate in the thyroid gland, which regulates metabolism, especially dangerous for children with active thyroids. I-131 also increases the risk of thyroid cancer.

Prior to the 1963 Limited Test Ban Treaty banning aboveground nuclear detonations, radiation fallout from nuclear weapons testing resulted in well-documented increases in cancers around the world, particularly leukemia and thyroid cancer, while local cancers have been traced to contaminated milk wherever cows ate strontium-90-laden feed. After a 1957 reactor fire at a UK plutonium-making plant, 200 square miles of countryside was contaminated, while half a century later local leukemia rates were roughly 10 times the UK average.91 In the event of a nuclear accident, taking calcium supplements and iodine tablets is recommended to saturate the body, protecting against radioactive Sr-90 and I-131 retention.

In 2002, after growing public concerns over the Sellafield nuclear fuel reprocessing plant located at the converted site of the former Calder Hall power-producing reactor and Windscale plutonium bomb-making reactor on the northwest coast of England, every household in Ireland was sent a box of potassium iodate (KIO₃) pills, delivered by regular post to their doors (six 85-mg tablets containing 50 mg each of stable iodine). I still have the small orange box that eerily reads “FOR USE ONLY IN THE EVENT OF A NUCLEAR ACCIDENT” with instructions that “these tablets work by ‘topping up’ the thyroid gland with stable iodine in order to prevent it from accumulating any radioactive iodine that may have been released into the environment.” Pregnant women, breast-feeding mothers, newborns, and children up to 16 years are considered the most vulnerable, while only those allergic to iodine are advised not to take them (or if the tablets turn brown). The protection is statistical, any absorbed or ingested radioactive strontium/iodine competing for binding sites with the ingested calcium/iodine, which doesn’t always work. Fortunately, I haven’t had to open the contents. More worrisome, iodine tablets were distributed in the summer of 2022 to people living near the Zaporizhzhia nuclear power plant in southern Ukraine after possible threats of radiation breaches from ongoing shelling during the Ukraine war.

Cesium-134 and cesium-137 are also common radioactive fission products released to the atmosphere in a nuclear accident, soaking into the ground after mixing with rain. Both are gamma-ray emitters that cause burns, acute radiation sickness, and cancer; Cs-134 is more active (t½ = 2 years), while Cs-137 is of greater long-term concern (t½ = 30 years), and both have further high-energy decay series. Almost half of the radioactive Cs-137 released in the 1986 Chernobyl reactor meltdown in Ukraine remains in the environment, found “to some extent” in every country in the northern hemisphere, including in the “raw milk from a Minnesota dairy.”92 Government restrictions on the production, transportation, and consumption of some food remained in effect for decades afterwards in many countries, including on UK sheep and Swedish and Finnish reindeer, as well as mushrooms, berries, and fish throughout much of Europe. An insoluble, orally administered “ion-exchange” drug (Prussian blue) decorporates cesium from the body.
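
That roughly-half figure is consistent with simple decay arithmetic, assuming roughly 36 years between the 1986 accident and the time of writing and the 30-year half-life quoted above:

```latex
\[
\left(1/2\right)^{36/30} \;\approx\; 0.43
\]
% i.e., a little under half of the Cs-137 released in 1986 has yet to decay.
```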

Radon gas is a leading cause of lung cancer, especially pernicious because a gas is more easily inhaled than other radioisotopes. Radon is odorless, tasteless, invisible, chemically inert, and exists in highly variable concentrations, making accurate detection difficult. An intermediate radioisotope in the decay chain of uranium and thorium, radon gas is the first daughter nucleus of radium (₈₈Ra²²⁶ → ₈₆Rn²²² + ₂He⁴), while its own decay products are also radioactive – the eerie sounding radon daughters polonium-218 and lead-214 – present wherever radon gas lingers, continuing to emit alpha, beta, and gamma radiation after inhalation or ingestion. The most stable (that is, least unstable) isotope, radon-222, has a half-life of 3.8 days. Radon gas is found in pitchblende mines, as a by-product of nuclear fuel enriching and refining, and in basements and other poorly ventilated areas, having seeped up through the cracks in the underlying rock, soil, and concrete from deep within the earth.

Plutonium-239 (t½ = 24,000 years) accumulates in the lungs, liver, and bones as well as the gonads, causing damage to reproductive organs. Tritium or hydrogen-3 (t½ = 12.5 years) affects the whole body and is dangerous if ingested (for example, pure tritiated water, T₂O, or diluted HTO). Chelating agents are used to eliminate internal contamination with transuranic metals, while drinking lots of ordinary (uncontaminated) water will flush out radioactive tritium. If inhaled, ingested, or absorbed through the skin, americium-241 (t½ = 432 years) – minute amounts of which are used in everyday smoke detectors – accumulates in the lungs, liver, and bones, while iridium-192 (t½ = 74 days) accumulates in the spleen.

When used in controlled ways, nuclear radiation has practical applications similar to x-rays. Cobalt-60 is a common cancer treatment radioisotope, where a highly focused external beam of gamma rays from Co-60 decay (t½ = 5.3 years) is directed at a cancerous tumor to destroy malignant cells, called a “gamma knife,” although some nearby healthy cells are killed in the process, often causing radiation sickness (nausea, fatigue, hair loss). Co-60 gamma radiation is also used to sterilize medical equipment (gowns, masks, bandages, syringes) and to irradiate food (a controversial way to kill viruses, bacteria, and germs such as the microorganisms Escherichia coli, Listeria, and Salmonella as well as insects and parasites), while Co-60 pellets can be inserted directly into the body to destroy cancerous tumors. Ironically, ionizing radiation can indiscriminately cause cancer, but when used locally kills cancerous cells.

Marie Curie’s daughter Irène won the 1935 Nobel Prize in Chemistry for creating short-lived artificial radioactive isotopes (a.k.a. radioisotopes) to treat cancer, employed in implants or as external radiation sources, which were later incorporated in medical imaging techniques for radiotherapy. Short-lived sources are most effective, employed before they completely decay, yet not sufficiently long-lasting to cause ongoing damage. Today, there are 20 million nuclear procedures per year in the United States, with Tc-99m the most common radioactive tracer, employed in about two-thirds of all nuclear procedures and involving one-third of all hospital patients.93 A radioactive tracer that “tags” molecules to image their path through the body, Tc-99m is very short-lived – the m is for metastable, while its γ-emission half-life is 6 hours, essentially gone within two days. Carbon-14 and tritium are also employed, while phosphorus-32 marks cancer cells that absorb more phosphates than healthy cells.

The most biologically worrisome radioisotopes (a.k.a. radionuclides), their likely source, radiation type, half-life, and main health concerns are summarized in Table 3.3 as reported by the US Centers for Disease Control and Prevention. Radium and tritium are also included. Further radioactive decay follows from each daughter nucleus, emitting more ionizing radiation (treatment countermeasures are available from the US Department of Health).

Table 3.3 Important radioactive isotopes (radioisotopes)

| Radioisotope | Symbol (ZXA) | Source | Type | Half-life | Main health concerns |
|---|---|---|---|---|---|
| Hydrogen-3 (tritium) | ₁H³ (T) | Reactor | β | 12.5 years | Cancer |
| Cobalt-60 | ₂₇Co⁶⁰ | Natural, reactor | β, γ | 5.3 years | Burns, acute radiation sickness, cancer |
| Strontium-90 | ₃₈Sr⁹⁰ | Fallout, reactor | β | 29 years | Burns, cancer (bone and bone marrow) |
| Iodine-131 | ₅₃I¹³¹ | Reactor | β, γ | 8 days | Burns, thyroid cancer |
| Cesium-137 | ₅₅Cs¹³⁷ | Fallout, reactor | β, γ | 30 years | Burns, acute radiation sickness, cancer |
| Iridium-192 | ₇₇Ir¹⁹² | Reactor | β, γ | 74 days | Burns, acute radiation sickness |
| Radium-226 | ₈₈Ra²²⁶ | Decay | α | 1,600 years | Cancer |
| Uranium-235 | ₉₂U²³⁵ | Reactor | α | 7.0 × 10⁸ years | Bone and lung cancer |
| Uranium-238 | ₉₂U²³⁸ | Reactor | α | 4.5 × 10⁹ years | Bone and lung cancer |
| Plutonium-239 | ₉₄Pu²³⁹ | Reactor | α | 24,000 years | Lung disease, cancer |
| Americium-241 | ₉₅Am²⁴¹ | Fallout | α, γ | 432 years | Cancer |
Source: “Emergency Preparedness and Response: Radioactive Isotopes,” Centers for Disease Control and Prevention, Atlanta, Georgia. https://emergency.cdc.gov/radiation/isotopes/index.asp.

There is much debate over the advantages and disadvantages of low-level radiation. Radium (Z = 88) was once thought to be a cure-all for various ailments such as arthritis and cancer, a presumed wonder mineral with miraculous medicinal properties when gently massaged into the skin in small doses, now understood to be low-level radiation. Marie Curie called the glow from the radium she painstakingly extracted from her uranium ores “faint, fairy light.” Unfortunately, she was slowly poisoned by its more sinister effects in concentrated form. The Paris office she shared with her husband is still radioactive today, showing localized “hot spots” on a Geiger counter almost 100 years after her pioneering experiments on radioactive substances.

Hot spas have also been known to soothe various aches and pains, including rheumatism, gout, and neuralgia. Such geothermal spas are warmed by trapped heat within the earth and vary in radioactivity and temperature depending on location (some are also heated by volcanic sources). Others, now known as radium spas, also carry radon gas from deep within the Earth’s interior. One of the most active radium spas is in Bad Gastein in Austria (~40 kBq/m³), while the ancient spas on the Greek island of Ikaria are famous for their therapeutic value, having regularly been used by Byzantine royals, “superheated” by temperatures up to 60°C and accompanied by weak to strong radioactivity (~8 kBq/m³). Named after Icarus, who flew too close to the Sun and plunged into the nearby Aegean Sea when his wax wings melted, the island’s locals are said to live longer on average than the rest of Europe, although diet, location, and lifestyle likely explain the increased longevity.

Alas, one must be careful with what one can’t see. The adverse effects are clear, especially if alpha emitters are inhaled or ingested or one is exposed to high levels of gamma or x-ray radiation. Radiologists have a 5.2-year lower life span on average than other doctors as well as a 70% higher incidence of cardiovascular disease and some cancers compared to the general population, while their rate of leukemia is 8 times that of other medical doctors.94 In fact, 2% of all cancers are caused by medical x-rays, primarily CT scans (3D x-ray images produced via computed tomography). Like radioactivity itself, radiation disease is statistical, making an exact cause hard to pin down, while body types and immune systems differ greatly across the general population. An acceptable dose is difficult to quantify.

Two analogies help explain the confusing dilemma between beneficial radiation (for example, directed medical isotopes) and harmful radiation (for example, from uranium refining, leaky spent fuel, accidental release, and nuclear fallout): water and hammer taps. One drop of water on the skin is inconsequential as are a series of drops, while a steady flow soothes as in a hot shower. A non-stop series of high-frequency drops, however, will ultimately damage the skin. Replace the water with a gentle tap from a hammer and the body can still recover if subjected to a small number, but tap every second for an entire day – even very lightly – and the skin will blister and bleed before eventually dying. If the taps continue, the hand will even fall off.

A further analogy is the Sun, especially for gamma radiation. Too much exposure to high-energy photons damages the upper layers of the skin such that without sufficient time for the epidermis and dermis to heal, the skin will not only burn but die, increasing the risk of cancer (especially melanoma). Like Icarus, we must all be wary of overexposure to the Sun.

Although there is no agreed safe level – clinical or otherwise – the risk of cancer increases with the level, length, and location of exposure. Furthermore, low-level exposure does not always lead to cancer but can produce cardiovascular and circulatory problems, often excluded when analyzing clustered data. Counting only cancers is not sufficient to measure all health effects of radiation, as “radiation can change the whole chemistry of the body, making it more susceptible to other disease.”95 Although hard to measure, hereditary effects are also possible.

Dr. Rosalie Bertell, a noted cancer research scientist who specialized in the health effects of x-rays, disagreed with the basic assumption that low-level nuclear radiation is harmless. She noted that while normal cellular repair can occur after radiation exposure stops, some cellular damage is irreparable and that on top of the observable clinical effects, radiation produces “a generalized, systemic ‘aging’ effect on the body.”96

The experiments of Hermann Muller, the pioneering geneticist and outspoken 1946 Nobel laureate, showed that genetic mutations were produced in fruit flies after x-ray exposure, suggesting a no-threshold dose and increased damage with more exposure. Although contrary evidence suggested the possibility of a low-dose threshold, Muller’s results popularized the idea of a linear no-threshold (LNT) relationship between radiation exposure and harmful effects, precipitating the introduction of shielding, at least to protect the reproductive organs.97 In the 1954 president’s address to the American Association for the Advancement of Science, Caltech professor and genetics pioneer Alfred Sturtevant reiterated the general understanding that “any level whatever” is “at least genetically harmful to human beings” and that low-level, high-energy radiation is a “biological hazard.”98

***

To understand better the potentially lethal health effects and requisite safety guidelines, we must understand the units, which unfortunately are a bit confusing. So far, we have seen the becquerel (Bq) and the curie (Ci), named after Henri Becquerel and Marie Curie, which relate to the physics of radiation, that is, the statistical disintegration of a nucleus as measured in decays per second heard in the clicking of a Geiger counter (1 Bq = 2.7 × 10⁻¹¹ Ci). For example, 1 g of radium-226 decays 37 billion times per second, defined as 1 curie (37 billion Bq = 1 Ci), while 1 g of plutonium-239 decays 2.3 billion times per second (2.3 billion Bq or 0.063 Ci).99
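
Both per-gram figures follow from the half-lives alone via A = λN. A minimal Python sketch, using rounded values for Avogadro’s number and the seconds in a year, so the results are approximate:

```python
import math

AVOGADRO = 6.022e23   # atoms per mole
YEAR = 3.156e7        # seconds per year (approximate)

def activity_per_gram(half_life_years: float, molar_mass: float) -> float:
    """Decays per second (Bq) from 1 gram of a pure radioisotope: A = lambda * N."""
    decay_constant = math.log(2) / (half_life_years * YEAR)  # lambda, in 1/s
    atoms_per_gram = AVOGADRO / molar_mass                   # N for 1 gram
    return decay_constant * atoms_per_gram

# Radium-226: half-life ~1,600 years, molar mass ~226 g/mol
print(f"Ra-226: {activity_per_gram(1600, 226):.2e} Bq/g")    # ~3.7e10 Bq = 1 Ci
# Plutonium-239: half-life ~24,000 years, molar mass ~239 g/mol
print(f"Pu-239: {activity_per_gram(24000, 239):.2e} Bq/g")   # ~2.3e9 Bq, ~0.06 Ci
```

Running it gives roughly 3.7 × 10¹⁰ Bq for radium-226 (the historical definition of the curie) and roughly 2.3 × 10⁹ Bq for plutonium-239, matching the figures quoted above.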

Other units measure the biological effect of radiation, that is, the harm to a cell rather than a click on a Geiger counter. Rads and rems are non-standard but are still sometimes used, where a rad (radiation absorbed dose) measures the physical exposure per unit mass and a rem (roentgen equivalent man) the biological effect. Grays (Gy) and sieverts (Sv) are the SI equivalents, where 1 Gy equals 100 rads and 1 Sv equals 100 rems. A rem or sievert does roughly the same damage whatever the type of radiation. To keep things simple, we will stick to sieverts or microsieverts (µSv) wherever we can.

Another measure, the Q-factor (Q), also helps clarify the differing effects of the main ionizing culprits: alpha radiation is the worst (Q = 20) because it travels the shortest distance and thus does the most localized internal damage if ingested, while beta, gamma, and x-rays are comparatively less damaging (Q = 1). The damage also depends on the organ and its cell division rate, the most susceptible being the gonads, thyroid, skin, eyes, lungs, and bone marrow.

The recommended annual maximum dose for the general public is 5,000 µSv, while the average background radiation is typically around 1,000–3,500 µSv per year: 500 from cosmic rays, 500 from soil, and 2,500 from one’s surroundings depending on where one lives. For example, Denver is close to the mountains and their uranium-laden granite, and sits at high elevation (the Mile High City), and thus receives more background radiation than average. In our daily lives, most of us don’t come close to exceeding the limit, even with a regular dental x-ray (5 µSv), chest x-ray (100 µSv), mammogram (400 µSv), or occasional 2-hour plane trip (5 µSv) where cosmic rays are less shielded at higher altitudes, although a CT scan is especially high (~7,000 µSv) and thus the number of scans a patient receives is closely monitored. Wherever possible, ultrasound or magnetic resonance imaging is preferred to a high-dose PET-CT scan.

The recommended maximum dose for a US nuclear worker is 50,000 µSv/year (10 times the general public limit), while a cardiologist or radiologist who performs 100 operations in a year might receive 10,000 µSv. An astronaut on the International Space Station gets about 160,000 µSv/year (more than 30 times the general public limit) as do cigarette smokers from trace amounts of radioactive lead and polonium in tobacco (another good reason to quit). Few of us will ever reach 1 Sv (1 million µSv) in our lifetime from normal activity (200 years of the maximum public dose), although at 32 times the average annual dose smoking clearly increases exposure and the risk of cancer (other cigarette toxins also add to the risk). Importantly, 1 Sv indicates a 5.5% increased chance of cancer, although receiving a 1-Sv dose all at once is extremely dangerous.

The US Nuclear Regulatory Commission (NRC) provides a “personal annual radiation dose” calculator to work out one’s exposure depending on location and lifestyle (for example, amount of plane travel, plutonium-powered pacemaker, x-rays).100 Mine is 3,400 µSv (340 mrems), well below the recommended annual maximum public dose of 5,000 µSv (500 mrems), more than half coming from atmospheric radon. According to the NRC, Americans receive an average annual radiation dose of about 6,200 µSv (620 mrems).
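
The arithmetic behind such calculators is simple addition. A rough, illustrative sketch in Python using only per-item figures quoted in this section – the category names and choice of items are assumptions for illustration, not the NRC’s actual worksheet:

```python
# Rough, illustrative annual dose tally in the spirit of the NRC calculator.
# All values in microsieverts (uSv), taken from the figures quoted in this chapter.
typical_sources = {
    "cosmic rays": 500,
    "soil": 500,
    "surroundings": 2500,        # varies widely with location (e.g., Denver is higher)
    "one dental x-ray": 5,
    "one chest x-ray": 100,
    "four 2-hour flights": 4 * 5,
}

total = sum(typical_sources.values())
PUBLIC_LIMIT = 5000  # recommended annual maximum for the general public, uSv

print(f"Estimated annual dose: {total} uSv")
print(f"Fraction of the recommended public maximum: {total / PUBLIC_LIMIT:.0%}")
```

On these illustrative numbers the total comes to about 3,600 µSv a year, comfortably under the 5,000 µSv public maximum.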

Some of the most radioactive places on Earth are listed in Table 3.4, the data collected via a hand-held Geiger counter. The basement of the Pripyat hospital – where the firefighters were taken after the 1986 Chernobyl reactor accident – produced the highest surveyed level, still delivering 1,000 µSv/h after 30 years, equivalent over a year of continuous exposure to 1,800 times the recommended annual maximum dose, or about 1 Sv after only 6 weeks. That's the equivalent of 60 years of smoking in one year or 100 packs a day for a year!

Table 3.4 A survey of some of the most radioactive places on Earth

Radioactive source | µSv/h | × annual dose
Hiroshima peace dome | 0.3 | 0.53
Marie Curie office (door knob) | 1.5 | 2.6
Jáchymov uranium mine | 1.7 | 3.0
Alamogordo Trinity bomb site | 0.8 | 1.4
Alamogordo trinitite mineral | 2.1 | 3.7
Airplane flight (33,000 feet) | >2 | >3.5
Chernobyl reactor #4 exterior | 5 | 8.8
Fukushima exclusion zone | 10 | 18
Astronaut | 18 | 32
Smoker's lung (polonium) | 18 | 32
Chernobyl hospital basement | 1,000 | 1,800
Source: “The Most Radioactive Places on Earth” [documentary], presented by Derek Muller, Veritasium, December 17, 2014. www.youtube.com/watch?v=TRL7o2kPqw0.
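The right-hand column of Table 3.4 is simply the hourly rate scaled to a year of continuous exposure and divided by the 5,000 µSv public guideline. A quick sanity check, assuming round-the-clock exposure:

```python
# "x annual dose" = hourly rate x 8,760 hours, divided by the 5,000 µSv/year
# public guideline, i.e., continuous exposure at that spot for a whole year.
HOURS_PER_YEAR = 8760
ANNUAL_LIMIT_USV = 5000

def times_annual_limit(rate_usv_per_hour):
    return rate_usv_per_hour * HOURS_PER_YEAR / ANNUAL_LIMIT_USV

print(times_annual_limit(0.3))    # Hiroshima peace dome: ~0.53
print(times_annual_limit(18))     # astronaut / smoker's lung: ~32
print(times_annual_limit(1000))   # Pripyat hospital basement: ~1,750 (~1,800)

# At 1,000 µSv/h the basement also delivers 1 Sv (1,000,000 µSv) in about
# 1,000,000 / 1,000 = 1,000 hours, roughly 42 days or 6 weeks.
```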

Not included in the survey, the Rongelap Atoll in the Marshall Islands may be the most contaminated place on Earth, adjacent to the site of 67 nuclear weapons tests from 1946 to 1958, including the 15-megaton-TNT Bravo thermonuclear blast to the west at Bikini Atoll. The picturesque coral atoll is still uninhabitable more than a half century on, the evacuated residents promised a return that never comes. Some believe the US government used the tests to measure the effects of radiation on humans (Project 4.1 Biomedical Studies). The Soviets also exposed 1.5 million people during almost 500 atomic tests in northeast Kazakhstan, where locals continue to suffer increased rates of miscarriage, skin disease, breast, throat, and lung cancer, aplastic and hemolytic anemia, physical deformities, nervous disorders, schizophrenia, mental retardation, hereditary disease, and oncological disorders at twice the rate of Hiroshima and Nagasaki. There are no plans to clean up either site, while no one has cleaned up Marie Curie's suburban Paris lab more than 100 years after she painstakingly purified pitchblende to extract radium.

Radioactivity is not easily understood, inside or outside of the nuclear industry. Built up from more than a century of gold mining, waste heaps on the outskirts of Johannesburg, South Africa, contain almost 600,000 tons of uranium (50 g per ton of waste tailings) and emit radiation at 30 times the natural background level of 0.10 µSv/h, comparable to the exclusion zone around Chernobyl. The open sites also include heavy metals, with arsenic at over 300 times and lead at 80 times permitted levels, in what has been called "one of the biggest environmental disasters in the world."101

Although the effects of nuclear radiation are still debated, basic occupational precautions are essential: masks and adequate ventilation for uranium miners and refinery workers, full-body lead aprons for medical workers, and restricted exposure in high-risk jobs (limiting the number of surgeries, hours at high altitude, or time spent in a power plant). Dosimeter badges must be worn at all times, though they don't necessarily capture full exposure, and shielding is never complete: surgical aprons, for example, don't cover the head, hands, or legs (although the gonads are the most susceptible).

While most of us needn't worry about annual radiation doses, accident exclusion zones, or abandoned test sites, there are still everyday concerns for those who live near a uranium refinery, fuel reprocessing facility, or nuclear power plant, all of which regularly release radioactive material into the air, soil, and water during normal operations. Groundwater is also contaminated by radioactivity from corroded pipes and spills from old storage tanks, while careless waste-management practice continues to cause ongoing environmental damage and increased health problems. In some locales, the lack of action is beyond comprehension.

3.6 Nuclear Waste Products: Impossible to Ignore

A 2010 report on the state of American reactors, entitled Leak First, Fix Later, listed 102 radioactive leaks since 1963, mostly tritium, cobalt-60, and strontium-90. The report noted that a reactor facility contains up to 20 miles of buried pipes of varying durability that carry radioactive water under buildings, foundations, and parking lots, and can fail from corrosion, erosion, or seismic activity. The radioactivity is found only after the damage is done.

After radioactive ditchwater was discovered beside the Exelon Braidwood Generating Station, about 50 km southwest of Chicago, further studies counted 22 spills from 1996 to 2000 over a "four and a half mile-long pipe running from the nuclear station to a dilution discharge point on the Kankakee River."102 Two of the spills dumped six million gallons of radioactive reactor water that contaminated ponds, wells, and groundwater with tritium and cobalt-60. In 2009, an Exelon reactor at Oyster Creek, New Jersey, spilled tritium-laced water at 50 times the state standard, contaminating local drinking supplies.103 Similar to the petroleum industry, the nuclear industry is not obliged to report leaks and is subject only to voluntary self-reporting.

Radioactive waste is categorized as high-level waste (such as spent fuel rods and bomb material), intermediate-level decommissioning material (for example, reactor core parts), or low-level waste (such as mill tailings, nuclear plant water, and reactor building parts as well as clothes and hospital waste). The United States produces 15 million tons per year of mill tailings, the "largest source of any form of radioactive waste."104 A typical GW-reactor produces about 3 m³ of radioactive waste per year, which puts the USA at about 300 m³ for its 100 reactors, or more than 2,000 tons a year. A 2009 Scientific American article pegged the accumulated American spent fuel waste at 64,000 tons, enough to cover a football field 7 yards deep.105 By 2023, the amount was over 90,000 tons, adding almost another 3 yards to the problem.
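The arithmetic of accumulation is straightforward using the chapter's own figures: roughly 2,200 tons of spent fuel added each year on top of the 64,000 tons counted in 2009, with the football-field depth assumed to scale linearly with total mass. A rough sketch:

```python
# Rough accumulation of US spent fuel, using the figures quoted in the text:
# ~64,000 tons in 2009, growing by ~2,200 tons per year, with the
# "football field" depth assumed to scale linearly with total mass.
BASE_YEAR, BASE_TONS, BASE_DEPTH_YARDS = 2009, 64_000, 7
TONS_PER_YEAR = 2_200

def spent_fuel_tons(year):
    return BASE_TONS + TONS_PER_YEAR * (year - BASE_YEAR)

def football_field_depth_yards(tons):
    return BASE_DEPTH_YARDS * tons / BASE_TONS

for year in (2023, 2030):
    tons = spent_fuel_tons(year)
    print(f"{year}: ~{tons:,} tons, ~{football_field_depth_yards(tons):.0f} yards deep")

# 2023 comes out near 95,000 tons (~10 yards deep), in line with the text.
```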

To be sure, the waste isn't going to disappear anytime soon, nor are disposal solutions getting any easier. Indeed, one of nuclear power's biggest problems is where to put the accumulating waste, some with radioactive half-lives in the millions of years. The issue can't be swept under the rug any longer, yet refineries still store low-level waste in plain view under black tarpaulin covers at temporary sites, where it can easily leach into the soil or groundwater or even blow away in the wind.

The story of one small town on the north shore of Lake Ontario is a cautionary tale that beggars belief. Located 100 km east of Toronto, Port Hope became part of the nuclear family in 1932 after a local mining company, Eldorado Gold Mines, began producing radium for the burgeoning international medical market. Led by a co-worker of Marie Curie, who had been lured from France to manage the lucrative new radium trade, the operation dug pitchblende out of inhospitable mines at Great Bear Lake in the Northwest Territories and laboriously refined it into pure radium in the much more accessible and hospitable town of Port Hope, then a lakeshore paradise of about 5,000 inhabitants. Radium comprises less than a one-millionth part of pitchblende – the shiny, black, tar-like uranium- and radium-containing ore – and a single gram required 6.5 tons of ore and 7 tons of chemicals to dissolve and filter the caustic slurry.106 The prize at the end fetched as much as $125,000, undercutting the world's only other radium refinery in Czechoslovakia, where Marie Curie first collected uranium ore to produce polonium and radium and which Hitler had commandeered in 1939.107 At the time, the unwanted uranium was simply discarded in the process.

Tasked by the Canadian government to supply bomb material for the Manhattan Project in World War II, Eldorado retooled its radium production to provide almost 700 tons of uranium oxide to the war effort, becoming one of the world's largest uranium refineries by the end of hostilities. Reformed as a crown corporation, the company then began producing uranium and UF6 feedstock for civilian reactors – natural uranium fuel bundles for Canada's home-grown, heavy-water reactors and converted uranium for export to manufacture enriched U-235 fuel rods for American light-water reactors. Strategically important as an international uranium supplier and the town's then largest employer, Eldorado Nuclear Limited was partly privatized in 1988 and renamed the Canadian Mining and Energy Company (a.k.a. Cameco), before being fully privatized in 2002 to become the world's largest publicly traded uranium company, producing almost one-third of Western uranium supplies.108 Radium, having lost its medical luster and being of questionable therapeutic value, was phased out of production in 1954.

During decades of operation at the mouth of the Ganaraska River that cuts the picturesque town roughly in half, Eldorado refined uranium to make fuel for the nuclear industry, often disregarding sensible safety procedures when disposing of the mill tailings produced in the process. Heavy-metal wastes such as arsenic were casually discarded in unfenced, poorly signed, and leaky dumps around the town and in other nearby locales. Lax or non-existent regulations even permitted landfill to be carted off by local builders, ultimately contaminating various parts of the community with toxic and radioactive waste, some of which decayed into carcinogenic radon gas that accumulated in poorly ventilated areas.

By 1975, elevated levels of radioactivity were discovered throughout the town, including in an elementary school built with Eldorado landfill, while ravines, beaches, and the harbor all became contaminated. Numerous hot spots showed high radiation and toxicity levels – radon 100 times, arsenic 200 times, and gamma radiation 300 times normal levels – forcing the school’s closure for almost 2 years, while elevated levels of radon gas were detected in more than 500 homes, alarming citizens about the health effects of long-term radiation exposure, especially for children who are more susceptible than adults.

Little was done to alleviate the worries of those concerned about the health effects of low-level radiation, setting off a decades-long battle between pro-nuclear advocates concerned about jobs and anti-nuclear activists wanting the waste cleaned up and the government to commission a study into the damage and potential future effects on their health. Originally dubbed "The town that radiates friendliness" in a slick 1930s radium marketing slogan, Port Hope now offers a "toxic tour" around town that starts the Geiger counter ominously clicking.

On-site construction regulations were also found to be substandard, with Eldorado given preferential treatment in 1981 to build a new UF6 plant without any of the usual planning permission. The company gobbled up lake-front real estate in one of the most postcard-perfect southern Ontario towns, appropriating a community beach, picnic area, and baseball diamond. In a move that cemented the power of the state over communal lands, the once-popular West Beach became inaccessible to the public as Eldorado expanded without community input or regulatory oversight.

National nuclear security (or its converted civilian nuclear power form) is uniquely advantaged to create an energy strategy outside the public forum or agreed public good. In Canada, a crown corporation isn’t required to submit an environmental assessment plan, and thus remains beyond the reach of standard scrutiny, stifling discussion about operations, safety, and strategic goals, while denying open debate. Typically, the public can’t even participate in regular safety reviews. Elsewhere, for example, in the UK, the regulator doesn’t even have to consult the public on any issues about nuclear safety.109

By 1995, the then renamed Cameco also wanted to store one million tonnes of nuclear waste in a proposed $200 million, 85-foot-deep, cavern complex at the previously appropriated West Beach site to house mill tailings from its operations as well as other radioactive waste from communities across Canada. Perhaps banking on a similar acquiescence from a seemingly uninformed public, pro-industry advocates glossed over the dangers of low-level radioactivity for almost a year, assuming approval was a formality.

However, Port Hopers had other ideas, tired of Eldorado’s overreach. One resident summed up the growing outrage in a Letter to the Editor of the town newspaper about building a waste disposal site beside the town water pumping station, 500 m from an intake water pipe: “Sign me crazy but not stupid.” Another letter from a former Eldorado mining engineer cum anti-dumping activist aptly captured the concern within the community: “The one certainty is that once you get underground is that nothing is certain. In X number of years, God only knows what might happen.”110 A simple calculation illustrated the magnitude of the proposed operation: “200,000 truck loads, or one every ten minutes, ten hours a day for ten years!”111 Long-time resident and award-winning author Farley Mowat branded the proposal “Canada’s Nuclear Sinkhole,” while the geneticist and science broadcaster David Suzuki, who came to speak at a community meeting at the height of a town-wide discussion on the issue, clarified the reality of burying nuclear waste: “If it’s so safe, why is the nuclear industry protected from legal responsibility in the event of an accident?”112

Local opposition mounted, until the ill-advised plan to build an underground nuclear waste site on the shores of Lake Ontario, half a kilometer from the center of a picturesque town that boasts one of the best-preserved Victorian streetscapes in Canada, was nixed, scuttling for a few more years the federal government’s plan to construct a permanent long-term national nuclear waste site. A paltry $11 million in proposed compensation was clearly not enough to convince anyone that the government had their best interests in mind.

In 2004, elevated levels of radiation were again discovered at another Port Hope school, where gamma radiation and radon gas were detected at 125 times accepted limits (over 500 picocuries). One long-time local activist wondered what the authorities were hiding, stating "They knew this place was dangerous and still let kids in."113 Countering the government dogma about safe levels of low-level radiation, an epidemiologist also found "higher than normal rates of leukemia and childhood deaths, as well as an elevated incidence of brain, lung and colon cancers for certain demographics and time periods."114 Three years later, uranium was found in the urine samples of a group of former nuclear workers concerned about unexplained illnesses. Armed with more data and further study, the fight began anew.

Two recent studies further verified the dangers for uranium workers and those living near fuel-conversion facilities. A 2010 study of almost 18,000 Eldorado workers reported a linearly increasing rate of lung cancer incidence and cancer deaths with radon decay product (RDP) exposure, although a correlation between overall mortality and RDP exposure wasn’t found, attributed to the healthy worker effect.115 The workers were employed by Eldorado from 1932 to 1980, mostly at three main sites: Port Radium (Northwest Territories), Beaverlodge (northern Saskatchewan), and Port Hope’s fuel conversion facility (southern Ontario). Another study in 2013 showed a correlation between women living in the Port Hope area and lung cancer, but this may have been related to the socioeconomic characteristics of the region.116

There is little doubt that the radioactive waste strewn across the town came from the conversion plant, caused by careless Eldorado/Cameco management over a prolonged period. Alas, few agreed on how to account for past actions, pitting property values, tourism, and growth against public health. The small town, which had prospered in the days of a rapidly growing Upper Canada prior to Confederation, was becoming a popular commuter town and weekend tourist destination for the bulging modern capital of Toronto despite its nuclear problems. The issue, as usual, was about money, although when a government is the landlord in a tenant dispute, the public is oddly in opposition to itself.

It’s difficult to say no to industry in a small town, doubly so when that industry is the main employer and in this case backed by the federal government, one that is less than forthcoming with the details of its “strategically important” activities. Add in an ever-present element of secrecy to nuclear power and a stifling amount of jargon, and we have a recipe for fear and suspicion. What exactly is going on and in whose interests does a strategically important crown corporation or converted for-profit publicly traded company act? Why does responsibility stop at a company fence when the company’s activities clearly extend beyond the physical boundary of the plant and include invisible though detectable radiation? Why is the government protecting a private company?

***

Of course, we have to deal with the waste somehow, a seemingly endless game of political hot potato. In 2002, another small town was proposed as the site of Canada's officially proclaimed national deep geologic repository (DGR) after Port Hope and Deep River – located upriver from the site of Canada's first nuclear reactor at Chalk River north of Ottawa – declined. No stranger to the nuclear industry either, Kincardine, Ontario, is a short drive from Bruce Power Station, one of Ontario's three nuclear power-generating facilities and the world's largest fully operational nuclear plant. On the eastern shore of Lake Huron south of the Bruce Peninsula, about 200 km northwest of Toronto, Kincardine had been spared as an earlier choice of "host community" for Canada's permanent nuclear waste site – until the other two towns refused, and until the crown corporation in charge of the project, Ontario Power Generation (OPG), unveiled a plan to store 200,000 m³ of low- and intermediate-level nuclear waste from Ontario's 18 CANDU reactors in a proposed $1 billion storage facility, 680 m belowground in Kincardine. As in Port Hope, the proposal was met with fierce local opposition.

Precipitating numerous Canadian and American objections, the Kincardine DGR debate has raged for more than two decades. Proponents of the plan champion the stability of the underlying low-permeability limestone rock, while opponents counter that building an underground nuclear waste site 1 km from the shores of Lake Huron is crazy, not least because the Great Lakes are the world's largest body of fresh water, providing clean drinking water for almost 40 million Canadians and Americans, and a radioactive leak would have devastating consequences. Even more worrisome, the Great Lakes are only 10,000 years old, and seismically active "pop-ups" still regularly appear on the lake bottom from ongoing plate stress, while the Kincardine DGR is expected to operate as a safe repository for more than one million years. Someone hasn't done the most basic math.

Common sense dictates that water supplies and radioactive waste don’t mix. Dr. Gordon Edwards, co-founder of the Canadian Coalition for Nuclear Responsibility, called the plan “absurd,” noting that “water is the biggest single threat to the safe long-term storage of nuclear waste. Water floods underground mines, corrodes containers, promotes chemical reactions, generates gas pressure, and carries radioactive poisons back into the food chain. Of all the places to dump nuclear wastes, the Great Lakes drainage basin would seem to be one of the very worst.”117 Dr. Frank Greening, a former nuclear scientist and one-time OPG employee, called the plan “idiotic” and “dangerous,” noting that the official estimates of expected radiation are “1,000 times lower.” Resolutions opposing the plan were passed in over 150 communities on both sides of the border, including Chicago and Toronto.

It seems Port Hope, Kincardine, and Deep River were chosen because they already “host” nuclear facilities and would presumably be more accepting. But the site of an existing uranium refinery or nuclear power plant – chosen because of a proximity to water – is a major no-no for nuclear storage. What’s more, some communities may be used to their own nuclear waste, albeit begrudgingly, but are not keen to welcome other people’s radioactive junk, trucked in from afar. In early 2020, the local Saugeen Ojibway Nation, whose land comprises the area of the proposed Bruce Peninsula DGR, rejected OPG’s $150 million proposal to build on their territory.118 A policy of “rolling stewardship” will instead store the retrievable radioactive waste from Ontario’s reactors aboveground on site until an acceptable storage solution is found.

In 2022, the Canadian government announced two more possibilities for a national storage site, one in South Bruce east of Kincardine and the other in Ignace near the Manitoba border in western Ontario. The plan is to store 5.5 million spent fuel rods from reactors in four provinces in a $23 billion, forever nuclear vault. If approved, the site will take four decades to fill, starting sometime after 2040, requiring 30,000 shipments by road and/or rail, that is, two shipments a day, every day, for 40 years.119 Expect the opposition to be ready, especially in South Bruce, the more populous of the two sites and still within the Great Lakes basin.

There are no guarantees either that a DGR will work. In 2014, a "waste isolation pilot plant" (WIPP) in Carlsbad, New Mexico, sprang a leak after a storage container burst, having been mistakenly packed with organic matter (kitty litter!), costing almost $2 billion to clean up when trace amounts of plutonium and americium were found about a half-mile away at aboveground air-quality stations.120 The nearby Carlsbad Caverns should have been an obvious showstopper, famous for their underground chambers formed by the dissolution of limestone by groundwater, another dubious feature for the long-term storage of nuclear waste. In fact, only three DGRs have been constructed so far to store nuclear waste, all of which have failed: two in Germany and the Carlsbad WIPP.

Hoping to buck the odds, the Finnish government is building a DGR at the Olkiluoto Nuclear Power Plant, located on an island off the west coast of Finland, 200 km northwest of Helsinki. Called Onkalo (Finnish for "cavity"), the ambitious plan is to store plutonium-rich spent fuel rods at a depth of 500 m, employing spiral tunnels and a "multi-barrier" system that includes corrosion-resistant copper canisters, a bed of water-absorbing bentonite clay, and continuous infill. At the end of the proposed 200-year lifetime, the entire structure will be sealed for all time ("Nuclear Eternity") or until the next ice age cracks it open. Alas, one Finnish opponent of the €3 billion nuclear tomb stated that the site was not chosen for its geology, but because "Eurajoki was the first municipality to say 'ok, we can take it', and there wasn't an active nuclear opposition in this area."121 Who is to say what will cause the next accident and what damage will result? Unlike coal, oil, and gas, we can't recover as easily from a nuclear mistake. Onkalo is expected to receive its first waste in 2025.

In the UK, the Magnox (magnesium-alloy-clad) and advanced gas-cooled (AGR) reactors are particularly prolific waste producers. In 1990, the chairman of UK National Power noted that "British reactors had produced more waste than all the rest of the nuclear industries combined,"122 yet without a permanent waste storage facility the spent fuel from 15 British nuclear reactors has been stored at Sellafield. Situated beside the world's first commercial nuclear power plant, Sellafield has been called the riskiest nuclear waste site in the world, while a former UK minister of energy revealed they are only starting to "get to grips with the legacy after decades of inaction."123

Located on the Cumbrian coast just west of the Lake District National Park, Sellafield contains a 100-m fuel-waste storage pond from the original UK bomb-making program as well as hundreds of tonnes of other highly radioactive material, while a 21-m-high silo of fuel cladding has been full since 1964. As noted in New Scientist, "The decaying structures are cracking, leaking waste into the soil, and are at risk of explosions from gases created by corrosion."124 Nuclear cleaning isn't cheap either, the Sellafield job alone earmarked at a whopping £80 billion, while the problems persist over finding a permanent storage site. The cost of the UK's most recently proposed long-term DGR has ballooned to over £50 billion, half to be paid by taxpayers, even though Cumbria doesn't want the stuff.125 Short of never having taken the waste in the first place, ensuring no more arrives is as good as it gets. As the saying goes, it's not enough to cook uranium and serve it to the public; one also has to do the dishes.

In northwest Russia, $200 million has already been spent to decommission old Soviet nuclear storage sites. Following years of neglect after the breakup of the USSR, spent fuel from more than 22,000 Soviet-era nuclear submarine canisters stored at the Arctic base of Andreeva Bay, about 50 km northwest of Murmansk, is being reprocessed at Mayak, site of the Soviet Union's original bomb-making program and of the world's first major nuclear accident. The partly damaged fuel elements at the world's largest spent fuel store had been kept in leaky dry storage units after the on-site pools began leaking, and now constitute enough radioactive material to fill 400 40-ton containers, equivalent to 100 reactor cores. A former naval officer who has monitored the site for years commented on the lax policy: "I've been all over the world to pretty much every country that uses nuclear power and I've never seen anything so awful before. With nuclear material, everything should be done very carefully, and here they just took the material and threw it into an even more dangerous situation."126 The fuel will now be transported by ship from Andreeva Bay to nearby Murmansk and then by train to Mayak across almost half of Russia.

As bad as nuclear waste storage sites are in the former Soviet Union, and as hard as it is to agree on the means or even a location to store nuclear waste in Canada or the UK, the United States hasn't solved its problem either of where to put long-term waste. The nuclear NIMBY battle rages on in the heartland of atomic power, a hot potato hotter than the waste. As noted by the World Nuclear Association:

The question of how to store and eventually dispose of high-level nuclear waste has been the subject of policy debate in the USA for several decades and is still unresolved. As well as civil high-level wastes (essentially all US used fuel plus research reactor used fuel of US origin) there is a significant amount of military high-level radioactive waste which Congress intends to share the same geological repository. Naval used fuel is stored at the Idaho National Laboratory.

Since the beginning of the commercial use of nuclear power in the USA, used fuel assemblies have been stored under water in pools (and later in dry casks as well) at reactor sites, and remained the responsibility of the plant owners. The prohibition of used fuel reprocessing in 1977, combined with the continued accumulation, brought the question of permanent underground disposal to the forefront.127

Originally proposed as a centralized, high-level radioactive waste site in 1987, Nevada’s Yucca Mountain was meant to put the US nuclear waste disposal issue out of sight for good if not out of mind, but the issue has flip-flopped for decades. Harry Reid, the former Democratic Senate leader, was against building an underground nuclear vault in his home state, calling the plan the “Screw Nevada Bill,” while 80% of Nevadans, including local Native American tribes, oppose the idea. Nevadans were especially annoyed that Nevada doesn’t even have a nuclear power plant and had previously been used for weapons testing without consultation.

What’s more, only 12 of the 99 reactors in the USA are west of the Mississippi, making Nevada a poor logistical choice on distance alone, never mind that Nevada is the fourth most geologically active state. Without sufficient political push (that is, a large compensation/bribe), nobody wants the waste, estimated at over 90,000 tons and growing by 2,200 tons a year (a 10-yard deep football field). Of course, if the point of building a storage site in the desert is that fewer people will be affected in the event of an accident, it doesn’t hurt that there are fewer people to object.

Concerned about the stability of the seismically and volcanically active region as well as the oxidizing environment, Allison Macfarlane, a former NRC chairman (2012–2014), stated that Yucca is unsuitable.128 Another former NRC chairman (2009–2012), Gregory Jaczko, believes the nuclear industry should clean up its own mess rather than forcing taxpayers to foot the bill. “No other industry is able to complain so loudly that someone else has failed to take care of its waste,” he wrote in Confessions of a Rogue Nuclear Regulator.129 Not surprisingly, the Yucca Mountain proposal was nixed by Barack Obama in 2011, before being restarted by Donald Trump in 2017. One expects the flip-flopping to continue as the world’s #1 nuclear waster continues to dodge an issue it has been avoiding since 1945.

Another highly contentious issue is transportation, often glossed over by top-down system builders who think that waste disposal can be solved with a wish and a pen, where highly radioactive material is hauled by train or truck across thousands of miles through unprotected towns and countryside. Known as “glow” trains and “glow” trucks, some opponents call the transport routes “Fukushima Freeway.”

More than 60 years since the first civilian reactors came online, high-level radioactive waste (HLRW) from spent fuel and decommissioned bomb material represents the most significant problem for the nuclear industry, never mind thousands of tons of low-level radioactive waste (LLRW) from mill tailings (including acids, nitrates, fluorides, arsenic, and other chemical poisons) that must constantly be monitored for leaching, leaks, elemental disruption, and even burrowing animals. So, what to do with the waste, some of which remains radioactive for billions of years?

Common sense dictates a dump should not be built near a high water table, public waterway (for example, the Great Lakes!), unstable or eroding ground, earthquake zone, permeable soil, or agricultural land, and should be protected from the elements and outside disruption. Long-term tunnel stability (or other means of access) must be assured, while future needs (and costs) must be included in any design; 24/7 monitoring is imperative, which given the extraordinarily long half-lives of much of the nuclear waste is essentially permanent. For now, the best solution is to leave the waste on site until a better solution comes along – not much consolation, but better than doing something really stupid.

Without solving the waste issue, one wonders how nuclear power can ever be a viable energy solution. As noted in the Flowers Report, a 1976 UK commission on environmental pollution: “There should be no commitment to a large programme of nuclear fission power until it has been demonstrated beyond reasonable doubt that a method exists to ensure the safe containment of long-lived, highly radioactive waste for the indefinite future.”130 Clearly, we aren’t learning from our mistakes. In the rush to install peaceful nuclear power and make amends for the horrors of Hiroshima and Nagasaki, nuclear waste is still an afterthought, saddling future generations with even more worry.

At least we no longer dump radioactive waste into the ocean as in the early days of nuclear power, while some HLRW can be reprocessed by reburning in a reactor. In the aftermath of the Cold War, 250 tons of diluted Soviet bomb-grade material (about 10,000 warheads) were sold to the USA for recycling, which helped power "the lights of Boston."131 Under a 1993 non-proliferation agreement between the US and Russia, the highly enriched uranium (HEU) from as many as 20,000 warheads was reprocessed, providing half of all uranium fuel used in American reactors until 2013. Although the novel "Megatons to Megawatts" program has since been discontinued, "downblended" HEU weapons material and other spent fuel can still be reprocessed to reduce the ongoing accumulation of high-level nuclear waste. American weapons were also reprocessed, providing 5% of US nuclear power. Without other solutions, it makes sense at least to burn the most dangerous uranium material in a reactor.
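Downblending itself is a simple mass balance: highly enriched uranium is mixed with a low-enrichment blendstock until the mixture reaches reactor-grade levels. A sketch of the arithmetic, with illustrative enrichment values (weapons-grade roughly 90% U-235, blendstock roughly 1.5%, reactor fuel roughly 4.4%); these are typical published figures, not numbers taken from this chapter:

```python
# Downblending mass balance: mix HEU with low-enrichment blendstock so that
# the U-235 fraction of the mixture hits a reactor-grade target.
# Enrichment values are illustrative, not figures from this chapter.
def blendstock_needed(heu_tons, heu_frac=0.90, blend_frac=0.015, target_frac=0.044):
    """Tons of blendstock required to dilute `heu_tons` of HEU to the target."""
    return heu_tons * (heu_frac - target_frac) / (target_frac - blend_frac)

heu = 250  # tons of bomb-grade material, the figure quoted in the text
blend = blendstock_needed(heu)
print(f"{heu} t HEU + {blend:,.0f} t blendstock -> {heu + blend:,.0f} t reactor fuel")

# One ton of ~90% HEU yields roughly 30 tons of ~4.4% low-enriched fuel.
```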

Of course, by continuing to kick the can down the road, the problems only worsen as more reactors are decommissioned with no place to go. What's more, there is little political will to demand that the nuclear industry pay for long-term storage, and thus the public must broker the cost and responsibility of an endless ongoing cleanup, the state providing cover for an industry that hasn't worked out the basics of its own business. In the United States, the 1982 Nuclear Waste Policy Act (NWPA) assigns responsibility for radioactive waste to the federal government, essentially a free license for the nuclear industry to continue polluting at will, while the 1957 Price-Anderson Act also limits liability to $700 million, cementing a cozy, decades-long, government backing for ineptitude.

In Port Hope, after decades of living with the unseen dangers of radioactive waste, the Canadian government at least decided to remove the mess created by the former crown corporation Eldorado Nuclear Limited cum publicly traded Cameco. Set up in 2001, the Port Hope Area Initiative (PHAI) oversees the removal of low-level radioactive waste to a long-term, aboveground, storage site, 10 km north of the town (already in operation for Cameco waste). There is no guarantee of future safety, but first removals began in 2018. The $1.28 billion project aims to remove 2.5 million cubic meters of contaminated material such as uranium, thorium, radium, and arsenic, while 5,000 homes will be tested. That works out to $75,000 per person, a high price to clean up what was previously deemed safe by government officials, although no one is quite sure the damage can ever be undone. Some doubters wryly recommended giving the money to each resident instead, concerned about suspect government sampling methods and dodgy analyses.

After the last truckload, possibly within the next decade after a series of administrative delays in part because of an absence of local testing facilities, the town hopes to move on from almost a century of “radiating friendliness.” At least for Port Hopers, the problem will lie under someone else’s rug. Hopefully, they do a better job. Nonetheless, with one of the world’s largest uranium refineries still dominating its lakefront and a nearby waste storage site still a concern, Port Hope will continue to endure its long nuclear legacy. The Canadian government may have finally seen the error of letting an unbridled nuclear industry have its way, but one doesn’t have to try hard to come up with a snazzy new PR slogan to gloss over the past – “Scenic Port Hope: still glowing after all these years.”

Nuclear power incites passionate followers for and against. The standard debate is “jobs versus the environment” as with other industries, although nuclear power is more worrisome because of the uncertain long-term effects of unseen and unknown amounts of radiation, the public unable to gauge their own safety given the secrecy and jargon. One must also wonder how a government can act contrary to the interests of its own citizens by denying basic safety information. Nuclear secrecy should never apply to health. At the very least, poorly regulated, for-profit companies should be excluded from deciding the public interest.

There is only one question about radioactive waste – would you want it in your backyard? Few of us would say yes, although in the absence of an honest public broker one must become better informed about the risks and learn as much as possible about the potential harm. The Earth is not infinite nor a dump to compensate our negligence. Nor is nuclear power “too cheap to meter” as originally advertised. It’s well past time to re-evaluate an industry PR that for over a century has held the public captive to the mysteries of the atom.

If not for the waste, nuclear power could well be the answer to our energy needs, but the ongoing waste problems haven't been solved 75 years on. Even more worrisome, nuclear risk is always skewed: there may not be as many accidents as with oil and gas, but when they happen they are horrific, as we examine next.

3.7 Nuclear Safety Systems: Can We Ever Be 100% Sure?

More worrisome than radioactive waste is the potential for an accident, which for nuclear power is more dangerous than any other power-generating technology. Since the start of the Atomic Age, when Fermi’s CP-1 pile first went critical on December 2, 1942, three dates have become stamped in the annals of nuclear catastrophe: March 28, 1979 (Three Mile Island partial meltdown), April 26, 1986 (Chernobyl explosion), and March 11, 2011 (Fukushima multiple explosions and triple meltdown). Anyone who lived near Three Mile Island (TMI) in southern Pennsylvania, Chernobyl in northern Ukraine, or Fukushima Prefecture in the Tōhoku region of Honshu about 200 km north of Tokyo can tell you what they were doing the moment they learned of the news and were forced to evacuate.

To standardize the reporting of a so-called nuclear "incident" or "event," the International Atomic Energy Agency created the International Nuclear Event Scale (INES) in 1990, which ranks radioactive releases from 0 ("no safety significance") to 7 ("major accident"). In the first 70 years of civilian nuclear power, 20 facilities have reported at least a "serious incident" (INES-3), with another 15 "incidents" (INES-2) or "anomalies" (INES-1). To date, the two most serious events are both labeled 7s – "Major Release: Widespread health and environmental effects" – the first at Chernobyl, Ukraine, in 1986 (Reactor No. 4 fuel meltdown and fire) and the second at Fukushima, Japan, in 2011 (Daiichi 1, 2, 3 fuel damage and radiation release), both precipitating mass evacuations of the local population.

The world's first "serious nuclear accident" (INES-6) occurred in 1957 at Mayak, Russia, when a waste storage tank at a reprocessing plant exploded ("Significant Release: Full implementation of local emergency plans"). There have also been four 5s ("Limited Release: Partial implementation of local emergency plans or severe damage to reactor core or to radiological barriers"): at Chalk River, Canada, in 1952 (core damage); Windscale, UK, in 1957 (fire); Three Mile Island, USA, in 1979 (fuel melting); and Goiânia, Brazil, in 1987 (hospital theft). Given the nuclear industry's track record, one should expect more.

It’s not easy to grade every event on a simple scale. Richard Wakeford, editor of the Journal of Radiological Protection, notes that Mayak should be a 7 and Windscale a 6 if the latest estimates of polonium-210 release are included.132 Nonetheless, INES helps put the damage into perspective. The causes of the most serious events include fuel meltdown and fire, fuel damage, radiation release and evacuation, reprocessing plant criticality (onsite nuclear fission), graphite overheating, criticality in fuel plant for an experimental reactor, fuel pond overheating, cooling interruption, turbine fire, and severe corrosion.133 Table 3.5 lists the location and date of an event labeled at least a serious incident at a nuclear facility, not counting unreported incidents in Russia and elsewhere (note some suffered multiple events).

Table 3.5 International Nuclear Event Scale (INES) of at least a “serious incident”

Level | Event category | Location (year)
7 | Major accident | Chernobyl (1986), Fukushima (2011)
6 | Serious accident | Mayak (1957)
5 | Accident with wider consequences | Chalk River (1952), Windscale (1957), Three Mile Island (1979), Goiânia (1987)
4 | Accident with local consequences | Sellafield (5 from 1955 to 1979), Idaho Falls (1961), Saint-Laurent (1969, 1980), Lucens (1969), Jaslovské Bohunice (1977), Andreev Bay (1982), Buenos Aires (1983), Tōkai (1999), Mayapuri (2010)
3 | Serious incident | Vandellòs (1989), Oak Harbor (2002), Paks (2003), Sellafield (2005)

We aren't always privy either to all the facts regarding radioactive release, alarmingly so after a nuclear accident. At the Mayak nuclear complex in the Russian province of Chelyabinsk, in the closed town of Ozersk, scene of the world's first significant nuclear accident, a spent fuel reprocessing plant suffered a massive waste-tank explosion, producing a large release of radioactivity and full implementation of local emergency plans, yet the accident remained unknown for decades in the West because of Cold War secrecy. We know now that the cooling failed in a storage tank at the joint military–civilian nuclear installation where Soviet weapons-grade plutonium was produced, resulting in a chemical explosion and a 74 × 10¹⁵ Bq (74 PBq) release of radionuclides over a 300 km × 50 km area of the Southern Urals. Now called the East Urals Radioactive Trace (EURT), access to the area is restricted.

The event is known as the "Kyshtym Accident" because there was no official town in the area (the site was labeled "Chelyabinsk-65" for administrative reasons). Some 5,000 workers received more than 1 Sv in the first few hours, more than 250,000 area residents received doses above the recommended annual limit, and almost 11,000 people were evacuated134 (recall that 1 Sv equates to a 5.5% increased chance of cancer). Although suspicions of a nuclear accident behind the Iron Curtain filtered into the West, details weren't officially reported until 32 years later, in 1989.

After the accident, radioactive material poured into the Urals, the Techa River, and Lake Karachay. Considered the most radioactive body of water on Earth, Lake Karachay had been used as a nuclear waste dump since the start of the Soviet-era weapons program, but was eventually filled in with concrete blocks and completely covered in rock and soil in 2015, becoming Russia's de-facto permanent nuclear waste storage facility. No need for decades of debate on aboveground or belowground waste sites in autocratic Russia, or over which unlucky town gets to host the stuff. Although the Russian state nuclear company Rosatom announced that radiation levels had decreased around Karachay, a 1,000-times spike in ruthenium-106 levels was detected in central Europe in 2017, believed to have come from the Mayak nuclear complex. Russia claimed the increased levels came from the battery of a satellite burning up in the atmosphere.

After the Mayak accident (as we now know), another significant accident occurred only a few weeks later on October 10, 1957, when a graphite-moderated reactor core caught fire at the Windscale plant on the Cumbrian coast in the UK, 100 km northwest of Liverpool. The now-graded INES-5 event was denied at first by the British government, who were keen to sign a nuclear weapons technology sharing agreement with the USA.

Operational since 1952, the two-pile plant was built to produce plutonium and tritium for the UK’s post-war, nuclear bomb program and was partly designed by John Cockcroft, Britain’s original atom smasher. No. 1 Pile was graphite-moderated and air-cooled, the air expelled up a tall chimney and filtered by “Cockcroft’s Folly,” retroactively added to stop the release of radioactive debris in the event of a fire. Unfortunately, in a rush to join the USA in the exclusive H-bomb club before a presumed worldwide nuclear weapons test ban, production of plutonium and tritium at Windscale was increased beyond safe operating levels. As a result, an isotope cartridge caught fire in one of the reactor channels, which spread through the graphite-moderated reactor after the blaze was fanned by increasing the air coolant to the core. Radioactivity poured out over the Cumbrian countryside and the Irish Sea for three days, contaminating 200 square miles of farmland, the fire eventually extinguished by shutting off the air “coolant.” Thanks to Cockcroft’s ad-hoc safety fix, Cumbria is not now a no-go nuclear exclusion area like Fukushima, Chernobyl, and EURT, although the Irish Sea is the world’s most radioactivity-filled waterway after Fukushima.

Designed to produce utility electricity and lead the transition to peaceful nuclear power, the world's first civilian nuclear power plant, Calder Hall, was built beside Windscale and opened in 1956 by Queen Elizabeth, although the site continued to manufacture nuclear weapons material. Renamed Sellafield in 1981, the Windscale and Calder Hall site still produces plutonium for the UK nuclear weapons program, but also reprocesses spent fuel from 15 British nuclear reactors as well as from other countries. In 2005, more than 26 kg of plutonium went missing (enough for seven nuclear bombs), the UK Atomic Energy Authority claiming an auditing error in the accounts. As we saw earlier, no permanent storage site has yet been agreed to handle Britain's long-term waste, which includes over 50 kg of plutonium, the cleanup costs alone estimated at around £80 billion and counting while the NIMBY haggling continues.

The next most significant nuclear accident was at TMI in Pennsylvania, located on the Susquehanna River about 10 miles southeast of the state capital, Harrisburg. TMI was the first accident to focus the American public on the dangers of nuclear power, caused by a combination of design flaws and human errors. At 4:37 a.m. on March 28, 1979, the cooling water system pump at the 900-MW TMI-2 PWR failed, and because the water supply for the three backup pumps had been shut off during a routine maintenance operation 2 weeks earlier and never turned back on, the reactor heat couldn't be removed. The temperature and pressure in Reactor 2 immediately began to rise, triggering an automatic shutdown in which all the control rods are inserted into the reactor, called a "scram" for "safety control rod actuator mechanism," or as some have conjectured, "get the hell out."

Without any core coolant to remove the heat, the pressure exceeded 2,250 psi, opening a pressure-relief valve that then failed to close, lowering the pressure below its design load of 1,600 psi, which automatically triggered another backup measure, the emergency core coolant system or ECCS (a high-pressure spray injection). Thinking that the original backup coolant was working, however, as shown on an incorrect water-level gauge, the TMI-2 operators turned off the ECCS to avoid flooding. Nothing was cooling the reactor, and so the core overheated, the zirconium-clad fuel rods ruptured, and the uranium pellets melted, indicating a temperature of at least 5,100°F. Almost half the core melted, and a complete core meltdown was averted "only by a last-minute rush of cooling water."135 Alas, during the cooling rush the overheated zirconium cracked and reacted with water and steam, creating a dangerous "hydrogen bubble" that fortunately didn't explode, although radioactive water flooded everywhere – 400,000 gallons through the stuck relief valve – and radioactive gas vented into the air.

Essentially, four design flaws and two human errors (one from mistaken information) accounted for the worst nuclear accident in American history, completely destroying the almost-new TMI-2 reactor and causing a minor panic as roughly 150,000 people voluntarily evacuated after pregnant women and young children in the area were advised to leave. Although the radiation release was considered minimal, the damage could have been much worse.

The analysis of the TMI event showed that some safety systems worked as designed while others did not, the operators unfortunately acting on wrong information to tackle the emergency. Almost four decades after the Italian navigator had landed in the new world, running a nuclear reactor was not as easy as turning a car right or left, as Fermi had first boasted. Released just 2 weeks before the TMI partial meltdown, the controversial movie The China Syndrome (starring Jack Lemmon as a nervous nuclear operator and Jane Fonda as the dogged TV news reporter in search of a story) heightened public worry about a potentially disastrous nuclear accident. Although "China Syndrome" incorrectly refers to the direction and distance melted reactor fuel will flow – China is not geographically opposite the USA, and melted core material will eventually solidify about 6 feet below ground – liquefied uranium in a reactor spells serious trouble.

***

Underplaying the severity of previous events and the chances of more, a well-financed nuclear industry wants the public to think everything is safe, green, and economical. But nuclear-generated electricity is not as foolproof as the PR claims, as we know now from the two most dangerous nuclear events yet, each labeled a 7 on INES. Both Chernobyl and Fukushima continue to spread radioactivity across the world and remain no-go exclusion zones to hundreds of thousands of former residents, their ghost towns permanently cut off from human habitation. Call it Nuclear Murphy: when things go wrong, they go very wrong.

Chernobyl remains the world's worst nuclear accident to date, occurring in the then Soviet republic of Ukraine at a plant located on the Pripyat River about 100 km north of the capital Kyiv and 5 km south of the Belarus border.136 Just after midnight on April 26, 1986, during what was intended to be a simple safety test, Chernobyl Reactor No. 4 – an 800-MW, graphite-moderated, light-water-cooled RBMK reactor – was destroyed by a steam explosion that blew the lid off the inner biological shield, followed a few seconds later by a more powerful hydrogen explosion that breached the reactor building. The hydrogen blast, which to those in the control room felt like an earthquake, threw parts of the fuel and blocks of graphite moderator across the whole of the plant and onto the roof of the neighboring Reactor 3. The resulting graphite blaze killed 31 workers and firefighters within weeks, followed by thousands more in the radioactive aftermath.

Prior to the breakup of the Soviet Union, Cold War industrial competition was fierce and nuclear methods not readily shared, including test data and LOCA simulations. Today, we know that the world’s worst-ever nuclear accident was caused by a series of horrendous errors during an ill-advised turbine test to determine the lowest power level at which the reactor could restart after a shutdown, ostensibly to measure the safety systems in the event of a real sudden loss of power! Such a test – to see if a spinning down turbine could provide electricity to the emergency backup systems (for example, the ECCS) in the event of a sudden loss of external power – would have been deemed far too dangerous to try in the West. Step after step at Chernobyl was either a stupid mistake or a violation of legal operating procedures, even in the USSR where pushy party bosses routinely stressed production over safety. One has to wonder what was going on and who was in charge.

After the test started going haywire, all the control rods were inexplicably removed from the core, causing a massive upsurge in power to almost 500 times the design rating (384 GW/0.8 GW), instantly vaporizing the twice-normal volume of coolant water (called flashing) and disintegrating the zirconium-clad fuel rods, which then created a hydrogen bubble from oxidized water (as at TMI, when water reacts with overheated zirconium). That, in addition to the pure carbon graphite reacting with steam to make more hydrogen and carbon dioxide gas, literally blew the 2,200-ton upper biological shield off the top of the reactor, venting the reactor contents to the atmosphere and ejecting flaming reactor parts all over the reactor complex.137 The hydrogen explosion shot radioactive material 2 km high as clouds of radioactive gases and debris expanded outwards up to 9 km away, while 100% of the Xe-133, 65% of the I-131, and 40% of the Cs-137 inventory within the core were released into the atmosphere.138,139

Fueled by 1,700 tons of burning graphite, the resulting chemical fire lasted 8 days, releasing more volatile radionuclides across much of Europe and the northern hemisphere. Worries about the state of the reactor core continued after the initial explosions as workers in hovering helicopters dropped thousands of tons of boron- and lead-laden sand and clay over the broken reactor building, valiantly trying to squelch the still-fissioning uranium fuel. Some wondered whether another blast would "release radioactive clouds large enough to make a good part of Europe uninhabitable."140 If the overheating core had melted through the reactor's foundations, another real possibility was the radioactive poisoning of the local water table, the nearby Dnieper River (Ukraine's major waterway), and even the world's oceans. Upon entering the water supply, the melted fuel would also have created radioactive steam that would have been impossible to contain and would have been released to the atmosphere.

Two days after the disaster, elevated radiation levels were detected over 1,000 km away at the Forsmark nuclear power plant north of Stockholm, triggering worries about a local radiation release before attention turned to the Soviet Union, from where the winds were blowing. High levels of radioactive fallout were eventually measured in the USA, North Africa, and even in Hiroshima over 8,000 km away. Although Ukraine and the neighboring countries of Belarus and Russia were most affected, more than half of Chernobyl's volatile inventory landed elsewhere. Adding to the severity of the radioactive release, the RBMK design had no outer containment building; such a structure would likely have been breached anyway given the severity of the explosion, although the overall release would have been greatly reduced and perhaps more easily staunched.

In the immediate aftermath, iodine-131 was the main concern (t½ = 8 days) – readily absorbed by leafy vegetables and grass eaten by farm animals – followed by cesium-134 (t½ = 2 years) and cesium-137 (t½ = 30 years). Two-fifths of the surface area of Europe was contaminated above 4,000 Bq/m² and 2.3% above 40,000 Bq/m², ten times that level (the EU limit for cesium-137 in dairy foods is 600 Bq/kg).141 According to the World Health Organization, the total radioactivity emitted was an astounding 200 times more than the atomic bomb blasts on Hiroshima and Nagasaki combined.142 Recent numbers put the release at 50 million curies, "the equivalent of 500 Hiroshima bombs."143
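The half-lives explain the shifting concern: activity falls by half every t½, so I-131 is an intense but short-lived threat while Cs-137 lingers for decades. A quick sketch using the half-lives quoted above:

```python
# Fraction of a radionuclide's initial activity remaining after time t:
# remaining = 0.5 ** (t / half_life).  Half-lives as quoted in the text.
half_lives_days = {"I-131": 8, "Cs-134": 2 * 365, "Cs-137": 30 * 365}

def fraction_remaining(isotope, days):
    return 0.5 ** (days / half_lives_days[isotope])

for isotope in half_lives_days:
    print(f"{isotope}: {fraction_remaining(isotope, 90):.1%} left after 3 months, "
          f"{fraction_remaining(isotope, 30 * 365):.1%} after 30 years")

# I-131 is essentially gone within months (90/8 is about 11 half-lives),
# whereas half of the Cs-137 released in 1986 was still present in 2016.
```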

After the initial accident, it took 36 hours to start evacuations from the nearby city of Pripyat, just northwest of the reactor complex (one of nine "atomgrads" or atomic cities built across the Soviet Union to house nuclear workers and their families). All 50,000 residents were eventually evacuated within 2 days in a series of grim bus runs, while every building in nearby Kyiv – the Ukrainian capital 100 km to the south – was hosed down for at least a month after the accident. Roughly 2,500 km² around the reactor remains contaminated (an area the size of Luxembourg). The former closed nuclear workers' city of Pripyat is now a ghost town as is the city of Chernobyl 15 km southeast of the reactor complex, both lying completely within the 30-km exclusion zone, officially called the Chernobyl Nuclear Power Plant Zone of Alienation from where 350,000 people in both Ukraine and Belarus have been displaced, never to return.144

Thirty years later, in an attempt to stem the dispersal of radiation from the still highly radioactive broken core, a massive, $1.8 billion steel arch "sarcophagus" – the largest movable metal structure in the world – was inched into place atop the wrecked reactor and its hastily assembled, crumbling original steel-and-concrete shelter. In Chernobyl: History of a Tragedy, the Ukrainian-born Harvard University history professor Serhii Plokhy called the high-tech shelter "a warning to societies that put military or economic objectives above environmental and health concerns."145 Despite reducing the still-active radiation by 90%, the surrounding area will be uninhabitable for at least another 20,000 years. Initially estimated at $10 billion, financial losses from the accident have now reached more than $200 billion. More than two decades after the accident, the neighboring country of Belarus directly to the north was still spending $1 million per day to deal with the consequences.

The number of deaths related to Chernobyl is widely disputed, as one might expect, ranging anywhere from 4,000 to 90,000 in the general literature, while a 40% increase in cancer was observed in Belarus up to 20 years later.146 Russian biologist Alexey Yablokov estimates that one million people may have died, stating there is no reason to neglect "the consequences of radioactive contamination in other countries, which received more than 50% of the Chernobyl radionuclides."147 According to numbers released by the Ukrainian government after the breakup of the Soviet Union, 3.3 million Ukrainians were categorized as "sufferers," while 38,000 km² or 5% of the country was contaminated; in Belarus, more than 44,000 km² was "severely contaminated, accounting for 23% of the republic's territory and 19% of its population."148

On the thirtieth anniversary of the accident, Kim Hjelmgaard documented the plight of those who still suffer from the ongoing effects in a series of USA Today articles. He recounts the stories of exiles who worried about the effects on their children and grandchildren, the difficult times for those who live in the economically depressed areas of neighboring Belarus (often overlooked in the disaster), and the condition of the 800,000 so-called "liquidators" and "biorobots" between the ages of 18 and 22 – brave Soviet citizens who worked to contain the initial damage – 160,000 of whom may have died for their bravery before reaching their fortieth birthdays. A grim reality of life in the aftermath of nuclear disaster was still very much evident 30 years on, including almost half a million children born after the accident, who have "respiratory, digestive, musculoskeletal, eye diseases, blood diseases, cancer, congenital malformations, genetic abnormalities, trauma." Chillingly, as Hjelmgaard noted, "There are 2,397,863 people registered with Ukraine's health ministry to receive ongoing Chernobyl-related health care. Of these, 453,391 are children."149

Today, feral animals roam freely in the exclusion zone surrounding Chernobyl, while a few brave souls, known as "Samosely" (self-settlers), have unofficially returned to their abandoned homes to live out the end of their lives. Officially, no one is allowed back in. A "red forest" grows unimpeded by human activity throughout "the Zone," the most radioactive on earth. Continuing radiation damage is not the only worry: there is also the threat of radioactive forest fires spewing more radiation across the open skies and of genetically mutated migrating animal species spreading across Europe. During the 2022 invasion of Ukraine, Russian soldiers were also exposed to unknown amounts of radiation after digging trenches in the evacuation zone without any protective gear.

Although the Chernobyl reactor site is a complete write-off and nearby Pripyat a permanent ghost town, one can take a guided tour within the Zone arranged as a day trip from Kyiv to Chernobyl and back. On the site of the world's worst nuclear disaster, you can poke your nose around various deserted sites – such as the Reactor No. 4 control room, the Pripyat hospital, and the iconic Ferris wheel in the eerily abandoned local amusement park – learn about the science of the accident, and see the still unchanged Soviet-era administrative setup frozen in time. Access to some areas is restricted, especially near the reactor, and visitors must wear a gamma-ray dosimeter to monitor radiation levels inside the exclusion zone. Eating and drinking in the open air, touching things, gathering plants, sitting on the ground, or setting personal items down are all strictly forbidden in this uniquely modern theme park.

Serhii Plokhy described his experiences on one such tour that prompted him to write Chernobyl: History of a Tragedy, where he compares Chernobyl to a modern-day Pompeii, noting, “It was not the heat or magma of a volcano that claimed and stopped life there, but invisible particles of radiation, which drove out the inhabitants but spared most of the vegetation.”150 He also cites the April 26, 1986, accident at Chernobyl as the beginning of the end of the Soviet Union, a “technological disaster that brought down not only the Soviet nuclear industry but the Soviet system as a whole.”151 Glasnost (openness) was a direct result of a concerned public wanting to know more, followed by perestroika (restructuring) and the fall of the Berlin Wall three and a half years later. One shouldn’t have to suffer a nuclear disaster to see the end of authoritarianism. Nor should anyone be permanently excluded from their home (Figure 3.6).

Figure 3.6 Nuclear accident exclusion zones: (a) Chernobyl and (b) Fukushima.

3.8 Fukushima: The Last Nuclear Disaster or Portent of More?

Although the chilling effects of Chernobyl are still with us more than three decades on and much more is now known about what happened, the horrifying aftermath of a triple meltdown on the east coast of Japan in early 2011 is still unknown and potentially more dangerous. Located in the Fukushima Prefecture about 200 km north of Tokyo and 100 km south of Sendai, the Daiichi Nuclear Power Plant comprised six General Electric Mark I boiling-water reactors with a total electrical generating power of 4.7 GW. Units 1, 2, and 3 were all in operation on March 11, 2011, while units 4, 5, and 6 had been shut down for maintenance when a magnitude-9.0 earthquake struck in the Pacific Ocean, centered 80 km east of Sendai, the largest in Japanese history.

Reactors 1, 2, and 3 immediately "scrammed" as designed after the earthquake, the back-up diesel generators kicking in to continue powering the cooling pumps to keep the cores from overheating. Back-up power also maintained the cooling to the spent fuel pool in Unit 4 (located above the reactor), where the entire core inventory of 548 fuel assemblies had been temporarily stored. The Daiichi plant was completely swamped, however, when a devastating 10–15-m high tsunami hit Japan's east coast within the hour. Seawater flooded everywhere, knocking out all of the back-up diesel generators that powered the essential cooling to the three active cores and four spent fuel pools in Units 1–4.

As at TMI and Chernobyl, when cooling is lost the fuel starts melting, generating hydrogen gas as the zirconium-clad fuel rods overheat and react with steam, risking an explosion and a radioactive release. Twenty-four hours after the tsunami, Reactor 1 blew as hundreds of workers valiantly fought to pour water over the core to avoid a full meltdown. Two days later, Reactor 3 blew, followed another day later by Reactor 2. Reactor 3 was especially dangerous as it had recently been loaded with plutonium-bearing mixed-oxide (MOX) fuel.

All three cores in Reactors 1, 2, and 3 eventually completely melted, producing lava-like fuel-containing material (a.k.a. corium), a hot, radioactive soup of melted fuel rods and cladding. Without any cooling to remove the lingering decay heat, the spent fuel in Unit 4 also caught fire, producing dose rates of 0.4 Sv/h as radiation escaped to the atmosphere. Units 5 and 6, which had been built further inland on higher ground, both survived mostly unscathed.

The fuel meltdowns in Daiichi 1, 2, and 3 and the spent fuel pool fire in Daiichi 4 are together now considered one of the worst nuclear accidents ever, collateral damage of a Pacific Ocean tsunami that pounded the east coast of Japan for over 10 minutes and saw almost 20,000 people die amid massive flooding. Although the three reactors successfully shut down as designed after the earthquake, without any back-up power in the aftermath of the resulting tsunami, the fuel overheated and all three reactor cores melted, sending plumes of radioactive material into the surrounding countryside and radioactivity-laced water into the Pacific Ocean.

The overall amount of fission-product inventory released to the atmosphere was estimated at about 10% that of Chernobyl, which had no outer containment building, releasing almost 8 tons (5%) of its core inventory primarily during the initial explosions. According to the official IAEA report, the upper estimate of radioactive releases from Fukushima was 25% of the iodine-131 and 45% of the cesium-137 compared to Chernobyl, with minimal release of the medium-volatility isotopes Sr-90, Ru-103, and Ba-140, although radioactive noble gases were about twice as high.152 Proximity to the ocean, however, clearly made the ongoing dispersal of radioactive material into the groundwater and marine environment much worse, especially because the location of the corium was unknown, a nightmare scenario that could devastatingly contaminate both local and international waters.

Within 2 weeks, 160,000 residents were forced to evacuate from a 20-km radius exclusion zone, soon after expanded to 30 km, while as many as 300,000 were evacuated from Fukushima Prefecture, 200,000 of whom were not able to return to their homes more than 5 years later. A decade after the accident, two-thirds of evacuees had no plan to come back. In nearby Namie, where 80% of the town is still restricted, only 1,500 of 21,000 former residents have returned.153 At the time of the accident, the evacuation of Tokyo with a population of 37 million people had been considered a real possibility.

If Fukushima tells us anything, it is that we cannot design against a loss-of-coolant accident, even for a reactor built on the shores of an ocean (although saltwater is too corrosive to use in regular operation). Even worse, the reactor buildings were inexplicably built belowground to protect against an earthquake, which, coupled with an insufficiently high sea barrier, made for a grossly inadequate tsunami defense. What's more, without an independent back-up power supply located on higher ground, as in international best practice, the Daiichi plant was helpless to cool the enormous reaction heat in its reactors and spent fuel pools after the onsite power failed. Unfortunately, the reality of nuclear power is that we can't design against all eventualities – some obvious only after the fact – yet doing so is essential given the potential for a major radioactive release and long-term evacuation of those unlucky enough to live "up-weather" within an indeterminate range of blowing winds and air currents.

Mistakes happen with any technology – inserting a control rod upside down, forgetting a welding rig inside a pressure vessel, or overlooking simple everyday corrosion, rust, and leaks – never mind building a nuclear power plant below sea level on the coast in an active tectonic region known to be susceptible to tsunamis. Giant waves are an integral part of Japanese culture, best exemplified by Katsushika Hokusai's famous woodblock print "The Great Wave off Kanagawa," which gracefully shows the power of the sea engulfing three boats full of fishermen while dwarfing the country's tallest peak, Mount Fuji, in the background. One must wonder about basic planning principles that overlooked the inevitable. The resultant clean up in and around Fukushima could cost more than $500 billion and take over four decades to complete.

As Tokyo Electric Power Company (TEPCO), owner/operator of the Fukushima Daiichi plant, began to assess the damage to the reactors and melted fuel, Arnie Gundersen, a former nuclear engineer turned whistleblower, listed the most serious problems: (1) the below sea-level reactor basements, where leaking radioactive material continues to mix with groundwater flowing directly into the ocean; (2) leakage from an aboveground onsite "tank farm," where radioactive water was being stored during the ongoing clean up; and (3) the potential for much wider devastation in the event of another earthquake. The radioactive water storage tank farm would eventually exceed 1.3 million tons in 1,000 tanks, while up to 400 tons per day of contaminated groundwater had been leaking into the ocean before an "ice wall" built by TEPCO reduced the flow by half. Gundersen also recommended a zeolite wall to keep the groundwater from entering the plant and to absorb radioactive cesium.

A decade after the accident, TEPCO announced plans to empty the overflowing store of radioactive water into the Pacific Ocean starting in 2023, because of lack of onsite space, instead of building more tanks outside the plant perimeter, angering both local fishermen and neighboring countries. TEPCO claimed radioactive isotopes such as Sr-90 and Cs-137 were removed from all "treated" water, despite a 2018 analysis showing a 70% contamination level that cast doubt on the company's honesty.154 The treatment process also leaves behind a radioactive slurry particularly high in strontium that quickly degrades its containment liners, which must be regularly replaced. As of 2022, TEPCO had yet to announce any "acceptable plans for dealing with the necessary transfer of slurry from weakening, almost deteriorated containers, into fresh, new containers."155 Highly carcinogenic tritium also remains, worrying fishermen and farmers alike, who continue to suffer from loss of catch and produce. With a half-life of 12.5 years, radioactive tritium will last another century and "is a carcinogen (causes cancer), a mutagen (causes genetic mutation), and a teratogen (causes malformation of an embryo)."156 Carbon-14 and cobalt-60 also pose risks.

Allowing the water to evaporate was also proposed, but dilution via dumping was deemed the cheapest option. One wonders why TEPCO doesn’t just add more reinforced tanks, either onsite or nearby, instead of risking more damage by dumping radioactive water into the ocean or at least wait until levels decrease. What’s the hurry to undo what essentially can’t be undone? After much criticism from Pacific Rim countries, Japan finally announced in early 2023 that it would delay dumping radioactive water until more support was garnered. Later in the year, the IAEA released a report minimizing the dangers over neighboring countries’ objections and dumping finally began.

To this day, TEPCO still doesn’t know the condition of the fuel in the three damaged reactor buildings or exactly where the fuel ended up after melting into the foundations, far too hot to attempt to find even with volunteers wearing radiation suits. In 2017, a robot equipped with a camera and radiation gauges – called Scorpion because of its folding tail of instruments – was unable to venture very far into Unit 2, considered the least radioactive of the destroyed reactors. Scorpion registered 250 Sv/h before dying, likely because of a fried circuit board from the high levels of radiation, while an earlier probe fitted with a remote-controlled camera recorded levels up to 650 Sv/h at about the same spot, “enough to kill a human within a minute.”157
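To put those dose rates in perspective, a back-of-the-envelope check (a sketch assuming that an acute whole-body dose of roughly 5 Sv is usually fatal without treatment, a standard rule of thumb not stated in the text) bears out the "kill within a minute" claim:

```python
# Rough illustration only: time to accumulate a typically fatal acute dose
# at the dose rate recorded inside Unit 2.
fatal_dose_sv = 5.0          # assumed approximate lethal whole-body dose
dose_rate_sv_per_h = 650.0   # level recorded by the earlier camera probe

time_to_fatal_s = fatal_dose_sv / dose_rate_sv_per_h * 3600
print(f"~{time_to_fatal_s:.0f} seconds")   # roughly half a minute
```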

In early 2022, a ROV-A robot was finally able to send back pictures from inside Unit 1, showing what looked like melted nuclear fuel at the bottom of the reactor submerged in cooling water. Roughly 900 tons of melted fuel remains inside Units 1, 2, and 3, and is expected to take at least 30 years to remove, possibly much more.158 Containing radioactive damage from one reactor in Chernobyl was difficult enough; three is almost unimaginable.

The full biological impact remains unknown and is much debated. While thyroid cancers begin within 3 years, an increase in solid cancers is expected after a decade as the radiation accumulates over time in the body’s organs. Blood cancers have already started being detected in children and in the clean-up crews, while 4 years afterwards “off the coast of Oregon and California every Bluefin tuna caught in the last year has tested positive for radioactive Cesium 137.”159 As much as 80% of the overall radiation may have been deposited (and is still being deposited) into the Pacific Ocean, which will severely affect marine life and the food chain for years to come as the radiation concentrates in algae, crustaceans, and on up to smaller and then larger fish.

If Chernobyl is a guide, the numbers will rise and be in dispute by those in favor of nuclear power versus those concerned about ongoing health problems. Alas, life is anything but normal for those still barred from their homes, while the sale and consumption of foods containing more than 100 Bq/kg of cesium-134 and cesium-137 is still restricted throughout Japan. To be sure, despite the high levels of radiation detected outside the Daiichi plant in numerous locations, the number of deaths is difficult to count. But you know levels are serious when highways around Fukushima are posted with LED signs showing radiation readings in µSv/h.

Berkeley physics professor Richard Muller noted that the number of initial “radiation deaths” (<100) was small compared to the “tsunami deaths” (>15,000) and is keen to point out the human consequences of forced evacuation,160 although Ian Fairlie, the scientific secretary to the former UK Committee Examining Radiation Risks of Internal Emitters (CERRIE), believes the numbers will continue to rise. Using a 10% per sievert fatal risk of cancer, he estimates that 5,000 people in Japan will die from future cancers (72% in Fukushima Prefecture, the most contaminated part of Japan, and 28% in the rest of the country), in keeping with reports by both UNSCEAR (the UN Scientific Committee on the Effects of Atomic Radiation) and the IPPNW (International Physicians for the Prevention of Nuclear War).161
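The arithmetic behind such projections is simple, if contested: a fatal-cancer risk coefficient multiplied by the population's estimated collective dose. Working backwards from the figures quoted above (a sketch of the method, not Fairlie's actual calculation), his estimate implies a collective dose on the order of 50,000 person-sieverts:

```python
# Linear, no-threshold style back-calculation (illustrative only).
risk_per_sievert = 0.10        # 10% fatal cancer risk per sievert, as quoted
projected_deaths = 5000        # Fairlie's estimate for Japan

implied_collective_dose = projected_deaths / risk_per_sievert
print(f"~{implied_collective_dose:,.0f} person-sieverts")   # ~50,000 person-Sv
```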

Pieter Franken, co-founder of the Tokyo-based NGO Safecast, mistrusts official numbers, literally taking a Geiger counter for a ride after the meltdown to measure street-by-street levels unrecorded by government sources: “On my first drive, the readings I was getting were significantly higher than those being reported on TV.”162 The numbers showed higher levels in some towns further from the plant, indicating that atmospheric carry is more complicated than simply devising concentric radiation zones outwards from the source. Franken also spearheaded a citizen-scientist movement that uses homemade Geiger counters to measure radioactivity for oneself, the street-level analysis deterring some locals from returning home, thinking that radiation levels were still too high.

One dairy farmer from the village of Iitate, about 30 km northwest of the Daiichi plant, has returned every month to take his own readings since he and almost 6,000 other residents were forced to evacuate, finding the levels "consistently higher than those from the government monitoring posts, and are not falling anywhere near quick enough, despite the decontamination efforts." The mayor of Iitate, also a dairy farmer, believed the danger was exaggerated, but admitted they would be unable to grow food for "years to come," and hoped returning farmers would "switch to flowers and other crops not for human consumption."163

Although the government cleared away topsoil for long-term storage, collecting 3 million bags in a few months and 22 million cubic meters over a decade, only one-fifth of those displaced from the picturesque village of Iitate care to return. Consumer confidence in food harvested within Fukushima Prefecture also remains low, making a return to farm life extremely difficult for displaced farmers. A decade after the accident, 15 of 54 countries that had banned Japanese food imports maintained those restrictions.

Decommissioning the Daiichi reactors will also take decades, the main challenges being the removal of melted fuel and debris from inside each of the three damaged reactors as well as the spent fuel pools. By 2015, the still hot fuel from Unit 4 had been successfully emptied, removed to the nearby Daini nuclear plant, south of Daiichi, which suffered much less damage in the “3/11” tsunami. Retrieving all the melted fuel from Units 1, 2, and 3 was expected to last until 2024, but original industry estimates were hugely optimistic. A revised time frame for removal is now set to take at least until 2050.

A Greenpeace nuclear specialist in Japan described the challenges as "unprecedented and almost beyond comprehension," noting that TEPCO's decommissioning schedule was "never realistic or credible."164 Dr. Helen Caldicott, the Australian pediatrician featured in the 1982 Academy Award-winning documentary If You Love This Planet about the dangers of nuclear war, noted that because of the accumulation of long-lived radioactive isotopes in the food chain and the body, a nuclear accident never ends, stating that Fukushima was "by orders of magnitude many times worse than Chernobyl."165

The damage was not restricted to just Japanese citizens. In 2017, a United States appeals court ruled that 318 American sailors could sue the Japanese government and TEPCO for illnesses caused by radioactive exposure. Mostly stationed aboard the USS Ronald Reagan aircraft carrier, dispatched to Japan's northeast coast to help with the relief efforts at Daiichi, the sailors complained of numerous ailments, "ranging from leukaemia to ulcers, brain cancer, brain tumours, testicular cancer, thyroid illnesses and stomach complaints,"166 while some of their children were born with birth defects. Former North Carolina senator John Edwards represented the sailors against TEPCO and GE, the supplier of the four boiling-water reactors that recommended the low-lying coastal location rather than a safer, more inland site on higher ground. Although the case was rejected in 2020, expect more litigation to continue.

Rebecca Johnson, former senior advisor to the Weapons of Mass Destruction Commission chaired by International Atomic Energy Agency head Hans Blix, stated that Fukushima “was both avoidable and inevitable,” noting that “the natural disaster of an unusually large tsunami was turned into a nuclear catastrophe by the systemic failings of the nuclear plant’s operators, the Tokyo Electric Power Company (TEPCO), inadequate technical and emergency preparations and back-ups, bureaucratic complacency and weak regulations.”167 A Japanese parliamentary report into the failings at Fukushima found that a culture of complacency led to the “man-made disaster,” citing an overly familiar relationship between Japan’s nuclear industry and the government regulatory bodies.168

In the aftermath of the accident, some questioned the wisdom of TEPCO running the clean-up operation, preferring an international body given the obvious self-interest, while others claimed information was withheld because of preparations for the 2020 Tokyo Summer Olympics, egged on by the then prime minister Shinzō Abe, and that resource-poor Japan wanted to restart its nuclear plants as soon as possible. Openness and transparency don't come easily in the nuclear world. Certainly, a better firewall is needed between industry and government.

In 2012, the Carnegie Endowment for International Peace published a report entitled "Why Fukushima was Preventable," condemning the lack of independence of the Japanese nuclear regulator NISA from both the nuclear industry and the governing body that "deterred NISA from asserting its authority to make rules, order safety improvements, and enforce its decisions." In Japan, the practice of amakudari ("descent from heaven") places retiring regulators and bureaucrats in senior utility positions, while amaagari ("ascent to heaven") employs industry experts in government agencies, a cozy arrangement that promotes a lax safety culture free from open criticism and curtails lateral thinking, a.k.a. "the revolving door" in the West.

The Carnegie report added that if TEPCO and NISA had followed international best practice with regard to safety "they might have realized that the tsunami threat to Fukushima Daiichi had been underestimated."169 Placing back-up generators below a reactor and storing spent fuel above a reactor were both major design flaws, making it difficult to supply water in the event of a LOCA. Furthermore, safety was undervalued in Japan, unlike in Europe, where safety is "enhanced" by ordering necessary modifications as needed. In the USA, safety is "maintained" through a practice of "backfitting," where costs and benefits are weighed against safety margins, a system that nonetheless "discourages safety upgrades requiring expensive engineering changes."170 As hard as it is to anticipate a disaster, with nuclear power one must always stay ahead of the curve. Alas, one can be certain the insufficiently conservative Daiichi "design-basis" was improperly vetted given the frequency of tsunamis in the region.

Some attempts at restitution have at least begun over a decade since the tragic events of March 11, 2011. In 2022, four TEPCO executives, including the then president, were ordered by a Japanese court to pay 13 trillion yen in damages to the company in a suit brought by 48 TEPCO shareholders, the first case to hold executives personally liable for damages from the accident. Not surprisingly, the court found that TEPCO's countermeasures against a tsunami "fundamentally lacked safety awareness and a sense of responsibility," likely because of damning evidence that TEPCO had predicted the plant was susceptible to tsunami waves up to 15.7 m after a major offshore earthquake.171 It is unlikely, however, that the former executives will be able to pay the full $94 billion in damages awarded. They were also acquitted in 2023 in a separate trial of professional negligence resulting in death.

Richard Broinowski, author of Fallout from Fukushima, is concerned that the “industry has been allowed to develop as far as it has without as many checks and balances as it should have,”172 citing administrative secrecy from the top, as in a mafia or nuclear priesthood. In the USA, the independence of the Nuclear Regulatory Commission has been routinely called into question, while in Canada the Canadian Nuclear Safety Commission is primarily funded by the nuclear industry, which can easily create conflicts of interest with regard to licensing and plant renewals. Foxes guarding the henhouse, lunatics running the asylum, voluntary self-reporting in the world’s most dangerous industry – what could go wrong?

***

After a nuclear accident there is always a backlash and rethinking of energy policy. Since Three Mile Island in 1979, no new nuclear power plants have been built in the USA. What’s more, 100 orders were cancelled after the TMI partial core meltdown, while a completely finished $6 billion Long Island plant was mothballed and sold for $1.173 After the Fukushima disaster, plans for a hoped-for US nuclear renaissance were dashed, although two new plants are under construction in Georgia despite numerous delays and massive cost overruns (as we’ll see next).

In the immediate aftermath of Fukushima, Japan turned off almost all of its 54 reactors, representing 30% of Japanese grid power, relying instead on natural gas to make up the shortfall.174 Some reactors have since restarted, although ambitious expansion plans have so far been shelved. The Japanese have lived with the unseen hand of nuclear fallout far longer than anyone and must now worry even more.

Germany began exiting the nuclear business immediately after Fukushima, unwilling to risk a catastrophic accident on German soil or, as some pundits claimed, to bolster the CDU party of Chancellor Angela Merkel, which needed Green votes to rescue her ailing coalition. Within days of the three-core meltdown, Merkel – a physical chemist by training and former minister for nuclear safety – ordered all 17 German nuclear plants to be closed, eight reactors immediately and nine more in phases by 2022.

The move increased Germany’s reliance on coal in the short term – easily imported from neighboring Poland – and Russian natural gas, while accelerating their pledge to ramp up an already ambitious renewables sector. Siemens, the manufacturer of Germany’s 17 reactors, announced it would exit the nuclear business altogether. After the decision, the Swedish power provider Vattenfall sued the German government for €4.6 billion for “expropriation,” eventually losing the dispute not because of Fukushima angst, but from unresolved plant problems, ultimately receiving €340 million in compensation. Having also pledged to end coal by 2038, Germany is the first modern industrial nation to quit both nuclear and coal to stake its energy future on renewables.

In late 2021, Belgium stated it too would exit the nuclear biz, announcing the shutdown of all seven of its reactors by 2025, a decision postponed almost immediately for 10 years after Russia invaded Ukraine in February 2022. In the wake of the ongoing war, however, Germany temporarily restarted a shuttered coal plant, underscoring the danger of relying on imported energy and the need for more home-grown renewables. The nuclear exit, a.k.a. Atomaussteig, nonetheless remained intact, the coalition-party Greens refusing to change its long-standing, anti-nuclear policy. Although two of the last three reactors slated for closure at the end of 2022 (Isar 2 and Neckarwestheim 2 in the south) were kept in reserve to be connected if needed over the winter, all three (including Emsland in the north) closed on April 15, 2023, ending Germany’s 60-year nuclear industry, the loss of power covered by other sources.

Some advocates also claim that closing reactors after an accident can cause more harm than the initial damage, for example, by increasing coal emissions, but in the absence of ironclad safety assurances governments must first act to guard against more accidents. Many of the world’s nuclear reactors are built near major cities. The oldest French nuclear plant is built near a seismic fault line on the west bank of the Rhine on the French-German border and was ordered shut after Fukushima, but was still operating a decade later. One wonders how long we can keep playing nuclear roulette.

Despite promising to cut back its nuclear-generated electrical power by about a third, from around 75% to 50% by 2025, the French have instead decided to explore more nuclear power via a proposed shift to small modular reactors. As the world leader in home-grown nuclear power generation, France continues to stake its energy future on nuclear technology in a quest for continued energy independence. In the wake of the OPEC-fueled oil crisis in the 1970s, France became the world’s leading nuclear nation by percentage, building most of its 56 reactors in the following decade under the 1974 Messmer plan, determined to hedge against rising oil prices rather than worry about a potential accident. Today, France still routinely exports nuclear-generated electricity to its neighbors, although plants were forced offline in the summer of 2022 as inlet water temperatures rose to unacceptably high levels during the increasingly hotter weather.

Armenia’s Soviet-era Metsamor reactor may be the world’s most dangerous, a 440-MW VVER with no containment building. Located on the Armenian–Turkish border, the town of Metsamor was created to service the 1976-commissioned VVER that together with a second 1980 unit supply 40% of the country’s electrical power. Built in an active seismic area, the reactors were temporarily shut down after the 1988 Armenian earthquake that killed more than 25,000 people. Apparently, some inhabitants would rather live with the consequences of a potential accident than be without power, having suffered miserably without electricity through the “bone-chilling cold and dark days when the plant was closed down for several years.”175

Calling the plant "a danger to the entire region," the European Union funded a number of fixes, while six similar plants in Bulgaria and Slovakia were closed as a condition of joining the EU. But despite the worry and an offer of €200 million to close the plant, the Armenian government has decided to squeeze out a few more years from Metsamor, the only first-generation VVER 440 still in operation outside Russia, until a newer VVER 1000 is built as a replacement. One hopes the next version has sufficient earthquake defenses and a containment building.

We don’t have to travel to the dilapidated former Soviet Union, however, to uncover potential danger. In early 2016, the aging 2-GW, two-unit, LWR Indian Point Energy Center, located 35 miles from downtown Manhattan on the east bank of the Hudson River, was temporarily shut down after tritium-laced water entered the river and was detected in the groundwater at 650 times normal levels. Later in the year, hundreds of faulty bolts were discovered while a weld leak prompted another unexpected shutdown, forcing then Governor Andrew Cuomo to announce “the aging and wearing away of important components at the facility are having a direct and unacceptable impact on safety, and is further proof that the plant is not a reliable generation resource.”

A meltdown close to a large city would be unfathomable, perhaps an 8 on a scale where the highest INES rating is 7. And yet, operations at Indian Point continued despite the problems and expiration of the original 40-year licenses for both units. Since 2000, the four nearest counties to Indian Point showed a 60% increase in thyroid cancer compared to the US average and underactive thyroid glands twice the national rate for babies.176 Finally, after numerous leaks and spills, recertification for 20 more years was denied and the 58-year-old plant closed in 2021.

More worrisome may be the dry-cask, spent-fuel storage facility at the San Onofre Nuclear Generating Station, 50 miles north of San Diego, which sits about 100 feet from the Pacific Ocean. Closed in 2013 and in the process of decommissioning, the plant now relies on just 73 twenty-foot-long, half-inch-thick, stainless-steel canisters to keep the waste intact. Each canister contains an amount of cesium-137 equal to the entire Chernobyl release, and a breach would be beyond disastrous, whether caused by an earthquake (the site is next to a fault line), high-tide flooding, or something unexpected. As one nuclear energy consultant noted, "The most toxic substance on Earth is separated from exposure to society by one-half inch of steel encased in a canister."177 One must have complete trust and unrelenting faith in the nuclear business.

While Hiroshima, Nagasaki, and the 1957 plutonium bomb-making reactor releases in Mayak and Windscale/Sellafield belong to a different era than the civilian nuclear power plants of Chernobyl and Fukushima, accidents still happen with potentially catastrophic consequences. The permanent no-go areas surrounding these devastated sites, which have upended the lives of so many forced to flee in panic, should serve as more than ghostly reminders of past errors, but as a recognition of the damage awaiting anyone unlucky enough to live near the next accident.

Nuclear waste facilities and power stations are even more vulnerable in times of strife as seen during the war in Ukraine after a fire broke out in 2022 from Russian shelling at the Zaporizhzhia plant. A sudden power failure would be devastating, leading to a loss of coolant to the reactor core and emergency shutdown systems, while a direct hit to any of the six reactors would be beyond catastrophic. At least after a mining accident or oil spill, most of us can return home. After a nuclear accident, there is no home to return to.

3.9 The State of the Art: Is There a Nuclear Future?

So what is the state of today’s nuclear industry? In the West, not so great, while in China and India more nuclear reactors than ever are being built. Hoping to kick-start a dormant American industry after a 25-year hiatus, the US Congress even extended the 1957 Nuclear Industries Indemnity Act in 2005, limiting company liability in the event of a “nuclear incident.” The reworked deal provided better incentives for new builds, $18 billion in loan guarantees, $2 billion indemnification against overruns, and $1 billion in tax breaks, while the Nuclear Regulatory Commission relaxed its lengthy licensing procedures to help with the construction hurdle. Later, the Obama administration added more regulatory relief, guaranteed federal loans, and increased tax incentives for new builds. But even with all the generous government backing, nuclear is not easy to get up and running.

If they ever go online, two plants under construction near Augusta, Georgia, will be the first nuclear reactors built from scratch to start up in the USA since the 1979 partial meltdown at Three Mile Island. Georgia Power is the majority owner of two 1.1-GW AP1000 (advanced passive) reactors being built by Bechtel and based on a Westinghouse PWR design that includes so-called Gen-III+ passive safety features and modular construction. Licensed in 2012 after years of wrangling, the project is expected to run to a final cost of over $27 billion, or double the initial estimate.178 In 2017, two similar AP1000 plants being built near Columbia, South Carolina, were axed after almost a decade of construction with $9 billion of a $25 billion estimated price tag already spent.

As a result of the mounting losses from numerous delays and high cost overruns in Georgia and South Carolina, which included a number of safety overhauls for the newer Gen-III designs, Westinghouse Electric Company filed for Chapter 11 bankruptcy in 2017, citing $10 billion in debts. The demise of one of the original flagship nuclear power generating companies – spun off from Westinghouse's Pittsburgh-based electric powerhouse in 1998, with a majority stake bought by Toshiba in 2006 – underscores the problem of large-scale construction in today's nuclear industry, particularly in an era of increased public scrutiny and safety concerns. As for the other great American electric pioneer, GE's nuclear division merged with Hitachi in 2007, further highlighting the difficulty of adding power to the domestic US grid via nuclear energy. Compounding the ignominy, GE was removed from the Dow Jones Industrial Average in 2018, the last remaining nineteenth-century original member of the index to be delisted.

Although the original players are no longer calling the shots in a modern, Asia-centric, nuclear world, the Gen-III Toshiba/Westinghouse advanced power reactor (APR) and the Hitachi/GE advanced boiling water reactor (ABWR) still reflect the initial pressurized- and boiling-water designs made by the two pioneering rivals of the 1950s. GE is also trumpeting a streamlined BWR model – boldly called a “steam dryer” or “economic simplified boiling-water reactor” (ESBWR) – although the project has been beset by design issues, while Hitachi/GE was fined for lying to the regulator.

As problems mounted for the latest advanced Gen-III designs, the two 1.1-GW AP1000 reactors in Georgia slipped further behind schedule. Named Vogtle 3 and 4 after a former power-company president – and adding to two earlier 1.2-GW Westinghouse PWR reactors on the same site near Augusta – the new units saw their start dates pushed back by at least 4 years to 2021 (Vogtle 3) and 2022 (Vogtle 4), doubling the proposed construction time. After more problems surfaced over poor building practices, the start times were again delayed to 2022 and then 2023. One must have all one's ducks in a row to add nuclear power to the grid these days – iron-clad financing, advanced safety systems, public backing, reliable construction, and a competitive price. Alas, natural gas and renewables have blown up the business plan despite nuclear's rosy "carbon-free" and "cheap" electricity mantra.

In a bizarre financing twist to help cover the costs of the not-yet-built reactors, Georgians were even required to pay for their "premade" electricity, on the hook for almost a 10% increase in utility prices, despite not having received one iota of power. As noted in The Ecologist, "The utilities have been paying for individual elements of the two new plants as they are completed. The long delays mean that the interest costs are higher than expected and the regulator has already granted rate increases to compensate the eventual owners."179 Even more bizarre, customers in South Carolina were tapped for rate increases from their two cancelled reactors at the Virgil C. Summer power plant near Columbia. Named for another former power-company chairman, the abandoned plant leaves the public permanently beholden to the largest pair of nuclear white elephants.

Not only have the time and cost to build a reactor become prohibitive – taking more than a decade from breaking ground to first grid-tied watt – nuclear power is more expensive at every turn as operators look for increased government bailouts to "compete" with cheaper natural gas and renewables, costing taxpayers billions more. In 2016, Exelon was given $2.3 billion to keep afloat two aging reactors in Illinois and asked for another $7.6 billion a year later to keep three more loss-making behemoths running in New York.180 In the absence of a state bailout in Ohio, however, a FirstEnergy reactor near Toledo will close before its license expires, likely bankrupting its owner because of mounting losses.

Saddled with poor decision-making, Pacific Gas and Electric (PG&E) has also had its own financial problems related to ongoing wildfires in California, declaring bankruptcy in 2019 before restructuring in 2020. Although not initially bailed out, PG&E was permitted to keep its Diablo Canyon reactors running until 2025, the last two operating nuclear reactors in California, both 1.1-GW PWRs that sit precariously within 500 m of the San Andreas Fault (the largest in North America, where the North American plate meets the Pacific plate). Although scheduled for shutdown in 2025 after years of private–public debate over ridding the state of nuclear power, PG&E was given the option to keep the two 40-year-old reactors open until 2029 and 2030, ostensibly to counter a sharp increase in energy demand. The de facto bailout will be paid for with a generous $1.4 billion California state loan covered by federal funding through the Civil Nuclear Credit Program of the Biden administration's 2022 Bipartisan Infrastructure Law, a classic "robbing Peter to pay Paul" accounting strategy.

With renewables championed as safer, cleaner, and cheaper than nuclear power, getting a new project off the ground is almost impossible, typically a deal breaker at upwards of $10 billion per gigawatt. Ironically, because of Donald Trump's position on climate change, V. C. Summer in South Carolina and Vogtle in Georgia were unable to claim benefits under Barack Obama's 2015 Clean Power Plan to help soften the bill. When it comes to energy, changing policies over an extended construction period makes building new plants even harder as the loan interest for long builds can exceed the principal. Nonetheless, the USA is still the world's premier producer of nuclear power (95 GWe in 92 reactors), followed by France (61 GWe, 56 plants), China (52 GWe, 54 plants), Japan, Russia, South Korea, Canada, Ukraine, Spain, and Sweden.181 Currently, 32 countries generate nuclear power for a total of almost 400 GW from 437 plants.

In part to help offset an expected 60% reduction in energy generation by 2030 from decommissioning old nuclear plants and a continued phase-out of coal (the very long goodbye), Hinkley Point C (HPC) – now under construction on the south side of the Bristol Channel in Somerset, southwest England – will be the first nuclear reactor built in the UK since Sizewell B, a 1.25-GW PWR constructed in 1995 on the Suffolk coast in southeast England. After passing the final EU regulatory hurdle over a proposed large government subsidy, Hinkley’s two European pressurized reactors (EPR) will provide 3.2 GW of power or about 7% of UK needs at an initial estimated cost of £24 billion. To help get Hinkley going, the UK government backtracked on a promise of no new nuclear subsidies, guaranteeing a wholesale price of £92.50 per MWh for 35 years indexed to inflation, more than double the going rate, as well as £17 billion in financing. At the same time, the strike price for onshore wind/solar was £80/MWh and offshore wind £56/MWh.

If the past is any measure, cost overruns will be enormous at final grid tie-in, which will eat into governmental budgets and curtail green spending. By 2021, Hinkley was already £3 billion over budget, while a year later The Economist reported that HPC was 2 years behind schedule and £10 billion over budget.182 The director of an independent energy price comparison website thinks the project could result in British ratepayers collectively paying “an additional £5.2bn per year on top of their current electricity costs.”183 Nonetheless, nuclear advocates keep pushing for a dubious “nuclear renaissance.” Ironically, when Hinkley is completed, solar and wind will be far cheaper.

After four decades of wrangling, Hinkley C is being built by the state-owned French utility Électricité de France (EDF), keen to maintain its precarious dominance in Europe. The project was initially part-financed by the state-run China General Nuclear Power Group (CGN), eager to branch out from its home-grown nuclear backyard after years of domestic construction, yet many are doubtful about a technology that may be obsolete by the time of completion, while some are not sure the design will even work. The so-called "next-generation" EPR is unproven, beset by technical problems and cost overruns, and has yet to produce a single watt other than from two Chinese reactors at Taishan, west of Hong Kong, the first of which suffered a fuel-rod radiation leak in June 2021.

The details were sketchy, but a rise in noble gas concentration in the primary circuit was detected because of problems with the fuel-rod casings, prompting a hurried memo to the US Department of Energy that warned of an “imminent radiological threat.”184 For safety purposes, CGN shut down the reactor to investigate and replace the damaged fuel. What’s more, China was removed from the Hinkley management group in 2021 by the UK government, who were worried about rising Chinese economic clout and possible national security issues. The buyout assumed CGN’s 20% stake and put an end to further Chinese involvement in two more proposed nuclear plants at Sizewell and Bradwell east of London.

The Guardian has flat out predicted Hinkley will be the "world's most expensive power plant"185 when finished, although others are gunning for the dubious distinction. Construction began in 2007 on the 1.6-GW Flamanville 3 EPR, overlooking the English Channel in Normandy on the northwest coast of France, a project variously delayed by bad welds, low steel strength, cooling-system valve failures, slow component delivery, and poor quality control, the €3.3 billion price tag ballooning to over €10 billion. After more problems appeared in 2018 in almost half of the inspected welds, more delays were announced, again pushing up costs, while major metallurgical flaws were found in the massive stainless steel reactor vessel and dome, preventing the scheduled completion.186 The reactor was slated to be grid-connected in 2019, delayed until 2020 and then 2021. As of 2023, Flamanville 3 was still not operational, with an estimated final bill approaching €20 billion.

France was hoping the EPR design would provide "a safer, more powerful and long-lasting nuclear reactor that would replace its ageing fleet and boost French nuclear exports," but has instead "become a byword for its failings and an embarrassment for the government, which owns 84% of EDF."187 The fault may lie in the design as another EPR – the 1.6-GW Olkiluoto 3 (OLK3) in the municipality of Eurajoki in western Finland – began construction in 2005 and was expected to be up and running in 5 years, but was finally grid-tied in 2023, 18 years after first breaking ground. Problems at Olkiluoto included watery concrete and faults in the instrumentation and control systems. Both plants are well over budget with a planning-to-operation time of over two decades.

China, however, continues to ramp up its nuclear presence, away from the West's strict regulatory control, capital-raising issues, and endless construction delays. Beginning with two reactors in 1994, 54 nuclear plants are now online, pushing 52 GW of power onto the Chinese grid.188 Providing much cheaper electricity than in the West, building costs are about $2,000/kW, or one-fifth that of the USA, hopefully without compromised safety.189 While Toshiba/Westinghouse, Hitachi/GE, and EDF have seen their business models shredded in the West, China leads the nuclear charge into the twenty-first century, expecting to add another 77 GW from 69 more reactors either under construction or in planning, little impeded by cost or public opposition. Another 156 units have been proposed at almost 200 GW.190

India is also enjoying a surge in new builds to modernize its grid and provide reliable baseload power in a developing economy. Having built its first CANDU reactor at Rajasthan in 1973 – one of nine eventual plants that some claim were used to make a nuclear bomb – India has added eight reactors in the last two decades with another eight planned by 2026, hoping to roughly double its nuclear capacity to 12 GW from the 2.6% share of its electricity that nuclear provides today.

Over a third nuclear, South Korea has been aggressively selling its state-run KEPCO expertise, including a possible new build at Bradwell, east of London. KEPCO is also involved in a joint venture with Emirates Nuclear Energy Corporation to build the first reactors on the Arabian Peninsula as part of a $30 billion, 5.6-GW project at Barakah, 300 km west of Abu Dhabi, where the site is "dominated by a series of enormous domes like an industrialised version of a mosque."192 The four APR1400 reactors are expected to provide 25% of electrical power to the UAE in a region once awash in oil. As proudly noted in the UAE's English-language newspaper The National, one uranium pellet is about half the size of a dirham coin, the local UAE currency, but is equal to 471 liters of oil, 481 m³ of natural gas, or 1 tonne of coal. Not mentioned is the over 100 billion dirham coins needed to build the Barakah reactors.

Richard Muller is a cheerleader for both natural gas and nuclear, arguing that gas is better than coal because of lower carbon emissions (about half) and that nuclear is best of all despite sketchy decommissioning plans, long-term waste problems, and the possibility of a catastrophic accident. He does cite higher construction costs, however – twice that of a coal plant and four times a natural gas plant – but champions the abundance of minable uranium and reduced nuclear fuel requirements (1 kg of U-235 = 12,000 barrels of oil).193
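Such equivalences are easy to sanity-check from first principles. The sketch below (my own rough calculation, not Muller's, using the standard figure of about 200 MeV released per fission and ignoring enrichment levels and conversion losses) lands in the same ballpark as the quoted 12,000 barrels:

```python
# Order-of-magnitude check of the "1 kg of U-235 = 12,000 barrels of oil" claim.
# Rough sketch: assumes complete fission and compares heat to heat.

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
ENERGY_PER_FISSION_MEV = 200      # typical energy released per U-235 fission
BARREL_OF_OIL_J = 6.1e9           # ~6.1 GJ of heat per barrel of oil

atoms_per_kg = 1000 / 235 * AVOGADRO
energy_per_kg_j = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_J

print(f"{energy_per_kg_j:.1e} J per kg of U-235")            # ~8.2e13 J
print(f"~{energy_per_kg_j / BARREL_OF_OIL_J:,.0f} barrels")   # ~13,000 barrels
```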

M. King Hubbert was excited about the potential for unlimited nuclear power, primarily because of reduced fuel needs, calculating that 358,000 metric tons of uranium was equal to all the fossil-fuel reserves in the USA, of which only 553 metric tons (0.15%) were consumed in 1956. According to the World Nuclear Association, the initial fuel load is just 3% of overall equipment costs in a reactor, while fuel operating costs are only 14% (half from refining) compared to 78% for coal and 87% for natural gas.194 Transportation is also easier and cheaper for the more concentrated uranium fuel, yet potentially much more dangerous.

***

So, what is the real cost of nuclear power, that is, when all the accounting is done? To compare nuclear power with other power-generating technologies, we must account for every expenditure, including overruns (more expensive for a long, amortized build), lifetime maintenance and fuel (40–60 years), waste management (indeterminate and endless), and a still unknown decommissioning bill.
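One way to see how these line items combine is a simple levelized-cost calculation: total lifetime spending divided by total lifetime generation. The sketch below uses entirely hypothetical figures (and ignores discounting and interest during construction, which make long builds look even worse), but it shows why capital overruns, not fuel, dominate the final per-kWh price:

```python
# Minimal levelized-cost sketch; all inputs are illustrative assumptions only.

def levelized_cost_usd_per_mwh(capital, annual_opex, decommissioning,
                               annual_waste, years, capacity_gw,
                               capacity_factor):
    """Undiscounted lifetime cost divided by lifetime generation in MWh."""
    lifetime_cost = capital + decommissioning + years * (annual_opex + annual_waste)
    lifetime_mwh = capacity_gw * 1000 * 8760 * capacity_factor * years
    return lifetime_cost / lifetime_mwh

# Hypothetical 1-GW plant: a $10bn build that overruns to $15bn, $300m/yr for
# operations and fuel, $1bn set aside for decommissioning, $50m/yr for waste.
cost = levelized_cost_usd_per_mwh(
    capital=15e9, annual_opex=300e6, decommissioning=1e9,
    annual_waste=50e6, years=40, capacity_gw=1.0, capacity_factor=0.9)

print(f"~${cost:.0f} per MWh")   # roughly $95/MWh with these assumptions
```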

Germany has budgeted €38 billion to decommission its 17 nuclear reactors, phased out after Angela Merkel's Fukushima moratorium, while France's state-run EDF will likely go bankrupt if it has to stump up more than the €100 billion earmarked for its 58 aging reactors.195 Almost all US reactors have exceeded their original 40-year license period, and will soon need to undergo expensive shutdowns, amounting to hundreds of billions of dollars not currently on the books. To be sure, there are enormous hidden costs to nuclear power.

Shockingly, President Eisenhower knew nuclear power was more expensive than conventional power during his solemn 1953 "Atoms for Peace" speech to the UN, which proposed to repurpose fissionable uranium supplies from atomic weapons to nuclear power. An internal classified report stated, "Nuclear power plants may cost twice as much to operate and as much as 50 percent more to build and equip than conventional thermal plants."196 The first dozen US reactors collectively lost almost $1 billion, Westinghouse and GE hoping such "loss leaders" would eventually become cost-effective.197

Nonetheless, the nuclear industry pressed on, subsidized to the hilt, which continues today despite cheaper alternatives. Considered “the most successful nuclear scale up experience in an industrialized country,” even the vaunted French reactor program incurred “a substantial escalation of real-term construction costs,” resulting in an increase rather than decrease in costs over time, that is, “negative learning.”198 Not only has nuclear power not been too cheap to meter, but the customers are stuck with an ever-increasing l’addition. Even without the extensive peripheral costs, such as waste storage, clean ups, decommissioning, and subsidies, nuclear has never been as cheap as advertised.

Channeling the upside-down absurdity of Alice's Wonderland in Through the Looking Glass, a former head of the UK Atomic Energy Authority noted that nuclear power "has been, and continues to be, a case of 'jam tomorrow, but never today.'"199 Shockingly, half of all US nuclear reactors are losing money,200 while a 2019 study – based on 674 nuclear power plants built worldwide since 1951 – calculated that a 1-GW plant loses almost €5 billion on average. Furthermore, not one was built with "private capital under competitive conditions"; all were heavily subsidized and tied to military objectives, which continue to be unprofitable.201 Even the US Federal Energy Regulatory Commission chairman from 2009 to 2013, Jon Wellinghoff, called nuclear "too expensive," adding "We may not need any, ever."202

We must also include the cost of subsidies not on offer to other technologies. According to a 2011 report on US government subsidies, nuclear power received $3.5 billion annually since 1947 ($175 billion over the first 50 years) compared to $4.86 billion per year for oil and gas (almost $500 billion since 1918).203 Furthermore, the percentage of subsidies in the first 15 years when financial support is most needed – measured as a percentage of the federal budget – is through the roof for nuclear power, with taxpayers also on the hook for future long-term waste disposal, essentially another subsidy whose total costs are still unknown. Conversely, renewable-energy technologies received only $0.37 billion per year since 1994, under $6 billion in total, hardly a level playing field, although biofuels received $1.08 billion annually since 1980.

Subsidies are essential to develop new technology, but one must question why the nuclear industry along with the oil and gas industry still receive billions of dollars in annual government handouts, many times more than renewables. Permanent funding only helps to extend old-world thinking, restrict innovation, and prop up an inefficiently mispriced economy. Alarmingly, as we fall further behind in installing renewable energy, bailouts continue for loss-making nuclear plants, while fossil-fuel subsidies continue to rise unabated. As already noted, annual government subsidies for fossil fuels across the globe doubled from the previous year in 2021 to almost $700 billion.

The imbalance isn’t only in the USA. In Canada, Atomic Energy Canada Limited annually received subsidies of $170 million until 1997,204 while in the UK an 11% nuclear levy on electricity bills from 1990 to 1998 intended to cover future decommissioning and waste costs was instead used to build Sizewell B, an estimated £9.1 billion subsidy.205 Worldwide, the nuclear industry has received 60% of subsidies compared to 25% for coal, oil, and gas, and only 12% for renewables, while generous tax breaks are also available.

A full accounting must also include clean-up costs. Who pays for the $1 billion clean up at Three Mile Island, the $1.3 billion clean up in Port Hope, the $2 billion mess in and around Palomares, the $2 billion WIPP leaks in Carlsbad, New Mexico? What about the $2.4 billion compensation for American deaths and illnesses (especially from thyroid cancer) for those living downwind of the Trinity site in New Mexico and the atomic weapons test sites in Nevada?206 The $7 billion Deep River, Ontario, clean up, the devastation to Mayak and Windscale, the $27 billion Rokkasho plutonium reprocessing facility in Japan to "recycle" spent fuel still not functional after 20 years, the £80 billion Sellafield shambles that still costs more than £3 billion a year to secure? The £132 billion to decommission 14 former UK nuclear sites?

Even the proposals for future clean ups are exorbitant – the Yucca Mountain waste storage site in Nevada has already cost over $15 billion without having stored a single ounce of nuclear waste and may never do so. The test sites in the Pacific, Kazakhstan, and Nevada are riddled with radioactivity and may be the largest nuclear waste sites in the world. Who pays to make them safe for displaced citizens to return, if ever?

The UN pegged the Chernobyl damage at more than $200 billion, not including the $1.8 billion giant steel arch installed 30 years later, while in 2021 new evidence suggested the broken core may be fissioning again, requiring even more expensive containment work. The total cost to clean up Fukushima may ultimately exceed half a trillion dollars. Are these costs included in the nuclear power bill? What about decommissioning almost half of all current nuclear reactors over the next 25 years? Will that be included in a too-cheap-to-meter bill?

Who pays for the estimated $660 billion clean-up of liquid wastes stored in 177 underground tanks at the Hanford Manhattan Project plutonium production reactors, located roughly 2 miles from the Columbia River?207 Six of the nine reactors have already been encased in steel and cement for long-term storage, while the last two reactors – K-East and K-West, which closed in 1970 and 1971 – will similarly be “cocooned” over the next 75 years before being dismantled. Nuclear power has become a never-ending money pit, an endless drain on public finances, restricting needed investment in more profitable clean-energy projects.

Nor is nuclear as carbon free as some claim, requiring huge amounts of concrete and water as well as carbon-intensive uranium mining, resulting “in up to 25 times more carbon emissions than wind energy, when reactor construction and uranium refining and transport are considered.”208 As Stanford University engineering professor Mark Jacobson notes, “There is no such thing as a zero- or close-to-zero emission nuclear power plant. Even existing plants emit due to the continuous mining and refining of uranium needed for the plant.”209

Neither can nuclear be labeled “green,” in particular because of the “environmental impact of nuclear waste.”210 A legacy of wartime competition and ongoing post-war diplomatic failure, the nuclear lobby continues to be well funded, however, as seen when the European Commission included nuclear and natural gas in the EU “taxonomy of environmentally sustainable economic activities.” Calls of greenwashing rang out across Europe as the pro-nuclear government of France squared off against the now anti-nuclear government of Germany. Even if nuclear power were as green as claimed, building renewable energy is faster and cheaper watt for watt, exactly what is needed now as global temperatures increase. A nuclear power plant costs 10 times as much, takes more than a decade to build, and produces far higher emissions per kWh than a wind- or solar-power installation.

Energy analyst and writer Amory Lovins, who has advocated for more conservation to reduce our overreliance on energy (coining the term “negawatts”), states that so-called “low-carbon” nuclear power actually makes global warming worse, hampering climate protection precisely because of the high costs and the either–or nature of investment, where “Costly options save less carbon per dollar than cheaper options.”211 As Lovins notes, “It is essential to look at nuclear power’s climate performance compared to its or its competitors’ cost and speed. That comparison is at the core of answering the question about whether to include nuclear power in climate mitigation.”212 Time is short as carbon emissions keep rising, but trading less upstream dirt for more downstream damage is no solution, just more of the same upside-down thinking as we continue to pay through the nose for a broken dream.

Nuclear power is pretty good at staying on, however, with a capacity factor of about 90% thanks to the uprating of older reactors. Not quite plug-and-play, but fewer headaches to maintain baseload grid levels. Although a nuclear reactor can’t be used to stabilize the grid, some investors claim that small modular reactors will be able to ramp up and down to augment increased variability from intermittent sources. Still, nuclear power takes the longest to start from an off position, while costs are high to start or stop a fissioning reactor. By comparison, solar and wind output will always be intermittent, but a flatter output can be maintained with improved battery technology and interactive grid sharing in the same way electricity is drawn from numerous power stations to maintain a constant instantaneous supply (which we’ll look at in Part II). Most importantly, solar- and wind-power installations take very little time to construct compared to nuclear.

So what is the real cost compared to other energy technologies when everything is included: planning, financing, materials, equipment, building, operations, fuel, waste management, decommissioning, salvage, abandonment (never mind the overlooked cleanup costs)? The levelized cost of energy (LCOE) is a single metric in dollars per unit of power ($/MWh) that compares the overall costs of different power-generating technologies. LCOE is a life-cycle analysis that includes all present and future costs, essentially building and operating costs divided by lifetime power output.
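To make the metric concrete, here is a minimal back-of-the-envelope sketch in Python. The plant figures are purely illustrative assumptions (not Lazard’s inputs): LCOE is lifetime costs divided by lifetime generation, with future costs and output discounted to present value.

def lcoe(capex, annual_opex_fuel, capacity_mw, capacity_factor,
         lifetime_years, discount_rate):
    """Levelized cost of energy in $/MWh: discounted lifetime costs
    divided by discounted lifetime generation."""
    annual_mwh = capacity_mw * capacity_factor * 8760  # hours in a year
    costs = capex            # up-front construction cost (year 0)
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** year
        costs += annual_opex_fuel / discount
        energy += annual_mwh / discount
    return costs / energy

# Illustrative (hypothetical) inputs: a 1,000-MW nuclear plant at a 90%
# capacity factor versus a 100-MW solar farm at a 25% capacity factor.
print(lcoe(9e9, 120e6, 1000, 0.90, 40, 0.08))   # ~ $111/MWh
print(lcoe(1e8, 1.5e6, 100, 0.25, 30, 0.08))    # ~ $47/MWh

Even toy numbers reproduce the basic pattern in Figure 3.7: high up-front capital and long build times dominate nuclear’s levelized cost, while solar’s falling capital cost drives its LCOE down.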

The LCOE analysis from 2009 to 2021 by the American investment bank Lazard (Figure 3.7) shows that nuclear power has been more expensive than oil and gas for decades – an inverse learning curve – while photovoltaic (PV) solar power has been cheaper than nuclear since 2013, already having reached “parity” and now a quarter of the cost. Throw in all the extras and the comparison isn’t even close. What’s more, the levelized cost of nuclear has increased by a third since 2009 while solar has dropped 90%.

Figure 3.7 Average levelized cost of energy (LCOE) from 2009 to 2021 ($/MWh)

(source: “Lazard’s Levelized Cost of Energy Analysis 2021,” Lazard, Version 16.0, April 2023. https://www.lazard.com/media/typdgxmm/lazards-lcoeplus-april-2023.pdf).

Some think nuclear power has remained expensive because a single design has never been agreed upon, undermining the standard learning curve of a new technology that continually reduces costs with improved engineering efficiency, a.k.a. the “virtuous circle.” More likely, the increased costs of regulatory oversight, safety refits, and delayed construction have pushed nuclear-generated power beyond practical limits, while safeguarding sensitive atomic technology precludes sharing. Maybe it is time to give up on a power-generating technology that gets more expensive over time and can never deliver on its promises.

Science writer Fred Pearce wonders if nuclear is in fatal decline, citing the ongoing financial problems of Westinghouse, Toshiba, and EDF: “While gas and renewables get cheaper, the price of nuclear power only rises. This is in large part to meet safety concerns linked to past reactor disasters like Chernobyl and Fukushima and to post-9/11 security worries, and also a result of utilities factoring in the costs of decommissioning their aging reactors.”213 Cheaper alternatives, unresolved waste issues, and fear of an accident laying waste to more of the earth continues to keep nuclear power from ever becoming a viable future energy source.

The public has clearly had enough of our ongoing nuclear folly as the era of top-down energy gives way to a call for a cleaner, safer future. In a 2011 poll of 24 countries, a majority of the public opposed nuclear power: 81% in Italy, 79% in Germany, and 51% in the UK.214 Across the countries surveyed, only 38% were in favor of nuclear power (compared to 97% for solar, 93% for wind, 91% for hydro, 80% for natural gas, and 48% for coal), while only 31% supported building more reactors (44% in the USA and 43% in the UK). And yet nuclear power is still controlled and nurtured by the state, given preferential treatment without being tasked to maintain public safety beyond the lifetime of a reactor. That the state can close off a nuclear site for all time as in Mayak, Chernobyl, and Fukushima, and make taxpayers pay for the privilege of being excluded, shows the real power of nuclear energy. How many permanent exclusion zones must be created before we stop throwing good money after bad to make right past mistakes?

Occupying a ruggedly beautiful stretch of coastline near the wilderness of Big Sur, where Jack Kerouac praised the rhythmic sound of the “ocean motor,” Diablo Canyon may be the only viable solution to an aging fleet of nuclear reactors – closure. Completed in 1985, the two Westinghouse 1.1-GW PWRs provide 10% of state power, but were slated to close by 2025 at the end of their original licenses. Alas, the last nuclear power plant operating in California was granted an extension (a.k.a. “second license renewal”) to ease the latest so-called “energy crunch,” despite the potential devastation from an earthquake that could release radiation across California, endangering 40 million inhabitants. In the absence of new builds, dangerous extensions are all that is left for the nuclear industry. One day, we may finally see the end, at least in California.

The development of nuclear power was an extraordinary achievement, involving thousands of scientists and engineers, decades of research, and trillions of dollars in construction costs, in part to assuage the guilt of having destroyed two cities, but it has led to a world we are neither ready for nor able to endure. It is fitting to give the last words to Albert Einstein, who so ably championed and then cautioned the world about the raw power of the atom: “Nuclear power is a hell of a way to boil water.”

3.10 Fusion: The Power of the Future, Coming Soon at Long Last?

As a first-year university physics student in 1979, I was told that a working nuclear fusion reactor was only a matter of time and would eventually solve all our energy problems. Fifty years was given as a possible time frame. Turns out the same estimate was given to my professor decades earlier when he was an undergraduate, while today’s students are given a similar story. As the old joke goes: fusion, the energy of the future … and always will be.

A working fusion reactor may happen one day, even within the next 50 years, but there are still numerous challenges to overcome, not least how to confine a 150-million-degree nuclear reaction in a container that doesn’t melt. Forget about boiling water to run a steam turbine; with fusion we can’t even hold the fire – it is like trying to put the Sun in a box. How to keep the fusion process going for a sufficient length of time and how to dissipate the reaction heat are also major obstacles to producing abundant, low-waste power, as is getting more out than we put in – making fusion work in a box on Earth is one thing, breaking even is a whole other dream.

By contrast, a fission reaction burns at less than 1,000 degrees, easily contained as long as the heat is continuously removed by a coolant (for example, water, carbon dioxide, air, liquid sodium). Even a major fire is peanuts by comparison – burning zirconium fuel rods and molten uranium fuel eating through the reactor vessel and concrete containment structure, imagined to fall unimpeded through the Earth. Popularly known since Three Mile Island as the China Syndrome, the seemingly unstoppable overheated liquid blob would in fact be cooled by the ground about 6 feet under. Alas, at the temperatures needed for fusion – 10 times that of the Sun – no material box could ever do the job; a temperature over 5,000 degrees would melt any container. We need something else to contain the energy: a magnetic field (as in a tokamak or stellarator) or a way to remove the heat almost instantaneously (as in laser confinement).

As we saw earlier, fission breaks heavy elements apart (for example, U → Kr + Ba). In a fusion reaction, lighter elements join together to make a heavier element, the simplest being the joining of four hydrogen atoms into a helium atom (4 ¹H → ⁴He), where at 150 million degrees the hydrogen nuclei overcome their large repulsive forces to fuse. Most importantly, because the combined mass of four hydrogen atoms is greater than the mass of one helium atom, the difference is made up in energy, lots of it according to Einstein’s famous E = mc² equation (E ≈ (4mH − mHe)c²). If you do the math, a helium atom has about 1% less mass than its component parts (a.k.a. the “mass defect”).215 We get even more if we fuse deuterium (²H or D) and tritium (³H or T), the two heavy hydrogen isotopes, where D + T → ⁴He + n and E ≈ (mD + mT − mHe − mn)c². A DT fusion reaction generates more energy at a lower temperature than four-hydrogen fusion because of the lower Coulomb repulsion (neutrons have no charge).
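To see where the 17.6 MeV figure quoted below comes from, here is a minimal sketch of the DT mass-defect arithmetic in Python (the atomic masses and conversion factor are standard reference values, rounded, so treat the output as approximate):

# DT fusion: D + T -> He-4 + n
m_D, m_T = 2.014102, 3.016049      # deuterium and tritium masses (u)
m_He, m_n = 4.002602, 1.008665     # helium-4 and neutron masses (u)
u_to_MeV = 931.494                 # energy equivalent of 1 u (E = mc^2)

mass_defect = (m_D + m_T) - (m_He + m_n)   # ~0.019 u goes "missing"
energy_MeV = mass_defect * u_to_MeV        # ~17.6 MeV per reaction
print(round(energy_MeV, 1), "MeV released")
print(round(energy_MeV / 5, 1), "MeV per nucleon")  # 5 nucleons in D + T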

Throughout the universe, atoms are continuously being fused into higher-Z elements in the stars – nature’s “element factory” – forming elements as heavy as iron (Z = 26). No longer able to create elements beyond iron, a star eventually collapses under its own gravity, imploding and then exploding. Fusion is the origin of most of our material world, while elements heavier than iron are created in supernovae and neutron-star collisions. One obvious fusion source is our own local star, the Sun, which continuously burns hydrogen (or its heavy hydrogen isotope D) to make helium and energy, a.k.a. stellar nucleosynthesis. In the 1920s, the British astronomer Arthur Eddington suggested that solar mass is converted to energy by fusing hydrogen into helium, giving us our always-on sunlight and overturning the notion that the Sun is powered in any conventional sense, like a fossil fuel that would soon burn itself out.

The German-American physicist Hans Bethe would win the 1967 Nobel Prize in Physics for “contributions to the theory of nuclear reactions, especially his discoveries concerning the energy production in stars,” in particular the proton–proton (PP) reaction chain of fainter stars such as our own relatively average, G-type, main-sequence Sun and the carbon–nitrogen fusion cycle of brighter stars. Surmising that carbon must be created in stars because all life is carbon-based, British astronomer Fred Hoyle worked out the intermediate helium–carbon fusion cycle or “triple-alpha process” (⁴He + ⁴He → ⁸Be, ⁸Be + ⁴He → ¹²C).

Given the massive size of our Sun, the energy is enormous. In fact, because of solar fusion the Sun becomes 4 million tons lighter every second, although there is no need to worry about the hydrogen fuel running out any time soon. As Bob Berman writes in Zapped, “given that the Sun has a total mass of two nonillion – that’s the number 2 followed by 27 zeros – tons, its ongoing loss of mass is not noticeable. It’ll be billions of years before any serious consequences ensue.”216
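The 4-million-ton figure follows directly from E = mc²: divide the Sun’s power output by c². A quick check, using the standard textbook value for solar luminosity (an assumption, not a figure from the text):

luminosity = 3.8e26              # solar power output in watts (J/s)
c = 3.0e8                        # speed of light in m/s
mass_loss_kg_per_s = luminosity / c**2
print(round(mass_loss_kg_per_s / 1000 / 1e6, 1), "million tonnes per second")  # ~4.2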

Note that at such high temperatures, atoms exist only in a plasma state, the so-called fourth state of matter (solid → liquid → gas → plasma), where bound electrons have been stripped from their atoms to form a soup of commingling, oppositely charged, ionized nuclei and electrons, that is, a highly electrically conductive charged gas or plasma. To get an idea of how hot the universe is, 99.9% of all matter is plasma, more than 90% of it hydrogen (90% of the stars one sees in the night sky burn hydrogen). Containing such a hot plasma at temperatures greater than the Sun and extracting useful energy is the fusion holy grail.

The nuclear “binding curve” shows us which elements fuse, which fission, and the energy released in the process. Because of how nucleons vie for stable atomic space, elements below iron (Fe, Z = 26) can fuse if the material temperature and density are high enough, while elements above iron can fission when split by a neutron (as we saw earlier). Iron has the highest nuclear binding energy in the periodic table and is thus the most stable element. The idea is that we get fusion going left to right in the periodic table for elements lighter than iron because the mass of the fused sum is less than the mass of the initial parts, while we get fission going right to left for elements heavier than iron because the mass of the fission products is less than the mass of the original atom. The reaction energy is generated from the missing mass in both cases. In nuclear physics, the sum of the parts does not equal the whole without accounting for the stored nuclear energy (Figure 3.8).

Figure 3.8 Nuclear fusion. (a) The nuclear binding curve and (b) deuterium–tritium (DT) fusion reaction.

One can see how enticing fusion is from the binding curve: we get about 200 MeV when uranium splits into krypton and barium via fission (n + U → Kr + Ba + 3n, with about 170 MeV released promptly), roughly 1 MeV per nucleon, but proportionally much more when we fuse deuterium and tritium into helium (D + T → He + n + 17.6 MeV), about 3.5 MeV per nucleon. The steeper slope of the binding curve on the fusion side indicates more released energy per nucleon.

Unfortunately, fusion is not easy here on Earth. Without the Sun’s gravity to help break through the repulsive barrier between the reacting nuclei – an enormous force at a distance of 10⁻¹⁵ m (0.000000000000001 m!) – we have to heat a DT plasma to at least 100 million degrees, that is, six times hotter than the solar core, to speed up the nuclear projectiles and help them fuse. That we can do with an electric current (resistive heating), radio waves (microwave heating), or a particle beam. But without the Sun’s massive gravity to confine the reactions, we also have to devise a way to hold the plasma without burning down the house, for example, via a large magnetic field. A magnetic field also compresses the plasma, increasing the density and the probability of fusion.
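To get a feel for the numbers, here is a rough comparison of the Coulomb barrier between two hydrogen nuclei and the average thermal energy at fusion temperatures (standard physical constants, not figures from the text; in reality, quantum tunneling and the high-energy tail of the plasma let fusion proceed well below the naive barrier):

k = 8.99e9        # Coulomb constant, N*m^2/C^2
e = 1.602e-19     # elementary charge, C
kB = 1.381e-23    # Boltzmann constant, J/K

r = 1e-15                              # separation where the strong force takes over, m
barrier_keV = k * e**2 / r / e / 1000  # Coulomb potential energy at r, ~1,440 keV
thermal_keV = kB * 1e8 / e / 1000      # average thermal energy at 100 million K, ~8.6 keV
print(round(barrier_keV), "keV barrier vs", round(thermal_keV, 1), "keV thermal")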

A particle accelerator works by shaping the path of two atomic particles using a series of bending magnets and smashing them into each other to see what comes out, such as the elusive Higgs boson (the so-called God particle). In the famous 2012 experiment at CERN’s Large Hadron Collider (LHC), the world’s largest particle accelerator, two beams of protons were accelerated to just under the speed of light by a massive array of superconducting electromagnets along a 27-km-long circular tunnel and then smashed together. In one of the most fascinating (and expensive!) experiments ever, the magnetic field of the LHC bent the whizzing protons into two almost-circular beams to smash them head on.

But keeping a whole reactor full of hot protons and electrons stable – particles that want to go this way and that while hovering inside a large and fluctuating magnetic field – is a whole other game. Heated ions naturally want to recombine, cool down, and fall to the ground, so the hydrogen plasma must be kept in circular motion, shaped in mid-air to keep it from hitting the walls. At temperatures approaching 100 million degrees, one might as well try to hold a falling star without touching it.

***

In the 1950s, a hot hydrogen plasma was generated in the Zero Energy Thermonuclear Assembly (ZETA) at the Atomic Energy Research Establishment at Harwell, near Didcot in the UK, the first major attempt to produce large-scale hydrogen fusion, overseen again by the original atom smasher John Cockcroft. To maintain the circulating plasma, two parallel currents were “pinched” together via electromagnetic attraction (known as a “z-pinch”), but this led to large instabilities. Nonetheless, a large hydrogen plasma – heated by an applied electric current – had been produced for the first time, albeit lasting only about 1 millisecond. ZETA was initially declared the world’s first “artificial Sun,” but calculations soon showed that the 5-million-degree plasma was not nearly hot enough to generate fusion.

Other designs soon pushed the fusion frontiers, such as the stellarator, the Model A first built in Princeton in 1953 by American astrophysicist Lyman Spitzer, the father of US fusion research. A stellarator twists the plasma in a figure-of-eight as it circulates to even out the magnetic field and keep the hydrogen nuclei and electrons stable while in motion, but Spitzer’s design was discarded as too small, the plasma drifting too quickly to the walls. Despite the lack of success, however, the z-pinch and stellarator both helped establish the fundamental principles of terrestrial fusion, and showed that a bigger box would be needed to keep a plasma stable while whirling around in an earthly cage.

Figure 3.9 Magnetic confined fusion reactor schematics: (a) tokamak and (b) stellarator

(source: Xu, Y., “A general comparison between tokamak and stellarator plasmas,” Matter and Radiation at Extremes 1: 192, 2016. https://doi.org/10.1016/j.mre.2016.07.001. CC BY-NC-ND 4.0).

Around the same time, another fusion device was built in Russia by physicists Igor Tamm and Andrei Sakharov, employing 2 magnetic fields to confine the plasma instead of a single field as in a twisty stellarator. Called a tokamak – a Russian acronym for “toroidal chamber magnetic coils” – the details were initially unknown to the West because of Cold War secrecy, but the results eventually showed a workable design (Sakharov was the Soviet bomb developer turned human-rights and peace activist who won the 1975 Nobel Peace Prize).

In a tokamak, the plasma moves in a helical path that wraps around itself in a donut-shaped chamber. An infinitely long straight tunnel is theoretically the best design, but is obviously impossible, so the cylindrical container is bent into a torus through which the corkscrewing plasma can continuously flow, essentially a tube without ends (in mathematical parlance a donut is a torus).

The two magnetic fields in a tokamak act in perpendicular directions, called toroidal (Bϕ) and poloidal (Bθ), where the toroidal plasma current creates the compressive poloidal field. The aspect ratio (R/a) is typically around 4, where R is the donut’s radial size (major radius) and a its thickness (minor radius). The first tokamak, T1, built in 1958 at Moscow’s famed Kurchatov Institute, had a major radius R of 0.67 meters and a minor radius a of 0.17 meters, producing a toroidal field Bϕ of 1.5 tesla. In 1968, a larger tokamak, T3 (R = 2 m, a = 0.4 m), heated a millisecond plasma to 10 million degrees, jump-starting international research on the Russian tokamak design.
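A rough geometric sketch of these machine parameters (the torus-volume formula assumes a circular plasma cross-section, which underestimates machines with elongated, D-shaped plasmas; the R and a values are those quoted for T1 and T3 here and for ITER below):

import math

def aspect_ratio(R, a):
    return R / a

def torus_volume(R, a):
    # Volume of a torus with circular cross-section: V = 2 * pi^2 * R * a^2
    return 2 * math.pi**2 * R * a**2

for name, R, a in [("T1", 0.67, 0.17), ("T3", 2.0, 0.4), ("ITER", 6.2, 2.0)]:
    print(name, round(aspect_ratio(R, a), 1), round(torus_volume(R, a), 1), "m^3")
# T1 3.9, 0.4 m^3; T3 5.0, 6.3 m^3; ITER 3.1, ~490 m^3 (the quoted 840 m^3
# reflects ITER's taller, D-shaped plasma cross-section)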

But despite the success of the early tokamaks, keeping the plasma flow stable was still a deal breaker as the charged particles drifted to the walls since the bunched-up magnetic field lines in a torus-shaped device are stronger at the core (the donut hole) and weaker at the edge. Bigger plasma “bottles” were needed to maintain stability – the bigger the better to minimize bending and drift, which of course adds to the expense. What’s more, since confinement is short-lived because of fluid instabilities, a fusion reactor is pulsed; that is, we evacuate the air (no heat transfer in a vacuum), turn on the magnets, introduce hydrogen into the chamber (typically a 50–50 DT mix) that forms a plasma after the applied current heats the fuel, which then ignites the fusion reaction when temperatures exceed 100 million K. When the plasma becomes unstable, we do it all over again, while if anything goes wrong the system immediately shuts itself down.

Located just south of Oxford at an old air force base in Culham, the Joint European Torus (JET) has been running since 1984, and was the first fusion device to sustain a plasma for almost 2 seconds before loss of confinement. In 1997, JET also recorded the most powerful plasma ever, generating 16 MW of fusion power from an initial heating of 24 MW, alas a net loss because the out/in ratio is less than one (Q = Pout/Pin = 16 MW / 24 MW = 0.67). Highlighting the difficulty of making fusion work, no heat is extracted from JET – we’re still figuring out how to keep things stable and make a plasma last.

I was fortunate to see how JET works as part of an Oxford plasma program. On a tour of the facility, our group was told that before starting a run, the operators call the local utility company as a courtesy, knowing that the initial power draw could overwhelm the grid. A JET experiment can hog more than one-sixth of UK electrical power at the start, and although a run typically lasts only a few seconds the spike is too much to handle. The almost instantaneous 24-MW draw needed to start the process shows how hard it is to mimic the Sun and get the action going. Occasionally, we were told, the operators were asked to wait until Coronation Street, the long-running British soap, was over and everyone had switched off their kettles. By comparison, the Manhattan Project took 20% of the US grid to run the Oak Ridge enrichment plant, an extraordinary technological and logistics challenge.

As with most large-scale science research, progress is incremental, where more is learned with each new design and iteration. The latest and greatest fusion device is the International Tokamak Experimental Reactor (ITER), located in the Cadarache research center in the south of France, 100 kilometers due east of the city of Arles, where Vincent van Gogh painted some of his most memorable works as he captured the vibrancy in his pastoral landscapes on canvas in his own whirling dynamics.

ITER (pronounced “eater”) hopes to generate “first plasma” by 2025 after almost 20 years of planning and construction, although nothing is ever easy with fusion or goes to plan – the project is more than 10 years behind schedule and three times over budget. At roughly the size of a football field including all adjunct buildings, the biggest fusion device ever will produce the highest yield yet, while the reaction chamber itself will contain a plasma volume of 840 cubic meters – ten times larger than JET – with an outer chamber wall diameter of 16.4 meters and an inner donut wall diameter of 4 meters (R = 6.2 m, a = 2.0 m).

As one might expect, development costs are too much for one research group, and thus international cooperation is needed. The seven ITER Agreement members – Europe, China, India, Japan, Korea, Russia, and the United States, each represented by a “domestic agency” – share costs and intellectual property, bringing together scientists and engineers from around the world. At roughly $20 billion and counting, some have called ITER the most expensive experiment ever, more than the $13 billion spent by CERN to build and run the LHC that successfully detected the Higgs boson by colliding protons at super-high speeds.

As in the name, ITER is a tokamak design (previously the T was for “thermonuclear” but was changed to “tokamak” to sound safer), while “experimental” means we’re still working out the kinks, that is, a proof of concept to show how to contain a fusion reaction for long enough to get more energy out than we put in, known as the “net energy” or Q factor, where the total power generated (Pout) is greater than the thermal power injected into the chamber to heat the DT fuel to a plasma (Pin) and initiate fusion. ITER also means “journey” or “route” in Latin, highlighting the many steps to reach the end. The project was considered half-completed at the end of 2018, and the bill could be a whopping $40 billion by the time of first plasma.

The goal of JET was to show that a “burning plasma” could be stably contained for longer periods, while the goal of ITER is to produce a Q of 10, generating 500 MW of fusion power from 50 MW of initial heating power. In a burning plasma, the alpha-particle reaction products (D + T → α + n) become the main source of heating, thus producing further fusion reactions. ITER is not designed to generate electricity, but to demonstrate that net energy can be achieved, thus preparing the way for the next design that will extract GW-scale electrical output in a working fusion reactor. No one should ever underestimate the challenges of bottling the Sun.
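A minimal sketch of the gain factor Q using the figures quoted in the text for JET’s 1997 record and ITER’s target (note that Q counts only the heating power delivered to the plasma, not the wall-plug electricity used to run the machine):

def q_factor(p_fusion_out_mw, p_heating_in_mw):
    """Fusion gain: fusion power released / external heating power injected."""
    return p_fusion_out_mw / p_heating_in_mw

print(round(q_factor(16, 24), 2))   # JET 1997: 0.67, a net loss
print(q_factor(500, 50))            # ITER target: 10.0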

In 2035, 10 years after the projected first plasma, ITER is expected to demonstrate how to sustain a DT reaction by internal heating (thus greatly reducing the input energy) as well as how to breed tritium, essential because tritium is expensive, short-lived (t½ = 12 years), and available only in limited supply (DT operation has since been delayed until the 2040s). At about $30,000 per gram, the only current source is the by-product tritium regularly removed (detritiated) from the heavy-water moderator of CANDU reactors. If all goes well, the next iteration after ITER will be a working fusion reactor, already in the early planning stages and called DEMO, where high-speed neutron reaction products collide with a blanket-lined reactor wall, transferring kinetic energy to a circulating fluid to generate heat (as in a conventional power station). Originally planned to generate 500 MW, DEMO has been scaled back to 200 MW.

ITER is by far the biggest and most complex experiment ever devised and has seen many setbacks since breaking ground in 2008. Just to create the magnets for the tokamak chamber, 100,000 km of niobium-tin superconducting strands of metal were manufactured by nine suppliers over seven years (the agency agreement requires duplicate member production).217 Built on top of a cryogenic base that is liquid-helium cooled to a temperature just above absolute zero (-269°C or 4.15 K), the largest electromagnets ever made will create the magnetic fields needed to keep the super-heated plasma moving on its merry curved helical way. The stronger the magnetic field the better – by doubling the magnetic field, the plasma volume needed to produce an equivalent power is reduced by a factor of 16.
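The factor of 16 reflects the standard scaling of fusion power density with magnetic field strength, roughly the fourth power of B at a fixed plasma pressure ratio (beta) – a textbook approximation rather than a figure from the source – so the plasma volume needed for a given power falls as 1/B⁴:

def relative_volume(field_multiplier):
    # Fusion power density ~ pressure^2 ~ (beta * B^2)^2 ~ B^4 at fixed beta,
    # so the volume required for the same total power scales as 1 / B^4.
    return 1 / field_multiplier**4

print(relative_volume(2))     # doubling B -> 1/16 of the plasma volume
print(relative_volume(1.5))   # a 50% stronger field -> ~1/5 of the volume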

In the meantime, about 30 fusion projects are up and running around the world, all of various shapes and sizes (such as a spherical tokamak that has less surface area and doesn’t need as large a magnetic field), while the stellarator has made a comeback, including the Large Helical Device (LHD) in Japan and the Wendelstein 7-X (a.k.a. “star in a jar”) in Greifswald, Germany, each hoping to find the right geometry and dimensions to imitate the Sun, or at least add to our understanding of controlled terrestrial fusion.

W7-X is the largest stellarator ever constructed, designed with the aid of advanced computer simulations that worked out the precise shape of its 2 million component parts to create a mostly toroidal helical coil in an extraordinarily futuristic-looking, twisty construction. As in all magnetically confined plasmas, the goal is to minimize drift by keeping the plasma suspended inside the walls, ever hovering without touching. W7-X’s first hydrogen plasma was achieved in 2016, measuring 80 million degrees and lasting a quarter of a second. Flipping the switch, the then German chancellor and PhD scientist Angela Merkel noted, “When we look at nuclear fusion, we realize how much time and effort is needed in basic research. In addition to knowledge, a good deal of stamina, creativity, and audacity is required.”218 A 16-month upgrade completed in 2017 produced a world-record plasma discharge lasting over 100 seconds, with further plans to generate continuous power in a 30-minute plasma.

In December 2021, the Chinese HT-7U Experimental Advanced Superconducting Tokamak (EAST) produced the longest hot plasma yet, a record 70 million degrees for 1,056 seconds (17 minutes, 36 seconds), adding to its record plasma temperature milestone earlier the same year of 120 million degrees for 100 seconds. EAST had eclipsed the previous record of 100 million degrees for 30 seconds set by the Korea Superconducting Tokamak Advanced Research (KSTAR) device. In December 2021, JET also generated a record 59 MJ of heat in a 5-second fusion burst, the maximum possible using copper magnets and more than twice its 1997 best.219 The first fusion tests in over 2 decades at JET – now billed as “little ITER” to better simulate the ITER setup – significantly lengthened plasma lifetimes, previously a few milliseconds, by replacing the hydrogen-absorbing graphite lining of the reactor chamber with tungsten and beryllium. The redesign at JET is helping to confirm the more advanced goals of ITER.

***

Not all fusion projects employ magnetic confinement fusion (MCF). Others generate a controlled thermonuclear reaction (CTR) via confinement on a small scale by laser ignition, where opposing laser beams compress a DT fuel pellet to such a small size that the crunching breaks through the repulsive barrier of the hydrogen atoms to trigger fusion, known as “inertial confinement fusion” (ICF). Today, most major fusion research is either by magnetic containment (tokamak or stellarator) or inertial confinement (for example, laser ignition).

In 1960, only days after Ted Maiman demonstrated the world’s first laser – the “ruby” laser emitting red light at 694.3 nm – a physicist at Lawrence Livermore National Laboratory (LLNL) imagined how a laser system could be designed to compress hydrogen atoms and generate the same high temperature and density needed for fusion (~10⁸ K, ~1,000 times solid density). LLNL’s John Nuckolls published his ideas in a 1972 Nature article entitled “Laser Compression of Matter to Super-High Densities: Thermonuclear (CTR) Applications,” noting that “Hydrogen may be compressed to more than 10,000 times liquid density by an implosion system energized by a high energy laser. This scheme makes possible efficient thermonuclear burn of small pellets of heavy hydrogen isotopes, and makes feasible fusion power reactors using practical lasers.”220 The article had been delayed for 12 years because of national security restrictions.

In ICF, symmetrical implosion of a spherical DT fuel pellet is created by splitting a laser pulse into a number of opposing beams, a.k.a. “laser pistons,” the many-sided impact – called a “shot” – applied uniformly to the target, both spatially and temporally. For example, 6 beams are sufficient to produce target compression – 2 along each of the 3 Cartesian axes acting in opposite directions – although the more the merrier. The x-ray-driven implosion occurs as the equal-and-opposite reaction to the pellet surface vaporizing outward, igniting the fusion reaction.

The eventual goal is to exploit the process in a working fusion reactor by continuously dropping DT pellets via gravity, each irradiated by the high-energy opposing beams. Most of the energy will come from hot reaction neutrons absorbed in a lithium blanket, where cooling pipes transfer the heat as in a conventional power plant, or the energy is converted directly to electricity in a magnetic field. The expensive tritium can also be recovered in a lithium blanket, or cheaper DD fuel can be used with a more energetic laser pulse, although a higher ignition temperature is needed to overcome the higher repulsive barrier.

On the same site as Nuckolls’s Livermore lab, the $3.5 billion National Ignition Facility (NIF) was completed in 2009 to harness inertial-confined fusion energy. Boasting the world’s largest and most powerful laser ever built, NIF can deliver a 1.9-MJ shot (at 351 nm in the UV) from 192 beams for a total deliverable power of 500 TW – 60 times more than global grid capacity, although only for a fraction of a second. To get your head around the enormity of a laser-ignition facility, the setup occupies roughly 16,000 m² (3 football fields) and employs 38,000 optical units (mirrors, beam splitters, and lenses) that make and shape the 192 beams to produce the shot. Although the shot lasts only a few nanoseconds (10⁻⁹ s, or 1 billionth of a second), NIF is 500 times more powerful than any other laser and delivers more power than “all the sunlight falling on the Earth.”221
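The 500-TW figure is simply energy divided by time. A rough check, assuming a nominal few-nanosecond pulse (the real pulse shape is more complicated, so this is an order-of-magnitude sketch):

energy_J = 1.9e6       # 1.9 MJ of laser energy delivered on target
pulse_s = 3.8e-9       # assumed ~4-nanosecond pulse duration
power_TW = energy_J / pulse_s / 1e12
print(power_TW, "TW")  # ~500 TW, but only for a few billionths of a second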

To produce an even compression, the 192 beam lines are directed onto a cylindrical gold container about the size of a pencil eraser known as a hohlraum (German for “cavity”), generating x-rays that create uniform shock waves at more than 10 million atmospheres, imploding the peppercorn-sized pellet that fuses the deuterium and tritium fuel inside. The goal is to compress the DT target, initiate fusion, and extract the energy before everything is obliterated, achieving a self-sustaining reaction or “ignition” in fusion parlance. As noted by Mike Dunne, director of the UK Central Laser Facility and former fusion director at NIF, “Getting it right requires a lot of effort; for example, the target chamber is under vacuum to allow the lasers to be focused down to spots just 1 mm in diameter, and the fuel pellet itself has to be extremely round and smooth, as any imperfection is exponentially amplified in the course of the implosion.”222

Figure 3.10 Symmetric compression of a spherical DT fuel pellet for an inertial confined fusion experiment at NIF

(source: National Ignition Facility).

Incomplete implosion is a fusion no-no: the slightest surface imperfection in the target stops uniform compression, as with the mangled grapefruits at Los Alamos. An asymmetric implosion reduces the percentage of kinetic energy converted to heat – at low laser intensities we get hydrodynamic instability because of surface roughness, while at high intensities plasma oscillations retard laser absorption. As such, the conditions for successful ignition were not initially achieved (only one-third of the necessary compression), and NIF was mostly used instead for material studies, astrophysics analysis, and stockpile security for advanced weapons testing (required since the 1996 Comprehensive Nuclear Test Ban Treaty), reducing overall “beam time” for fusion research from 60% to under 30%.

After 3,000 shots, the target and laser pulse shape were redesigned in 2020 and the imaging diagnostics improved to better view the compression, allowing researchers to close in on a burning plasma, the interim self-heating stage essential for ignition and eventual “runaway energy gain.”223 While 100 kJ is needed to achieve a burning plasma, the initial yields were below 60 kJ. On August 8, 2021, however, a self-sustaining output energy of 1.35 MJ was achieved (70% of the 1.9-MJ input laser energy), 8 times more than earlier results, producing NIF’s first artificial star, albeit only for 1 trillionth of a second.

Different hohlraum shapes were then investigated to improve the laser focus, including a double-walled capsule to trap and transfer x-ray energy more efficiently and a foam-soaked fuel pellet to produce a higher-temperature central hot spot. Alas, the breakthrough ICF results could not be repeated in subsequent experiments because of difficulties at the point of ignition, casting doubts about the design and the “inability to understand, engineer and predict experiments at these energies with precision.”224

Some questioned the inherent limitations of the ICF design because of target inconsistencies and wanted to see a complete overhaul of the setup to concentrate on the next-generation laser on the “road to ignition.” Others thought the knowledge learned at NIF was essential to achieving eventual ignition, requiring only more time and money. Despite the concerns, NIF finally achieved “break-even” in December 2022 with a Q of about 1.5 (3 MJ out from the 1.9-MJ laser shot), although the shot still took 300 MJ of wall-plug electricity to generate, a feat repeated with greater gain six months later.
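Why the break-even claim needs an asterisk: the gain is measured against the laser energy delivered to the target, not the electricity drawn to fire the laser. A quick sketch using the figures in the text:

fusion_out_MJ = 3.0        # fusion energy released
laser_on_target_MJ = 1.9   # laser energy delivered to the target
wall_plug_MJ = 300         # electricity needed to fire the laser

target_gain = fusion_out_MJ / laser_on_target_MJ   # ~1.6: "ignition" achieved
plant_gain = fusion_out_MJ / wall_plug_MJ          # 0.01: far from net electricity
print(round(target_gain, 2), round(plant_gain, 2))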

Even with successful ignition and a fudged Q, the current laser repetition rate is insufficient – 10 times per second is needed for GW-power station output – NIF firing only about once an hour. Each hohlraum also costs over one million dollars and so a redesign is in order to make a working system. As is typical in large research projects, funding is the rate-determining step, especially when the goal is to build and contain a star. As such, ICF is being explored in a number of other novel projects, including HiPER (High Power laser Energy Research) to study “fast ignition” with smaller and thus much cheaper lasers and ELI (Extreme Light Infrastructure), both working to improve implosion and increase the repetition rates needed for power-plant scale output.

Chirped pulse amplification (CPA) – a laser amplification technique for which Gérard Mourou and Donna Strickland won the 2018 Nobel Prize in Physics – is being tried on a hydrogen–boron mixture (HB11 or p-B11) in a cylindrical target that fuses via directed motion in an axial “burn wave.” The p-B11 fuel ignites at much higher temperatures, on the order of 1 billion degrees or 10 times that of DT fusion, where the magnetic bottle is created by the spinning plasma. A prototype power plant could reportedly be built in a decade for under $100 million.225

An Oxford University spinoff, First Light Fusion, is also investigating smaller-scale “projectile” ICF, in which a 200,000-volt, 14 million amp, 500-nanosecond electromagnetic discharge pulse shoots a high-velocity slab at a gas-filled DT target. Rather than a perfectly symmetrical target implosion, which is difficult to achieve, First Light’s Machine 3 device generates an intense shock wave that induces asymmetrical collapse of the fusion-fuel target. At roughly £4 million, the pulsed projectile device costs a fraction of the $3.5 billion NIF laser, while producing a shot equivalent to 500 simultaneous lightning strikes.226

Researchers at MIT and the spinoff company Commonwealth Fusion Systems (CFS) in Cambridge, Massachusetts, are also developing improved superconducting magnets to dramatically decrease the tokamak chamber size – to less than 2% of ITER’s volume in their SPARC device – hoping to generate 200 MW in 10-second pulses at a Q ratio of 10 or more. To keep a lid on material costs, cheaper liquid nitrogen (77 K) is used instead of liquid helium (4.15 K), the high-temperature superconductors producing larger magnetic fields in a smaller chamber volume.227 Smaller, less-expensive designs can help work out the mysteries of terrestrial solar power without getting bogged down by the overwhelming costs of a large-scale device. Comparing nascent fusion development to the proliferation of independent space-flight research after NASA ended the shuttle program, Bob Mumgaard, chief executive of CFS, noted, “Companies are starting to build things at the level of what governments can build.”228 SPARC could become the first tokamak device to produce net energy, while Tokamak Energy has also made gains with its small spherical tokamak (ST25).

Slow to finance a national fusion program beyond basic research (such as at NIF), the United States announced a plan in 2021 to fund a prototype fusion power plant on a much smaller scale than ITER, hopefully starting within 2 decades “to move forward with fusion on a time scale that can impact climate change.”229 As always, budget constraints present a major obstacle. The US Department of Energy’s Fusion Energy Sciences annual budget is a minuscule $671 million, more than one-third of which already goes to ITER.

Fusion is certainly not cheap, but another novel design is reducing costs by merging magnetic and inertial confinement to produce magnetized target fusion (MTF). A mix between magnetic containment and inertial compression, MTF heats and compresses the fusion chamber, with the hot reaction neutrons absorbed by a liquid metal to produce power. The confined plasma is compressed and allowed to expand, a process repeated continuously every second, generating fusion heat during the compression stage.

MTF was initially proposed at the US Naval Research Laboratory in the 1970s as a “compromise between the energy-intensive high magnetic fields needed to confine a tokamak plasma, and the energy-intensive shock waves, lasers or other methods used to rapidly compress plasma in inertial-confinement designs.” The Canadian company General Fusion (GF), building a demonstration plant at Culham, Oxfordshire, near the original JET research site, hopes to have it running by 2025 and to “power homes, businesses and industry with clean, reliable and affordable fusion energy by the early 2030s.”230 One should be wary of the claims of private companies keen to satisfy short-term investment goals, and of the perpetually bold predictions of limitless fusion in our lifetime, but it is still exciting to imagine the possibilities.

***

Fusion also sees a fair share of outside-the-box thinking, creativity as important as science when it comes to inventing. Even simple ideas can help, such as a sand pile informing researchers about criticality in a fusion containment chamber. For example, by continuously dropping sand grains on a flat surface, one sees the pile rise with each drop, building up in regular fashion despite occasional grain clusters sliding down the sides, until all of a sudden the whole pile fails. Learning more about how a sand pile fails can help us understand more about how criticality is lost inside a fusion reactor that was stable up to the last moment.

Another “table-top” fusion device called a “fusor” has been employed in a number of university labs to create a working neutron source. One such device was built by Taylor Wilson at the University of Nevada, Reno, costing $100,000, orders of magnitude less expensive than the $3.5 billion NIF, $13 billion JET, or $20 billion ITER (so far). A fusor “shoots” positively charged deuterium ions at each other from the inside wall of a spherical chamber that are attracted at high speeds to a central, negatively charged, golf-ball-size tungsten/tantalum grid, igniting a fusion reaction when the ions collide (some collide, some miss, some come back for another pass), seen in the tell-tale, blue-white color of the generated plasma. All of this is done under a vacuum comparable to interstellar space, that is, 10²⁵ times less dense than on Earth, and requires a highly pure, ultra-thin meshed grid.

Although not designed to generate electricity, the fusor advances our understanding of practical terrestrial fusion and can be used to make affordable radioisotopes if the resultant neutron output is sufficiently focused. Short-lived artificial radioisotopes for nuclear medicine are currently made off site and transported to hospitals, a time-constrained delivery process that would benefit from a cheaper, readily available, onsite device. What’s more, the fusion process can be analyzed at a fraction of the cost of Big Science research. A fusor is almost child’s play – Taylor Wilson was only 14 years old when he built one at the University of Nevada – raising the question of why more money isn’t spent on Small Science.

In The Boy Who Played with Fusion, the extraordinary story about Taylor’s nuclear obsession, which includes building his first fusion reactor in his parents’ Arkansas garage before graduating high school, Tom Clynes notes that “While the amateurs’ experiments aren’t nearly as advanced as those done in multibillion-dollar facilities, it’s conceivable that developers of these homebrew reactors could play a vital role in moving fusion forward, as citizen scientists have in other realms.”231 Taylor would go on to wow the science community at the Intel International Science and Engineering Fair, further explaining in a few TED talks how fusion can help radiotherapy and bomb detection.

We should also mention cold fusion, although one shouldn’t be misled by the hype. “Cold fusion” continues to be studied in various research labs, despite being ignominiously slammed after electrochemists Stanley Pons and Martin Fleischmann claimed to have solved the seemingly unsolvable in 1989. Three decades after their famously ill-advised press conference at the University of Utah, which was hurried to get the news out before another competing group could steal their presumed glory, cold fusion is still spoken of in whispered terms. Of course, cold fusion is not a fusion process, but rather a chemically assisted or low-energy nuclear reaction (CANR or LENR), doomed by its misleading moniker.

Some form of reaction clearly occurs when an electric current is passed through a jug of heavy water (D2O) containing a palladium and a platinum electrode in a typical CANR/LENR experiment – liberating more heat than chemically expected – but what exactly is going on is uncertain. The results have been hard to replicate consistently, the cornerstone of science, thus hampering a full understanding and more funding.

Some believe the status of “cold-fusion” technology is similar to that of semiconductors in the 1950s,232 while others note that the research is valuable but mislabeled. Hal Fox, the editor of New Energy News, was adamant about Pons and Fleischmann’s original discovery:

[T]he discovery of cold fusion, although vigorously attacked (especially by hot fusion lobbyists), marked the beginning of a series of discoveries of low-energy nuclear reactions. Pons and Fleischmann deserve a Nobel Prize. The nuclear reactions are complex and not, as yet, fully explained.233

Nonetheless, even if the science can be worked out and the results replicated (and explained), palladium is scarce and expensive, and thus a working LENR device would be hard to scale up to a usable MW-size unit, although one possibility is to use a catalyst from outside the platinum group. As always, the future awaits with fusion or fusion-like research.

There are many purported “new” energies on the horizon that never quite live up to their billing, touted by those who have big ideas but can’t find a way to share them. Most are of the snake-oil variety or variations on a perpetual-energy device with or without the hidden wires. One company claims to have tapped into a sub-ground-state energy of the hydrogen atom, offering the power of “2,000 Suns in a coffee cup.” Another didn’t know why their supposed over-unity device “worked,” but stated “It absolutely does.” If it looks like a duck, walks like a duck, and quacks like a duck, … .

There are also fusion detractors, who think neutron-induced radioactivity and tritium storage are potential problems, presenting more challenges to ensure the safe operation of a working fusion reactor. Radioactive reactor components, lithium blankets, and plutonium waste will all need to be safeguarded and disposed of, although their collection and storage won’t create anywhere near the problems of long-lasting radioactive fission products. A fusion reactor is also expensive to build, can be repurposed to collect weapons material (Pu-239), and still requires long-distance grid transmission as in any large central thermal power plant model.

Why all the effort for something that may never work and requires a large fortune to build? Despite the extraordinarily high development costs and uncertain results, fusion-generated electricity could run as low as 0.001 cents per kWh, 10,000 times less than the roughly 10 cents per kWh from today’s power plants. If we can get the mega machines to work, the price tag will be worth every penny to run a power station on water.

Indeed, fusion fuel costs are minor compared to fission. Deuterium is fairly easily obtained – D2O exists naturally in about 1 in 7,000 parts of H2O – and the more expensive tritium can be extracted from a heavy-water reactor or, as is hoped, bred on the go in a neutron-multiplying blanket of a next-generation fusion device. A working 1-GW fusion reactor, however, will need about 50 kg of tritium per year – about $1.5 billion per year if breeding proves difficult – another obstacle, as only about 500 grams per year is currently available.234 Machine costs are naturally high for a developing technology, but replacement costs should be reasonable; reactor walls will need to be replaced after becoming radioactive and radiation-degraded by prolonged neutron bombardment.
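The ~50 kg figure can be sanity-checked from the DT energy yield. A rough sketch with standard constants (it ignores capacity factor and the plant’s thermal-to-electric conversion efficiency, so real requirements will differ):

MeV_to_J = 1.602e-13
energy_per_reaction_J = 17.6 * MeV_to_J     # energy released per DT reaction
tritium_atom_kg = 3.016 * 1.661e-27         # mass of one tritium atom (~3 u)

power_W = 1e9                               # 1 GW of continuous fusion power
seconds_per_year = 3.15e7
reactions_per_year = power_W * seconds_per_year / energy_per_reaction_J
print(round(reactions_per_year * tritium_atom_kg), "kg of tritium per year")  # ~56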

No one said bottling a star would be easy, but fusion may well be the answer to all our energy needs and become too cheap to meter … one day – all with few contaminants and water as the only fuel. But many challenges remain, chiefly that, system-wide, more energy is still needed to fuse hydrogen atoms than we get out, a losing venture any way you slice it. Containing the Sun on Earth remains hugely complex and hugely expensive.

There are plenty of reasons to keep trying, such as cheap, clean, and abundant energy, but there is still much to do to contain the heat of a man-made Sun. It seems nothing is as good as the real deal (as we look at now).

Part I Coal, Oil, Nuclear Milestones

Table I.1 Milestones in the age of coal and steam

1776 – James Watt invents the first general-purpose steam engine with separate condenser at the University of Glasgow
1798 – Benjamin Thompson publishes “An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction”
1807 – Outdoor street lighting in London begins with coal gas
1811 – Robert Fulton runs a steamboat up the Mississippi River
1830 – The first official passenger train service runs from Liverpool to Manchester
1838 – SS Sirius makes the first transatlantic crossing faster than sail
1869 – The Union Pacific and Central Pacific railways meet at Promontory Summit, Utah
1879 – Thomas Edison produces a long-lasting incandescent electric light bulb at his lab in Menlo Park, New Jersey
1882 – The first commercial electric power plant starts operating in Pearl Street, Lower Manhattan, powered by Edison’s coal-fired Jumbo dynamos
1884 – The first dynamo turbine is designed by Charles Parsons at Holborn Street, London

Table I.2 Milestones in the age of oil

1859 – “Colonel” Edwin Drake finds oil at 69.5 feet in Titusville, Pennsylvania, after drilling for months
1864 – Nikolaus Otto designs a 4-stroke internal combustion engine run on piped-in coal gas
1882 – John D. Rockefeller forms Standard Oil Trust, a centrally organized “corporation of corporations” to circumvent state laws
1900 – The 35-hp Mercedes racing car is introduced by Wilhelm Maybach
1909 – Model T Fords roll off the assembly line in Dearborn, Michigan
1932 – A former WWI British army quartermaster, Major Frank Holmes, finds oil off the Saudi Arabian coast in Bahrain
1945 – Returning from the Yalta conference, Franklin Roosevelt meets the Saudi Arabian king Ibn Saud aboard the USS Quincy in the Suez Canal
1956 – M. King Hubbert presents his “peak oil” paper at an American Petroleum Institute meeting in San Antonio, Texas
1960 – OPEC is created following discussions between Juan Pablo Pérez and Abdullah Tariki, representatives of Venezuela and Saudi Arabia
1973 / 1979 – Oil prices rocket from the First Oil Shock after an Arab oil embargo and the Second Oil Shock after the toppling of the last shah of Iran
1989 / 2010 – Exxon Valdez spills 11 million gallons of crude into Prince William Sound after hitting a reef. Deepwater Horizon spills 200 million gallons of crude into the Gulf of Mexico after a blowout preventer failure

Table I.3 Milestones in the nuclear age

1903 – Marie Curie shares the Nobel Prize in Physics with Pierre Curie and Henri Becquerel for the discovery and research on spontaneous radiation
1909 – Ernest Rutherford postulates a dense, positively charged, central nucleus after alpha particles fired at gold leaf were deflected back at the source
1932 – James Chadwick discovers the neutron at the Cavendish Lab in Cambridge, roughly equal in mass to a proton but without any charge
1932 – John Cockcroft and Ernest Walton build the first atom smasher, breaking apart lithium atoms with protons
1939 – Lise Meitner calculates the considerable energy released from fission of a U-235 uranium atom bombarded by neutrons (200 MeV)
1939 – Albert Einstein signs a letter to President Roosevelt warning of a possible German nuclear bomb-making program
1942 – Enrico Fermi builds the first nuclear reactor (CP-1) in a squash court at the University of Chicago
1945 – The first atomic bombs are dropped on August 6 on Hiroshima (uranium-gun design) and on August 9 on Nagasaki (plutonium-implosion design)
1953 – President Eisenhower gives his “Atoms for Peace” speech to the UN General Assembly
1955 – The first nuclear-powered submarine, the USS Nautilus, puts to sea under nuclear power
1956 – The first civilian nuclear reactor, Calder Hall, goes online, a 92-MW, graphite-moderated, gas-cooled reactor in northwest England
1957 – The first major nuclear accident occurs at the Mayak Soviet nuclear weapons complex
1979 – Three Mile Island partial meltdown (March 28)
1986 – Chernobyl hydrogen explosion (April 26)
2011 – Fukushima multiple explosions and triple meltdown (March 11)

Table I.4 A few fusion milestones

1905 – The E = mc² equation appears in Einstein’s “Does the Inertia of a Body Depend Upon Its Energy Content?” Annalen der Physik paper
1920 – Arthur Eddington suggests the Sun is a fusion reactor
1950s – ZETA z-pinch tested at Harwell, near Didcot, UK
1958 – First Russian tokamak T1 constructed at the Kurchatov Institute
1960 – Ted Maiman demonstrates the world’s first laser at Hughes Aircraft
1967 – Hans Bethe wins a Nobel Prize for explaining how stars burn
1972 – John Nuckolls publishes ICF fundamentals in Nature
1984 – Joint European Torus (JET) starts up in Culham, Oxfordshire
1991 – Almost 2-second plasma at JET
2008 – ITER breaks ground in Cadarache, France
2009 – First NIF shot breaks laser power record
2016 – First W7-X hydrogen plasma at world’s largest stellarator
2022 – NIF “break-even” with Q of 1.5
2025 – ITER’s planned first plasma
2035 – ITER’s planned DT internal heating
20?? – DEMO up and running, a working fusion reactor
Figure 0

Figure 3.1 Nuclear fission: (a) U-235 fission-product yield versus mass number A (source: England, T. R. and Rider, B. F., “LA-UR-94–3106, ENDF-349, Evaluation and Compilation of Fission Product Yields 1993” (table 7, Set A, Mass Chain Yields, u235t), Los Alamos National Laboratory, October 1994) and (b) fission process started by a captured neutron. One possible reaction yields the fission products krypton (A = 92) and barium (A = 141).


Figure 3.2 Uranium enrichment: (a) uranium ore (source: Geomartin CC BY-SA 3.0), (b) uranium hexafluoride (source: Argonne National Laboratory), and (c) natural uranium to weapons-grade uranium (90% U-235) or reactor-grade uranium (3–5% U-235).
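For a rough feel for the enrichment levels in panel (c), here is a minimal U-235 mass-balance sketch in Python. It is illustrative only: the 0.711% natural abundance and the 0.25% tails assay are typical assumed values, and the calculation ignores the separative work (SWU) that dominates the real cost:

# U-235 mass balance: feed = product + tails and feed*x_f = product*x_p + tails*x_t,
# where x_f, x_p, x_t are the U-235 mass fractions of feed, product, and tails.
def feed_per_kg_product(x_p, x_f=0.00711, x_t=0.0025):
    """Kilograms of natural-uranium feed needed per kilogram of enriched product."""
    return (x_p - x_t) / (x_f - x_t)

print(f"Reactor grade (4% U-235):  {feed_per_kg_product(0.04):.0f} kg of feed per kg")   # ~8 kg
print(f"Weapons grade (90% U-235): {feed_per_kg_product(0.90):.0f} kg of feed per kg")   # ~195 kg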

Figure 3.3 The basics of a pressurized light-water nuclear reactor (PWR).


Table 3.1 Number of nuclear power plants by reactor type, total power output, and fuel

Source: “Nuclear Power Reactors,” World Nuclear Association, May 2023. www.world-nuclear.org/information-library/nuclear-fuel-cycle/nuclear-power-reactors/nuclear-power-reactors.aspx.

Table 3.2 Nuclear power, 2022 (country, number, power, global and national percentage)

Source: “World Nuclear Power Reactors & Uranium Requirements,” World Nuclear Association, August 2023. www.world-nuclear.org/information-library/facts-and-figures/world-nuclear-power-reactors-and-uranium-requireme.aspx.

Figure 3.4 The 14 decay stages of uranium-238 to stable lead-206.


Figure 3.5 Penetrating power of ionizing radiation (α, β, x/γ, n).


Table 3.3 Important radioactive isotopes (radioisotopes)

Source: “Emergency Preparedness and Response: Radioactive Isotopes,” Centers for Disease Control and Prevention, Atlanta, Georgia. https://emergency.cdc.gov/radiation/isotopes/index.asp.

Table 3.4 A survey of some of the most radioactive places on Earth

Source: “The Most Radioactive Places on Earth” [documentary], presented by Derek Muller, Veritasium, December 17, 2014. www.youtube.com/watch?v=TRL7o2kPqw0.

Table 3.5 International Nuclear Event Scale (INES) events rated at least a “serious incident”


Figure 3.6 Nuclear accident exclusion zones: (a) Chernobyl and (b) Fukushima.


Figure 3.7 Average levelized cost of energy (LCOE) from 2009 to 2021 ($/MWh) (source: “Lazard’s Levelized Cost of Energy Analysis 2021,” Lazard, Version 16.0, April 2023. https://www.lazard.com/media/typdgxmm/lazards-lcoeplus-april-2023.pdf).
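Levelized cost of energy is, at its core, a plant’s discounted lifetime costs divided by its discounted lifetime electricity output. The sketch below uses entirely hypothetical inputs (capital cost, operating cost, capacity factor, lifetime, discount rate); it illustrates the formula, not Lazard’s methodology or numbers:

def lcoe(capex, annual_opex, annual_mwh, years, r):
    """Levelized cost of energy ($/MWh): discounted lifetime costs / discounted lifetime MWh."""
    costs = capex + sum(annual_opex / (1 + r) ** t for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + r) ** t for t in range(1, years + 1))
    return costs / energy

# Hypothetical 1,000-MW plant at a 90% capacity factor, 40-year life, 7% discount rate,
# $6 billion to build and $200 million per year to operate and fuel.
annual_mwh = 1000 * 8760 * 0.9                     # about 7.9 million MWh per year
print(round(lcoe(6e9, 2e8, annual_mwh, 40, 0.07)), "$/MWh")   # roughly 80 $/MWh with these inputs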

Figure 3.8 Nuclear fusion. (a) The nuclear binding curve and (b) deuterium–tritium (DT) fusion reaction.
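For reference, the DT reaction shown in panel (b), written out (a textbook result rather than something read from the figure):

\[
{}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \;\rightarrow\; {}^{4}_{2}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV})
\]

The reaction releases about 17.6 MeV in total; roughly four-fifths is carried by the neutron, which is why capturing those neutrons in a surrounding blanket (to raise steam and to breed tritium) is central to fusion power-plant designs.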


Figure 3.9 Magnetic-confinement fusion reactor schematics: (a) tokamak and (b) stellarator (source: Xu, Y., “A general comparison between tokamak and stellarator plasmas,” Matter and Radiation at Extremes 1: 192, 2016. https://doi.org/10.1016/j.mre.2016.07.001. CC BY-NC-ND 4.0).

Figure 3.10 Symmetric compression of a spherical DT fuel pellet for an inertial-confinement fusion experiment at NIF (source: National Ignition Facility).