Boing Boing 

Fukushima: The first 24 hours

IEEE Spectrum has a big special feature online now about the Fukushima nuclear disaster and its aftereffects. It includes an interactive map showing Fukushima's impact: the evacuation of residents, the contamination of soil, and the contamination of food and water supplies.

It also includes a blow-by-blow account of what happened during the first 24 hours of the disaster. This solid investigative reporting by Eliza Strickland highlights several key points where simple changes could have led to a very different outcome than the one we got.

True, the antinuclear forces will find plenty in the Fukushima saga to bolster their arguments. The interlocked and cascading chain of mishaps seems to be a textbook validation of the "normal accidents" hypothesis developed by Charles Perrow after Three Mile Island. Perrow, a Yale University sociologist, identified the nuclear power plant as the canonical tightly coupled system, in which the occasional catastrophic failure is inevitable.

On the other hand, close study of the disaster's first 24 hours, before the cascade of failures carried reactor 1 beyond any hope of salvation, reveals clear inflection points where minor differences would have prevented events from spiraling out of control. Some of these are astonishingly simple: If the emergency generators had been installed on upper floors rather than in basements, for example, the disaster would have stopped before it began. And if workers had been able to vent gases in reactor 1 sooner, the rest of the plant's destruction might well have been averted.

The world's three major nuclear accidents had very different causes, but they have one important thing in common: In each case, the company or government agency in charge withheld critical information from the public. And in the absence of information, the panicked public began to associate all nuclear power with horror and radiation nightmares. The owner of the Fukushima plant, the Tokyo Electric Power Co. (TEPCO), has only made the situation worse by presenting the Japanese and global public with obfuscations instead of a clear-eyed accounting.

Citing a government investigation, TEPCO has steadfastly refused to make workers available for interviews and is barely answering questions about the accident. By piecing together as best we can the story of what happened during the first 24 hours, when reactor 1 was spiraling toward catastrophe, we hope to facilitate the process of learning-by-disaster.

I'm reading Perrow's Normal Accidents: Living with High-Risk Technologies right now. I'm not very far into it yet, but it will be interesting to contrast the thesis I see him putting together (i.e., you're never going to account for all those simple-in-retrospect things that could have stopped a disaster, and, in fact, trying to fix some of those lapses actually causes others) with Strickland's riveting account of the first day of Fukushima.

Image: Fukushima 1 Nuclear Power Plant_27, a Creative Commons Attribution (2.0) image from hige2's photostream

A new system for studying the effects of climate change

I've talked here before about how difficult it is to attribute any individual climatic catastrophe to climate change, particularly in the short term. Patterns and trends can be linked to a rise in global temperature, which is in turn linked to rising greenhouse gas concentrations in the atmosphere. But a heatwave, or a tornado, or a flood? How can you say which would have happened without a rising global temperature, and which wouldn't?

Some German researchers are trying to make that process a little easier, using a computer model and a whole lot of probability power. They published a paper about this method recently, using their system to estimate an 80% likelihood that the 2010 Russian heatwave was the result of climate change. Wired's Brandon Keim explains how the system works:

The new method, described by Rahmstorf and Potsdam geophysicist Dim Coumou in an Oct. 25 Proceedings of the National Academy of Sciences study, relies on a computational approach called Monte Carlo modeling. Named for that city’s famous casinos, it’s a tool for investigating tricky, probabilistic processes involving both defined and random influences: Make a model, run it enough times, and trends emerge.

“If you roll dice only once, it doesn’t tell you anything about probabilities,” said Rahmstorf. “Roll them 100,000 times, and afterwards I can say, on average, how many times I’ll roll a six.”

Rahmstorf and Coumou's “dice” were a simulation made from a century of average July temperatures in Moscow. These provided a baseline temperature trend. Parameters for random variability came from the extent to which each individual July was warmer or cooler than usual.

After running the simulation 100,000 times, “we could see how many times we got an extreme temperature like the one in 2010,” said Rahmstorf. After that, the researchers ran a simulation that didn’t include the warming trend, then compared the results.

“For every five new records observed in the last few years, one would happen without climate change. An additional four happen with climate change,” said Rahmstorf. “There’s an 80 percent probability” that climate change produced the Russian heat wave.
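
To make the idea concrete, here's a minimal sketch of this kind of Monte Carlo attribution exercise in Python. To be clear, this is not Rahmstorf and Coumou's actual model: the warming trend, the variability, and the record-counting window below are invented for illustration. The point is just the mechanics the quote describes: simulate many centuries of temperatures with and without a trend, count the new records each way, and compare.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (not the paper's): a century of July
    # temperatures built from a linear warming trend plus Gaussian
    # year-to-year variability.
    n_years = 100
    trend = 0.02 * np.arange(n_years)  # assumed warming trend per year
    sigma = 1.0                        # assumed interannual std deviation
    n_runs = 100_000

    def records_in_last_decade(temps):
        """Count years in the final decade that beat all earlier years."""
        count = 0
        running_max = temps[:-10].max()
        for t in temps[-10:]:
            if t > running_max:
                count += 1
                running_max = t
        return count

    def mean_records(with_trend):
        """Average number of new records per simulated century."""
        base = trend if with_trend else np.zeros(n_years)
        total = 0
        for _ in range(n_runs):
            total += records_in_last_decade(base + rng.normal(0.0, sigma, n_years))
        return total / n_runs

    with_warming = mean_records(True)
    without_warming = mean_records(False)

    # If five records occur with warming for every one without it,
    # then 1 - 1/5 = 80% of recent records are attributable to the trend.
    print(f"records per century: {with_warming:.3f} vs {without_warming:.3f}")
    print(f"attributable fraction: {1 - without_warming / with_warming:.0%}")

Whether the ratio comes out near the paper's five-to-one depends entirely on the trend and variability you feed in; the published figure comes from fitting those parameters to Moscow's actual July record.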

Taxonomy of technological risks: when things fail badly


"A Taxonomy of Operational Cyber Security Risks" by CMU's James J. Cebula and Lisa R. Young is a year-old paper that attempts to classify all the ways that technology go wrong, and the vulnerabilities than ensue. Fascinating reading, a great primer on technology and security, and as a bonus, there's a half-dozen science fiction/technothriller plots lurking on every page.
This report presents a taxonomy of operational cyber security risks that attempts to identify and organize the sources of operational cyber security risk into four classes: (1) actions of people, (2) systems and technology failures, (3) failed internal processes, and (4) external events. Each class is broken down into subclasses, which are described by their elements. This report discusses the harmonization of the taxonomy with other risk and security activities, particularly those described by the Federal Information Security Management Act (FISMA), the National Institute of Standards and Technology (NIST) Special Publications, and the CERT Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) method.
A Taxonomy of Operational Cyber Security Risks (PDF)
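
To give a feel for the structure, here's a hypothetical sketch in Python that files incidents under the report's four top-level classes. The class names come straight from the abstract; the Incident type and the example entry are made up for illustration.

    from dataclasses import dataclass

    # The four top-level classes come straight from the report's abstract;
    # the paper's subclasses and their elements are omitted here.
    RISK_CLASSES = (
        "actions of people",
        "systems and technology failures",
        "failed internal processes",
        "external events",
    )

    @dataclass
    class Incident:
        """A hypothetical incident record tagged with one risk class."""
        description: str
        risk_class: str

        def __post_init__(self):
            if self.risk_class not in RISK_CLASSES:
                raise ValueError(f"unknown risk class: {self.risk_class!r}")

    # Usage: every incident must land in exactly one of the four classes.
    outage = Incident("tsunami floods basement-level backup generators",
                      "external events")
    print(outage)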