When human beings are asked to monitor computers, disaster ensues

Ashwin Parameswaran's "People Make Poor Monitors for Computers" is a fascinating look at (and indictment of) the way we design automation systems with human fallbacks. Our highly automated, highly reliable systems — the avionics in planes, for example — are designed to respond well to every circumstance their designers can imagine, and use human beings as a last line of defense, there to take control when all else fails. But human beings are neurologically wired to stop noticing things that stay the same for a long time. We suck at vigilance. So when complex, stable systems catastrophically fail, so do we. Parameswaran quotes several sources with examples from air wrecks, the financial meltdown, and other circumstances where human beings and computers accidentally conspired to do something stupider than either would have done on its own.

Although both Airbus and Boeing have adopted fly-by-wire technology, there are fundamental differences in their respective approaches. Whereas Boeing's system enforces soft limits that can be overridden at the pilot's discretion, Airbus' fly-by-wire system imposes built-in hard limits that the pilot cannot completely override.

As Simon Calder notes, pilots have raised concerns in the past about Airbus' systems being "overly sophisticated" as opposed to Boeing's "rudimentary but robust" system. But this does not imply that the Airbus approach is inferior. It is instructive to analyse Airbus' response to pilot demands for a manual override switch that allows the pilot to take complete control:

"If we have a button, then the pilot has to be trained on how to use the button, and there are no supporting data on which to base procedures or training…..The hard control limits in the Airbus design provide a consistent "feel" for the aircraft, from the 120-passenger A319 to the 350-passenger A340. That consistency itself builds proficiency and confidence……You don't need engineering test pilot skills to fly this airplane."

David Evans captures the essence of this philosophy: it aims to minimise the "potential for human error, to keep average pilots within the limits of their average training and skills".

It is easy to criticise Airbus' approach, but the hard constraints clearly demand less from the pilot. In the hands of an expert pilot, Boeing's system may outperform. But if the pilot is a novice, Airbus' system almost certainly delivers superior results. Moreover, as I discussed earlier in the post, the transition to an almost fully automated system by itself reduces the probability that the human operator can achieve intuitive expertise. In other words, the transition to near-autonomous systems creates a pool of human operators who appear to commit frequent "irrational" errors, which makes the transition almost impossible to reverse.
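
To make the quoted distinction concrete, here is a minimal, purely illustrative sketch of the two philosophies. It is written in Python with made-up function names and an invented bank-angle limit; real flight-control software looks nothing like this, but the contrast between a command the computer always clamps and a command the pilot can deliberately override is the point Parameswaran is describing.

    # Hypothetical sketch only: invented names and numbers, not real avionics code.
    MAX_BANK_ANGLE_DEG = 67.0  # illustrative envelope limit, not an actual Airbus or Boeing value

    def hard_limit(commanded_bank_deg: float) -> float:
        """Airbus-style hard limit: the computer clamps the command no matter
        what the pilot asks for."""
        return max(-MAX_BANK_ANGLE_DEG, min(MAX_BANK_ANGLE_DEG, commanded_bank_deg))

    def soft_limit(commanded_bank_deg: float, pilot_override: bool = False) -> float:
        """Boeing-style soft limit: the computer resists exceeding the envelope,
        but a deliberate pilot override lets the raw command through."""
        if pilot_override:
            return commanded_bank_deg
        return max(-MAX_BANK_ANGLE_DEG, min(MAX_BANK_ANGLE_DEG, commanded_bank_deg))

    print(hard_limit(80.0))                       # 67.0: clamped, no override exists
    print(soft_limit(80.0))                       # 67.0: clamped by default
    print(soft_limit(80.0, pilot_override=True))  # 80.0: the pilot's command stands

The asymmetry is the whole argument: the soft-limit design keeps the expert pilot's judgment in the loop, while the hard-limit design trades that away for consistent behaviour that demands less of an average pilot.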

People Make Poor Monitors for Computers (Thanks, Patrick!)