When human beings are asked to monitor computers, disaster ensues

Discuss

45 Responses to “When human beings are asked to monitor computers, disaster ensues”

  1. justaddh3o says:

    It’s true, topics such as change blindness are commonly investigated in psychology.

    The notion that humans are neurologically wired to stop noticing static displays is supported by blood-flow studies using transcranial Doppler (TCD), which show that cerebral blood flow declines as time on task increases. This is part of the burgeoning field of Neuroergonomics!

  2. kmoser says:

    It’s astounding to me how some of the most highly engineered systems in the world still lack the ability to communicate necessary information to their human operators. Case in point: when the computers on AF447 switched from “normal law” to “alternate law”, they failed to notify the pilots. (Larry Tesler would probably argue that these multiple modes are a bad idea to begin with.) Then, when the computer warned of an impending stall, it failed to suggest a suitable resolution, leaving it up to the pilot to guess whether to push forward or back on the stick (based on the current mode). Rhetorical question: how are these things not obvious to the people designing these systems?

    • Phanatic says:

       “leaving it up to the pilot to guess whether to push forward or back on the stick”

      Guessing to pull back on the stick when you’re in a stall is never the right answer. And the FCS *does* issue warnings when you’re in alternate law; all sorts of lights go on.

      • digi_owl says:

        Well this specific event was going on in the middle of a thunderstorm, and various alerts were already going off.

    • nixiebunny says:

       The description that I read of the computer-pilot interaction in alternate mode indicated that the pilots hadn’t trained for alternate mode. Without that training, they ignored the stall warning because they thought that the aircraft was incapable of stalling.

      Please correct me if I’m wrong.

      • digi_owl says:

         They were also in the middle of a thunderstorm, and the people in the seats were basically two first officers. The main problem, it seems, was that there was no ready indication to anyone else in the cockpit that one of them was holding the stick back, forcing the plane to continually attempt a climb. And the reason he was doing so was that he wanted out of the thunderstorm and believed the aircraft could not stall because of the fly-by-wire system.

      • Rosiemoto says:

        Not training for alternate mode in general doesn’t sound accurate. Not training for that specific scenario sounds very accurate.  

    • digi_owl says:

       There were also some early crashes of the Swedish JAS Gripen fighter plane that were caused by both the fly-by-wire computer and the pilot making correction inputs during near-stall conditions. The end result was basically a series of repeated over-compensations.
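
       That kind of coupled over-correction (pilot-induced oscillation) is easy to reproduce in a toy model. This is just a sketch with made-up gains and a made-up pilot delay, not a model of the Gripen’s actual flight control system:

           # Two controllers (the FCS and a lagging pilot) both correcting the same
           # error. All numbers are invented for illustration.
           def simulate(pilot_gain, fcs_gain, pilot_delay_steps=3, steps=20):
               error = 1.0
               pilot_view = [error] * pilot_delay_steps  # the pilot reacts to stale data
               history = []
               for _ in range(steps):
                   correction = fcs_gain * error + pilot_gain * pilot_view.pop(0)
                   pilot_view.append(error)
                   error -= correction
                   history.append(round(error, 2))
               return history

           print(simulate(pilot_gain=0.9, fcs_gain=0.9))  # overlapping corrections overshoot and keep swinging
           print(simulate(pilot_gain=0.0, fcs_gain=0.9))  # one controller alone settles smoothly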

    • Robert says:

      From what I understand, the pilots were probably overwhelmed by all the damn alerts that were going off. They didn’t know what the hell to do, there were so many.

  3. Rossi says:

    The company of the future will consist of two employees: a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man away from the computers.

    • phisrow says:

      On day N+1, the company of the future will hire Accenture to unplug the keyboards and implement a total ERP solution. They will then rightsize the man and the dog and see their share price rise 18% for the 3rd quarter on strong cost reductions.

      • Ito Kagehisa says:

         This, in turn, will allow humans to access the computers, and the company will cease generating value. In theory, this will result in the turnover of organizations that capitalism requires in order to function properly, but in practice the CEO will purchase government regulations that will enable the company to continue making profits regardless.

  4. Bloo says:

    1. I think we are evolutionarily tuned to ignore constant-level input – after all, survival depends on noticing the threats as they arrive in our environment so we’re wired to notice change.
    2. In my business, mainframe systems administration, we also see that once a system operates at faster-than-human speeds, having a human monitor and react to the system doesn’t work well at all. They either react too slowly, or they get impatient and, rather than wait for the automatic controls to adjust things so that the overall goals are met, they tweak the controls manually, which just screws up the automated adjustments.

    • digi_owl says:

      We are also somewhat poor at noticing various changes. One example was two images of a house shown with a short time between them. The only difference was that the chimney had moved. If they were shown in quick succession, you noticed the chimney move. But insert even a second or two of blank screen between them and it was nearly impossible to notice.

  5. Guest says:

    So, basically, Boeing is a PC and Airbus a Mac?

  6. If it’s not a Boeing, I’m not going!

  7. Phanatic says:

    “But if the pilot is a novice, Airbus’ system almost certainly delivers superior results.”

    That’s not clear at all.  The novice pilot on Air France 447 stalled the airplane and kept his stick pulled all the way back for pretty much the entire 30,000′ fall to the ocean. And why not? When the FCS is in normal mode it won’t let the pilot stall the aircraft. Oh, but since the airspeed data was questionable, they weren’t in normal mode anymore.

    Read the cockpit voice transcripts.  The airplane’s blaring a stall warning at them, and they’re ignoring it because they think the FCS won’t let them stall the aircraft.  That’s not a “superior result,” that’s exactly the sort of thing that lets a novice pilot kill 230 people. 

    • digi_owl says:

      Well it did not help that the side stick had no force feedback or mirroring, so the other pilot could not notice the position of the controls with a glance or touch (unlike on mechanical systems). And I think it was also found that the notification for the mode change was confusingly similar to one of the other alarms going off at the time, thanks to the storm-induced change in apparent airspeed (for reference for other readers, airspeed is measured via a fan: higher speed means more air over the fan and therefore higher rotation. With a storm present, this flow can be disrupted and cause sudden shifts in detected airspeed).

      • Mark Dow says:

        Aircraft use pitot tubes to measure airspeed, not anemometer-type fans.
        http://en.wikipedia.org/wiki/Pitot_tube
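
        For the curious: a pitot-static system derives airspeed from the difference between total (pitot) and static pressure, which is why ice or a blockage in a tube corrupts the reading. A rough sketch of the incompressible approximation (illustrative only; real air data computers do considerably more):

            import math

            RHO_SL = 1.225  # kg/m^3, standard sea-level air density

            def indicated_airspeed(total_pressure_pa, static_pressure_pa):
                # Bernoulli approximation: v = sqrt(2 * (p_t - p_s) / rho)
                dynamic_pressure = max(total_pressure_pa - static_pressure_pa, 0.0)
                return math.sqrt(2.0 * dynamic_pressure / RHO_SL)

            # A blocked or iced-over pitot tube freezes the total pressure, so the
            # computed airspeed stops tracking what the aircraft is actually doing.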

        • digi_owl says:

           Crap, I knew I should have looked it up. Anyway, the point still stands that sudden changes in the air around the plane can play tricks on the instruments.

      • I think Airbus are marketing their aircraft to airlines on the basis that you can spend less on experienced crew because the aircraft is so smart. It’s a race to the bottom, literally.

      • Ultan says:

        If Airbus did not design in mirroring and force feedback, then that seems criminally negligent to me.

      • Stev84 says:

        The problem was also that the stall warning was turned off when the airspeed readings became unreliable or indeterminate, which is the case during a stall! So a high angle of attack caused a stall warning, but it also made the airspeed readings unreliable, turning off the alarm. Then they reduced the angle of attack somewhat, which didn’t end the stall entirely but was enough to make the readings valid again and raise the alarm.

        That created a counter-intuitive situation where pulling back the stick seemingly ended the stall and pushing it forwards caused it. In reality, it only switched the alarm on and off.
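
        Roughly, the gating worked something like this (a much-simplified sketch of the reported behaviour, with invented thresholds, not the actual flight software):

            STALL_AOA_DEG = 10.0        # illustrative stall-warning threshold
            MIN_VALID_SPEED_KT = 60.0   # illustrative validity cutoff for the AOA data

            def stall_warning(angle_of_attack_deg, indicated_airspeed_kt):
                # At very low indicated airspeed the AOA data is treated as invalid,
                # which inhibits the warning even though the aircraft is deeply stalled.
                if indicated_airspeed_kt < MIN_VALID_SPEED_KT:
                    return False
                return angle_of_attack_deg > STALL_AOA_DEG

            print(stall_warning(40.0, 50.0))   # deep stall, invalid speed -> False (warning goes quiet)
            print(stall_warning(15.0, 120.0))  # nose lowered, speed valid -> True (warning comes back)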

    • Wilson Afonso says:

      Agreed. But that kind of reinforces the point Airbus makes: humans are a poor fallback. The problem is that the Airbus system does not quite have “hard” limits as he claims, but it has “less soft” limits – they can still be overridden, but only when the airplane decides that they should (and then pilots need to be aware that it happened – what is slightly negligent in my opinion is that they are not explicitly and very loudly told that this happened). That’s much worse than having a button allowing pilots to choose to override the systems.

  8. Baldhead says:

    According to Airsafe.com there are only slightly more fatal Airbus crashes than those from the Boeing 747 alone. Gonna say that this at least suggests Airbus is doing something right, comparatively.

    • C W says:

      “According to Airsafe.com there are only slightly more fatal Airbus crashes than those from the Boeing 747 alone. Gonna say that this at least suggests Airbus is doing something right, comparatively.”

      Does it?

    • Astovimo says:

      This article is speaking about fly-by-wire systems, which for Boeing means the 777. That website you list reports that the 777 has had 0 fatal crashes. So your conclusion that Airbus’ approach is better is unsupported by available data.

  9. Max Allan says:

    I like the idea at the end (less about monitoring) that if you have an automated system that fixes your mistakes, you never learn that you’re making a mistake.
    I feel the same way when I see learners driving in modern cars with ABS/ASC/etc… You just get used to slamming on the brakes and letting the car do the rest. When you’re on snow, or some other problem makes those driving aids fail to work as expected, your reactions are bollixed.
    I wonder how long before the auto-brake systems that are coming in will start to have a similar effect. 
    “Why did you crash into that big radar-transparent object?”
    “The collision avoidance didn’t stop me and I didn’t know what to do. I just drive at everything else and it stops me.”

    • phisrow says:

      ‘Mistake’ is relative to context: As you note, systems that train you for infallibility but fail excessively are dangerous. However, the whole point of automated systems is to handle parts of the task for you more reliably than you would be expected to.

      If the system can’t actually do that, that’s a design problem; but if it can, then simply trusting the system isn’t a mistake, it’s the correct action. This is most clear-cut in situations where human reflexes simply aren’t fast enough, since those don’t have anecdotal rebuttals available; but at a population level, situations where the automated system is superior to the reasonably expected user skill crop up as well.

  10. pete baston says:

    Hi Cory, where is the link to this fascinating book, Ashwin Parameswaran’s “People Make Poor Monitors for Computers”? Amazon says WTF.

    If you think this is unusual, go look at any of Don Norman’s books on affordances (link below) and why empty coffee cups (turned upside down on controls you should not use in XX maneuver, etc.) and post-it note tabs are the pilot’s friend: http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0465067107/ref=sr_1_1?ie=UTF8&qid=1333907186&sr=8-1

  11. Frederik says:

    You’ll notice airplanes aren’t dropping out of the sky in large quantities. Sure, air crashes still happen and yes, it is a problem, something designers are actually working on, but it isn’t so catastrophic that flying is suddenly unreasonably dangerous. The crashes happen when a combination of factors goes wrong: things that were designed for, where warnings do happen and pilots do take over control, but then there is some extra unexpected variable, some other system failure or minute broken part, that means things still go wrong.
    Thankfully, that’s a rarity.

  12. bcsizemo says:

    I think part of the problem here isn’t the “normal” situations. Computers and automated controls can be built to handle those quite well. The problem comes in those fringe events, where you have multiple inputs all verging on extremes or unknown/unreliable information. It’s easy to see that an experienced pilot may come up with a completely different solution to a situation than the computer, or even a novice pilot, would.

    A good, if completely Hollywood, example is the shuttle landing scene in Space Cowboys… Yes, it’s a movie, but in terms of solving a complex landing situation with chaotic variables, a well-experienced pilot may come up with a workable solution that goes outside certain design conditions to arrive at a safe outcome.

    Frankly, I welcome our computerized transportation overlords, if only for the sheer fact that I will arrive at my destination faster and with my blood pressure not pushing hypertension.

  13. anansi133 says:

    There’s a continuum here someplace, between the kinds of things that shouldn’t need human operators and those that still do. Time was, elevators were considered too dangerous to let the users handle directly; you needed to hire a professional. When Seattle built a light rail, it was decided to include a human operator, despite Vancouver’s example proving that automated trains can work safely.

    If passenger planes seem too risky to go unpiloted, what about cargo planes? And if cargo planes are still too dangerous to automate, then what of the drones we put weapons on?

    I’m still not sure whether I’d rather ride in a car that’s autopiloted or not. For long trips it sure seems like a waste of my attention. I keep thinking of those Amtrak engineers who have to push a lever to prove they’re not asleep at the wheel.

  14. bigorangemachine says:

    I really love user interface design. I think about it all the time, and it’s especially relevant to pilot error. I also watch Discovery’s Mayday, and that show makes it appear that when purely human error is involved it’s most likely due to a physical problem with the pilot, a cultural issue in the cockpit (the co-pilot not speaking up), or some kind of control confusion like the Air France flight.

    In some instances humans taking direct control has saved lives, such as the guy who landed in the Hudson a few years ago.

    There are a lot of issues with software in life-or-death situations. The F-22 Raptor pulled a Y2K on its maiden voyage to Japan: the software didn’t account for the time change at the International Date Line and the whole computer crashed. They would have lost those planes and pilots if they hadn’t happened to be refueling at the time; they landed successfully in Hawaii and the software was patched some time later. Software and computers are only as good as the people who built them and the people using them. It’s a tool!

    Interface design is especially important with emerging technology. The hidden complexity needs to be balanced with the priority complexity. In the Air France situation, the design of the plane’s systems should account for crews being under extreme duress. The more extreme the situation, the smarter the alarms should be. A display (I know pilot displays are too small to do this… for argument’s sake) indicating new errors rather than recurring errors would be useful. It’s a pretty complex problem, because you don’t want pilots flying by a multitude of computer screens either.
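
    Something like this, conceptually; just a toy sketch of the “show new alarms, not recurring ones” idea, since real cockpit alerting (ECAM and the like) is obviously far more involved:

        class NewAlarmDisplay:
            """Surface only alarms that haven't been shown before, so recurring
            alerts don't bury genuinely new conditions."""
            def __init__(self):
                self.seen = set()

            def incoming(self, alarm):
                if alarm in self.seen:
                    return None              # recurring alarm: keep it off the primary display
                self.seen.add(alarm)
                return "NEW: " + alarm

        display = NewAlarmDisplay()
        for alarm in ["STALL", "STALL", "ALTERNATE LAW", "STALL", "OVERSPEED"]:
            message = display.incoming(alarm)
            if message:
                print(message)  # prints STALL, ALTERNATE LAW and OVERSPEED once each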

    I’m not saying Apple should be designing plane cockpits, but designers should approach them with a philosophy that an interface considered within the context of the experience might save lives.

    However, when so many lives are relying on a computer, the person operating the computer has to know everything about it, no matter what that computer is attached to. The biggest problem in the end is complacency. People just get used to everything being normal and everything being OK. If pilots had more weekly scares they’d be sharper at thinking clearly under pressure.

    • In some instances humans taking direct control has saved lives, such as the guy who landed in the Hudson a few years ago.

      By making it possible to land the aircraft in a river you have just made it more difficult for the customer to get a good insurance payout on the plane. Far better to write it off cleanly. Satisfy the performance requirements to the letter so the customer comes back to replace the planes which they crashed.

  15. BonzoDog1 says:

    I have to wonder what the Airbus computer would have done had Capt. Sullenberger just sat back after both engines failed climbing out of LaGuardia.
    My guess is a safe water landing in the Hudson (avoiding bridges and boats) was WAY down the programmers’ list.
    Pilots kill more people than they save, but I’d still rather have them.

  16. Spikeles says:

    Computer control is all well and good until there is a bug, such as the Qantas flight a few years back where the computer couldn’t handle multiple spikes in AOA from one ADIRU that were 1.2 seconds apart, and automatically commanded a pitch-down, causing multiple serious injuries to passengers and crew.

    Still, I’d rather the computer was there than not. Out of 128 million flight hours it’s only happened 3 times; pretty good odds.
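
    As I understand the report, the trigger was a spike filter that masked a bad value for a fixed interval and then trusted whatever came next, so a second spike timed right at the end of the window slipped through. The following is only a sketch of that failure pattern, with invented thresholds; it is not the actual ADIRU or FCPC algorithm:

        HOLD_SECONDS = 1.2   # spikes are masked for this long (the interval from the report)
        SPIKE_DELTA = 30.0   # invented deviation threshold, degrees

        class AoaSpikeFilter:
            # Mask a sudden jump for HOLD_SECONDS, then accept the next sample as genuine.
            def __init__(self, initial_aoa=2.0):
                self.last_good = initial_aoa
                self.hold_until = None

            def filter(self, t, aoa):
                if self.hold_until is not None:
                    if t < self.hold_until:
                        return self.last_good     # still masking the first spike
                    self.hold_until = None        # window expired: trust the next sample
                    self.last_good = aoa
                    return aoa
                if abs(aoa - self.last_good) > SPIKE_DELTA:
                    self.hold_until = t + HOLD_SECONDS
                    return self.last_good         # treat the jump as a transient
                self.last_good = aoa
                return aoa

        f = AoaSpikeFilter()
        print(f.filter(0.0, 50.0))   # first spike is masked, filter returns 2.0
        print(f.filter(0.6, 2.1))    # normal sample inside the window, still 2.0
        print(f.filter(1.2, 50.0))   # second spike exactly as the window expires: 50.0 gets through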

  17. Rosiemoto says:

    I’ve flown the 737 since 1995. I’ve had the pitot-static system fail on the 737 NG twice. Both times were an absolute handful. In one incident a wasp body inside the tube froze at altitude, causing the plane to think it was stalling when it was not. In the other incident a bird impaled itself on the tube, blocking it completely during takeoff. In both incidents it required the non-flying pilot to give the flying pilot extremely good information about what to look at and what to ignore in the now partially crippled airplane. It came down to the ability to communicate very clearly, very effectively, and to tell the other guy in a very loud voice IGNORE THIS, IGNORE THIS, LOOK AT THAT, LOOK AT THAT. I would like to think that I would not have screwed up those same incidents had I been in an Airbus, but that is my ego talking, not my gut.

  18. Vincent Maldia says:

    I remember an incident on National Geographic’s Air Crash Investigation where the pilot pulled the stick one way and the co-pilot pulled it the other way. The Airbus sticks have no force feedback, so if I remember correctly the computer interpreted the opposing control inputs as zero, so nothing happened and the plane crashed.
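
    If memory serves, the two sidestick signals are algebraically summed (with a “DUAL INPUT” callout) rather than one overriding the other, so equal and opposite deflections cancel. A toy illustration of why that surprises people; hypothetical numbers, not Airbus code:

        def combined_sidestick_command(captain, first_officer):
            # Inputs range from -1.0 (full forward) to +1.0 (full back);
            # the sum is clipped back into the normal deflection range.
            total = captain + first_officer
            return max(-1.0, min(1.0, total))

        # One pilot pulling fully back while the other pushes fully forward:
        print(combined_sidestick_command(+1.0, -1.0))  # -> 0.0, no net pitch command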
