The precautionary principle comes up a lot when you’re talking about the side effects of technology in the real world. When you don’t have evidence that something is dangerous, but you suspect it might be, you can cite the precautionary principle as a reason to ban or limit that thing. It’s a messy idea, though, and I’m still not sure what to think about it. On the one hand, technology is often available before data on its wide-ranging effects are; “do you use it or not?” is a legitimate question. On the other hand, following the precautionary principle blindly can lead to things like this.

34 Responses to “How safe is safe?”

  1. nixiebunny says:

    The big problem is that bad consequences of new stuff (in terms of harm to people or the environment) are often delayed by years or decades, so there’s no way to know if a new idea will cause trouble down the road. There are enough examples of drugs gone wrong to fill books.

  2. Jon Welch says:

    The precautionary principle seems to work best as an “approach” that says: if there’s at least some evidence that an action could cause irreversible harm, it’s prudent to wait for stronger evidence of safety before proceeding. 

    Rather than giving advice on how to weigh the benefits and risks of an action, this concept is often used to influence the legal or political process of regulation: that’s why the word “principle” is used.

    A problem with this principle: there are always costs for NOT acting, and the principle does not address those. For example, the article on electronic devices on planes linked above makes the point that continued inaction carries a cost in terms of lost productivity, economic competitiveness, etc. As Rush (the band) says: “If you choose not to decide, you still have made a choice”. ;-)

    • allenmcbride says:

      I think you’ve nailed the problem with the precautionary principle. I’d argue it’s such a basic problem as to render the principle useless. Which side of a decision counts as “action” and which counts as “inaction” can always be reversed. I think we tend to consider whichever side of a decision we think has more uncertainty in outcome and call that the “action” side… but then the precautionary principle becomes “take the course of action with the most predictable consequences”. And maybe some people would agree with that, but I don’t think it’s what most people have in mind when they say they endorse the precautionary principle.
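
      If you formalize that reading, the principle collapses into a minimum-variance rule. A toy sketch in Python, with the outcome distributions invented purely for illustration:

          import statistics

          # Score each side of a decision by the variance of its possible
          # outcomes and pick the most predictable, ignoring expected value.
          OPTIONS = {
              "deploy":  [10, 12, -40, 15, 11],  # mostly good, one severe harm
              "abstain": [0, 1, -1, 0, 2],       # little upside, little downside
          }

          def most_predictable(options):
              return min(options, key=lambda name: statistics.pvariance(options[name]))

          print(most_predictable(OPTIONS))  # -> "abstain", despite its lower mean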

  3. Boundegar says:

    Now where would we be if that first French caveman hadn’t tried the chanterelles?  He probably knew full well what the risks were.  I think he deserves a monument, maybe even a holiday.

    • Mr. Winka says:

      Everyone assumes that the first caveperson to try a new food tried it themself. I think there must have been cases where they either watched animals eat the food or coerced another caveperson to eat it using a spear or other weapon.

      • wysinwyg says:

        Maybe cave people had lots of children, like farmers do, but instead of doing chores the children “tested” potential foods.

  4. HarveyBoing says:

    In the case of electronic devices on airplanes, it seems to me that the argument easily falls on the side of caution. Devices are required to be turned off only during brief periods of the flight (typically the first and last 10-15 minutes, the time the airplane is below 10,000′), and these devices are not critical for use by passengers in any case. On the other hand, the rules hope to protect electronic equipment that’s part of the airplane and that is critical for every passenger, on every flight.

    There have been documented cases where pilots have noted incorrect behavior in flight electronics, behavior that stopped when passengers were reminded to turn off their electronic devices. That doesn’t prove electronic devices were at fault, or even that they pose a real risk to the safety of the flight. But it certainly leaves open that possibility. Without actual proof to the contrary (e.g. a demonstrated alternate explanation for the flight electronics failure), it’s sensible to play it safe.

    The irony here, of course, is that while airlines are falling on the cautious side of the equation when it comes to electronic devices, the exact opposite attitude has been taken by the TSA (and implicitly condoned by the airlines) with respect to x-ray scanners: devices that use a form of radiation known to be harmful, and yet no real studies of their safety or reliability have been produced.

    • lvdata says:

      ALL GSM phones cause interference with the speakers on every audio amplifier I have tried: the pre-call buzz/click. I can understand why the FAA does NOT want them used on a plane. How many signal amplifiers are in a plane? 100-plus phones (some with poor-quality, poorly tuned RF transceivers) in a large metal can is an untestable situation. Laptops and tablets do leak RF. I don’t want to be the test case. It does seem like a reasonable precaution when a failure/crash could be avoided. NOT having a computer for 30 minutes is not a life-or-death situation. A plane with a non-responsive control is. Just shut off the phone for the flight, and the laptops and tablets for takeoff and landing.

    • GawainLavers says:

      I’m certainly struggling to appreciate the terrible harm that has accrued as a result of the FAA’s decision.

  5. Another Kevin says:

    The precautionary principle – when applied blindly – amounts to, “Nothing shall be done for the first time, ever.”

  6. nnu-16121 says:

    Perhaps they could do some research: get some RF emitters that can emit at the frequencies and power levels of both the intentional and unintentional radiation of common personal electronic devices like GSM and CDMA mobile phones. Then systematically move them about the passenger cabin and work out whether the possible emissions from the devices are capable of causing problems with the avionics.
    Or they could just look at the published papers of the UK CAA and a small, obscure US government agency called NASA, who have already done so, and discovered that there really is the potential for a problem, as documented by experiment and data.
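
    A sketch of the test matrix such a survey implies; move_to, set_emitter, and read_avionics_error are hypothetical stand-ins for the actual instrumentation, and the bands and power levels are assumptions:

        import itertools

        # Sweep a calibrated emitter through seat positions and
        # frequency/power combinations, logging any avionics anomalies.
        SEATS = [f"{row}{col}" for row in range(1, 31) for col in "ABCDEF"]
        BANDS_MHZ = [850, 900, 1800, 1900, 2400]   # GSM/CDMA/Wi-Fi bands
        POWERS_DBM = [0, 10, 20, 33]               # up to a 2 W handset burst

        def run_survey(move_to, set_emitter, read_avionics_error):
            anomalies = []
            for seat, mhz, dbm in itertools.product(SEATS, BANDS_MHZ, POWERS_DBM):
                move_to(seat)
                set_emitter(freq_mhz=mhz, power_dbm=dbm)
                if read_avionics_error():
                    anomalies.append((seat, mhz, dbm))
            return anomalies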

    • Todd Knarr says:

       The problem is that you can’t control passengers. The only valid test would be one with one emitter operating at the worst-case emissions for the worst-case device in every seat. You have to do that because if your safe limit is 50 such devices on a 100-seat aircraft you can’t tell passenger #51 that he can’t turn his device on even though the 50 before him are OK using theirs. More importantly, you can’t _stop_ him from turning his on while not stopping the 50 before him because you don’t have enough crew members to babysit every passenger constantly.

      • Wreckrob8 says:

        Why not? Passengers are only cargo with an apparently ever-increasing appetite for masochism to protect their freedom to travel further, faster and more frequently in order to keep the airlines in business. They deserve all they get.
        (Only half joking.)

  7. Boris Bartlog says:

    Evaluating risks and benefits is hard. You don’t really make any headway by creating a simple heuristic, even if it has a nice name like the Precautionary Principle. You could as easily make up things like the Progress Principle or the Extropy Principle or what have you and it would still be more or less rhetorical bullshit. 

    • wysinwyg says:

      Are you on any medications that haven’t proceeded through the FDA-mandated trial process?

      Me neither.  Maybe the precautionary principle ain’t so bad.

  8. Antinous / Moderator says:

    Once upon a time, somebody’s cell phone interfered with some bit of hospital equipment, so everybody had to turn off their cell phones in the hospital.  The problem was solved by the mid-1990s, and equipment manufacturers not only tell clients that it’s safe, they use it as a selling point.  But it’s still common for hospitals to order people to turn off their phones because I SAID SO!

  9. nixiebunny says:

    I read something recently about how the FAA doesn’t want people to operate electronic equipment during takeoff and landing because it distracts them from following the flight attendants’ instructions, rather than because of the RF interference potential.

    You all are correct that it’s possible to quantify the risk from N FCC-approved RF emitters in the aluminum can to the receiving antennas outside the can. Trouble is, it’s non-zero if someone’s using a 1978-vintage Apple ][ computer on board.
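
    If you want to put a number on those N emitters, the standard first cut is to treat the devices as incoherent, so their powers add linearly: a 10*log10(N) dB bump over a single device. A back-of-envelope sketch with the per-device figures assumed, not measured:

        import math

        def aggregate_dbm(per_device_dbm, n_devices):
            """Total power of n incoherent emitters:
            +10*log10(n) dB over a single device."""
            return per_device_dbm + 10 * math.log10(n_devices)

        # 33 dBm is a 2 W GSM burst; -20 dBm stands in for a laptop's
        # unintentional leakage (an assumed placeholder, not an FCC limit).
        print(aggregate_dbm(33, 150))    # ~54.8 dBm if 150 phones burst at once
        print(aggregate_dbm(-20, 150))   # ~1.8 dBm of aggregate leakage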

    • wysinwyg says:

      I’d be more concerned about the diesel generator that person would need to bring with them to power the Apple ][.

  10. JohnQPublic says:

    “Kindles weigh less than six ounces; Walter Isaacson’s biography of Steve Jobs weighs 2.1 pounds in hardcover. I’d rather be hit in the head by an iPad Mini than a 650-page book.”
    This author takes a giant detour from the topic of electronic interference to the more Newtonian issue of unsecured objects flying and making impact with a human face.
    Even in this poor example, he treats weight as the only factor. I think density is more important. I’ve been hit by falling books, and depending on how they fall, the best case is to get hit by fanning pages, which distribute and absorb the impact. The worst case, the corner of a hardback spine, is still not so bad compared to being hit by a dense Gorilla Glass and steel object whose point of impact has absolutely no give or bounce to it.
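
    For what it’s worth, the quoted weights do translate directly into impact energy, even though (as argued above) energy isn’t the whole story. A back-of-envelope sketch; the drop height from an overhead bin is an assumption:

        # Impact energy E = m * g * h for a drop from an overhead bin.
        def impact_energy_joules(mass_kg, drop_m=1.7):
            return mass_kg * 9.81 * drop_m

        print(impact_energy_joules(0.95))   # 2.1 lb hardcover: ~15.8 J
        print(impact_energy_joules(0.17))   # 6 oz Kindle: ~2.8 J
        # The same joules hurt more through a rigid edge than through
        # fanning pages that crumple, which is the density point above.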

  11. viadd says:

    The precautionary principle may be dangerous; therefore I avoid using it.

  12. Anax says:

    The precautionary principle is a flat-out terrible idea. Why? Because it presumes that NOT doing things has no consequence, and we should just wait, and wait, and we will not pay any price for waiting.

    Imagine a world where fossil fuels are depleting rapidly (our world, though shale gas has put this on hold), and we need a new power source ASAP. Fusion power remains somewhat experimental – but it is viable. Even if there are unknowns, it’s still far better to forge ahead with fusion, than risk the world economy grinding to an utter halt because of fuel shortages.

    • wysinwyg says:

      Because it presumes that NOT doing things has no consequence, and we should just wait, and wait, and we will not pay any price for waiting.

      No it doesn’t. It says: “if an action or policy has a suspected risk of causing harm to the public or to the environment, in the absence of scientific consensus that the action or policy is not harmful, the burden of proof that it is not harmful falls on those taking the action.”
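
      Read that way, the principle is a default rule about who carries the burden of evidence, not a recipe for weighing costs and benefits. A toy encoding in Python (the predicate names are mine, purely for illustration):

          def may_proceed(suspected_risk, consensus_of_safety, safety_case_supplied):
              """Toy reading of the precautionary principle as a
              burden-of-proof rule."""
              if not suspected_risk:
                  return True                # the principle doesn't engage at all
              if consensus_of_safety:
                  return True                # the burden is already discharged
              return safety_case_supplied    # otherwise it falls on the actor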

  13. Todd Knarr says:

    Personally, I weigh it based on probability and severity. The problem is that in this case the severity is high: 200 or so people dead (depending on the size of the airliner) if things go wrong the wrong way at the wrong time. And the risks are shouldered by people who didn’t choose them, who weren’t necessarily the ones using the device; it’s generally considered worse if I force you to shoulder the risk of my choice whether you want to or not. Given that combination, I tend to come down on the side of “don’t allow them until the people who want them can show strong evidence that they don’t pose a significant risk.” And not just for one device: since the crew can’t babysit every passenger constantly, the standard should be no significant risk from the maximum possible number of worst-case-emission devices operating simultaneously. Or let the people who want the devices usable come up with a way for the crew to manage which passengers can use which devices when, to keep things within provably safe limits.
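
    That probability-times-severity weighing is just an expected-loss comparison. A minimal sketch in Python, with every number invented, to show where the argument actually lives:

        # Expected-loss framing of "probability times severity". All figures
        # below are assumptions for illustration, not measurements.
        P_CATASTROPHE = 1e-7           # assumed per-flight probability of a crash
        LIVES_AT_RISK = 200            # airliner capacity from the comment above
        VALUE_PER_LIFE = 10_000_000    # a common order-of-magnitude figure, USD

        expected_loss = P_CATASTROPHE * LIVES_AT_RISK * VALUE_PER_LIFE
        inconvenience = 200 * 0.5 * 20  # 200 passengers x 0.5 h x ~$20/h of time

        print(expected_loss, inconvenience)   # 200.0 vs 2000.0 under these guesses

    Under these particular guesses the inconvenience side is actually larger; nudge the assumed probability up a couple of orders of magnitude and it flips. The whole argument lives in that unknown probability, which is exactly why the comment demands evidence first.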

  14. DrDave says:

    Electronic devices and aircraft avionics may have evolved sufficiently since the rules were initially established to justify relaxing them. I remember an article from the mid-90s where a main concern was transistor-radio emissions reaching antennas on the fuselage, with the rows of windows acting as a phased array inducing constructive interference.
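
    For the curious, that window-row concern is a textbook uniform-linear-array calculation. A minimal sketch, with the window count, spacing, and wavelength all assumed for illustration:

        import cmath, math

        def array_factor(n_windows, spacing_m, wavelength_m, theta_deg):
            """Far-field magnitude of n equal in-phase slots along the
            fuselage (uniform linear array approximation)."""
            k = 2 * math.pi / wavelength_m
            psi = k * spacing_m * math.sin(math.radians(theta_deg))
            return abs(sum(cmath.exp(1j * psi * n) for n in range(n_windows)))

        # 20 windows ~0.5 m apart, 1 GHz leakage (wavelength ~0.3 m):
        # broadside the fields add fully coherently, a factor of 20 in
        # amplitude, i.e. +26 dB in power over a single window.
        print(array_factor(20, 0.5, 0.3, 0.0))   # -> 20.0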

    I haven’t seen much field research on cellphones on aircraft. IEEE Spectrum had an article in 2006, but it was reporting on data gathered two years prior. http://spectrum.ieee.org/aerospace/aviation/unsafe-at-any-airspeed

    Personally, I hope the in-flight cellphone ban continues. Unlike in a waiting area, or even on a train, there would be little possibility of escape if the passenger next to me decided to narrate the flight to his Mom. “Oh, it looks like they’re bringing us chicken for dinner.”

  15. daev says:

    The same principle is in play in other industries as well. Telco switching offices still have an official ban on RF- or flash-generating devices in their buildings. Even leaving a cover off a piece of equipment will generate errors in nearby equipment, so leaving doors and covers unsecured, in addition to talking on a cell phone, is a pretty big no-no.

    Cable TV operators must undergo an annual over-flight mandated by the FCC, in which radio leaks produced by faulty components in their network must be found and corrected. You can sometimes hear aircraft radio traffic on channels 17-21 if squirrels have chewed through the shielding in your neighborhood or house wiring.

    Although any particular brand of device may not cause issues, when you’re dealing with the public there’s no telling what kind of device a passenger may have and turn on, and multiple devices just add to the radio noise. A ban simply prevents things like poorly shielded or poorly damped RF devices from interfering with the electronic eyes and ears of the aircraft, which almost always use radio waves to sense other aircraft and equipment on the ground, keeping everything at a safe distance in busy airports or poor visibility.

    I’m gonna have to side with the FAA on this one.

    • greggman says:

      Unfortunately that’s a non-solution. Just because you tell passengers to turn off their devices doesn’t mean they turned them off. Even if you see them turn them off, they often just turn them back on as soon as the flight attendant has left. The attendants also only notice the devices that are out; they don’t notice how many devices are still on in people’s bags or pockets. It’s not even just bad passengers. Sometimes they forget. I know I have: I’ve gotten on the plane, intended to turn my phone off, got distracted reading the airline magazine or talking to my partner, and didn’t notice I’d forgotten until 30 minutes into the flight. (The quick calculation after my list below shows how fast those odds compound.)

      There are only two real solutions off the top of my head:

      1) Some kind of sensor that can detect whether devices are on (seems unlikely)

      2) Making the plane insensitive to these issues (if it isn’t already)
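
      On point 1 versus just asking nicely: even a small per-passenger failure rate makes it near-certain that something is still transmitting. A quick sketch (the rates are assumed):

          # Probability that every device on board is actually off,
          # given a per-passenger chance of leaving one on.
          def p_all_off(n_passengers, p_left_on):
              return (1 - p_left_on) ** n_passengers

          print(p_all_off(200, 0.01))   # ~0.134: an 87% chance something is still on
          print(p_all_off(200, 0.05))   # ~0.00004: near-certainty of a live device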

  16. Ian G says:

    The precautionary principle presents a significant quandary for Bright Green Environmentalists; a philosophical approach that endorses technological and scientific solutions to environmental problems cannot ignore that uncertainty is a built-in hazard of all technological advance. Americans fall easily into Bright Green Environmentalism because it tracks closely with our national outlook of consumerism, frontiersmanship, and pursuit of progress. Other cultures, with more emphasis on tradition, take a dimmer view of unfettered progress and recognize the merits of a more methodical approach to adopting technological solutions. I would argue that the precautionary principle has roots in the connection people have to their land. Take, say, Europe, where traditionally you never moved much due to land-ownership norms and thus had an interest in both respecting your neighbor and having them respect you, since you were both in for the long haul, versus the United States, where the only real concern you have about your fellow citizens is whether or not they can sue you for actions you take. One approach causes you to think about the future ramifications of your actions; the other says “better to ask forgiveness than permission.”

    Philosophical wanderings aside, technologists must acknowledge that failure and unintended consequences are not just an option, they are likely outcomes. Proper adoption of technological solutions must seriously factor in the negatives before moving forward, depending on the criticality of the system affected; switching an incandescent bulb to an LED in your own home to see how the light affects your mood is one thing, attempting so-called geo-engineering to alter the global climate is another entirely. The precautionary principle acts as a scientific due-diligence check on your work, not an obstacle. It is simply asking “Before I jump off this building with my homemade wings, am I really confident in my design and workmanship?” Understanding the risk with clear eyes is part of proper decision-making.

  17. Vince Brown says:

    The precautionary principle is about as anti-science as one can get this side of magical thinking.  It effectively demands that one prove a negative.  It makes a joke of risk assessment.  The motto of the SAS is “He who dares, wins,” not “He who makes sure he’ll never stub his toe, wins.”

    • Todd Knarr says:

       Two counter-points:

      1. The SAS is universally made up of people who are *trained* for that kind of job. They’re uniquely prepared to deal with the risks that go along with that motto.

      2. Even the SAS don’t take *stupid* risks. They don’t go on a mission without getting every bit of intelligence they can on the target and the territory they’ll have to cross getting in and out. Then they plan the operation out, work out all the things that could go wrong and develop contingency plans for dealing with them. Suggest they go in without that groundwork and they’ll probably tell you to… well, do something physiologically improbable and extremely uncomfortable should you manage to do it.

    • wysinwyg says:

      The precautionary principle only demands that one “prove a negative” in the reductio ad absurdum version.  In the real world the precautionary principle just means performing due diligence.  If a policy seems to have a significant probability of doing harm, it makes sense to assess the actual likelihood and severity of that harm.  If the cost determined by this process is less than the gains of implementing the policy, then chances are the policy will be implemented.

      The Black Swan by Nassim Nicholas Taleb makes a great argument for why the precautionary principle makes sense as a correction to the usual tools of risk assessment.  A turkey doing a statistical study on turkey mortality rates the Tuesday before Thanksgiving would probably end up being overconfident about his life expectancy.
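
      Taleb’s turkey is easy to make concrete. A tiny sketch of the overconfidence, with illustrative numbers:

          # 999 days of observations, zero deaths observed, so the naive
          # frequency estimate of tomorrow's mortality risk is 0/999 = 0.
          observed_days, observed_deaths = 999, 0
          naive_daily_risk = observed_deaths / observed_days
          print(naive_daily_risk)   # 0.0 -> "I'll live forever"
          # Day 1000 is Thanksgiving: the true risk sits in a tail event the
          # sample could never contain, hence the precautionary correction.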

  18. Another significant problem with the precautionary principle stems from the following three facts:

    1) Technology is continually evolving, but, as with organisms, lineage is important. All new technologies build on the foundation of previous technologies.

    2) This evolution does not happen in isolation — innovations in one branch of this lineage can be applied to others.

    3) This evolution moves at a pace dependent on the number and variety of implementations of a technology. Restricting the application of a technology reduces the number and variety of its implementations.

    Because of this, putting restrictions on a technology early in its growth can result in downstream “pruning” of the potential evolutionary branches of that technology. This in itself wouldn’t be a bad thing if all of the pruned branches really were dangerous, but that seems impossible to predict. And, in fact, it’s probably more likely that technologies get safer over time, unless there is some vested interest in making them more dangerous (weapons) or an interest in another dimension that overrides safety (land-speed-record cars, or hypersonic planes).

    For example, take NiCad batteries.

    Cadmium is really toxic. A heavy metal, a cumulative toxin, and a carcinogen: nasty stuff. But much of this was unknown in 1899 when NiCad cells were invented. So you could argue that if these batteries had been invented later, the precautionary principle might have been invoked to restrict their use. And, in fact, we see significant modern restrictions on the use and disposal of this type of battery (particularly in the EU).

    But NiCad cells were essential for the portablization of electronic devices. Alkaline batteries were too expensive to realistically support high-powered devices, particularly early portable phones and laptop computers (which ran on NiCads before NiMHs, lithium-ions, or LiPos).

    And it was the early proliferation of portable electronics which proved the value of the market, driving demand for more devices with longer battery life, and ultimately, the development of new battery technologies to fill these needs.

    And, interestingly, these new batteries (NiMH and lithium-based cells) are dramatically less toxic than cadmium-based batteries. Not that there was necessarily an interest in lower toxicity (though the EU ban probably added some pressure): these chemistries happen to be both safer and higher-power.

    In this example, I would suggest that significant early regulation of NiCads (in the early 1900s, when it would have made the most difference from an environmental-toxicity perspective) would have significantly altered the development of rechargeable battery technology, and the market for consumer electronics which demanded those batteries. This would probably have resulted in slower growth of portable consumer electronics. Maybe it would have completely pruned the portable-phone region of the tech tree until alternative technologies like fuel cells could be made to drive these devices. In such an alternate universe, we might just now be seeing the emergence of cellphones.

    This is a pretty optimistic example: we didn’t exercise the precautionary principle, and things worked out well, so we should live it up, right!? Well, not so fast.

    For another quick example, look at nuclear power.

    Identified as dangerous early and regulated heavily from the get-go, the nuclear branch of the tech tree was pruned until only very large, very expensive reactors were feasible (because only governments or massive government contractors were able to navigate the overburden of regulation to build one).

    As the saying goes, the bigger they come, the harder they fall, and this is definitely true in large power plants of any stripe — failure is going to result in release of massive amounts of energy, usually as fire, plumes, explosions, and in the case of nuclear plants, radioactive stuff in all of the above.

    This alone wouldn’t be a problem if all nuclear plants were kept in good shape, but the second part (they have to be expensive) means that plants are operated for as long as possible: usually beyond the lifetime they were designed for, or at least beyond the time when they have strong community support.

    In this case, the precautionary principle seems to have made nuclear more dangerous, or at least limited the possibilities of this technology to large, expensive installations which are more likely to become dangerous over time.

    That’s not to say that there aren’t super-safe nuclear options out there — but remember rule 3 — technology only evolves as fast as the number of instances riffing on it. Regulation for safety limits the number of applications of nuclear, and the variety of those applications, so that we are today looking at the failure of reactors with technology from the 50s and 70s. Imagine if personal electronics evolved that slowly — we would have given up the search long ago.

    At the same time, I don’t think that zero regulation of nuclear is a good idea. Sure, because we didn’t regulate cadmium, there are lots of creeks and fields where people dumped batteries that are now heavy-metal toxic sites. But those only hurt you if you eat them. Lots of small nuclear reactors would probably mean lots of small leaks of low-level radiation, which is bad for the ol’ sperm count of anyone who just sits next to one.

    Long story short, the precautionary principle is a useful tool in a toolbox for technology policy making — but it’s far from an answer. And, like other binary methods for making decisions about emergent properties of complex systems, the places where it succeeds are as unpredictable as the ones where it fails.
