Futures Impossible: a new methodology to study world events


46 Responses to “Futures Impossible: a new methodology to study world events”

  1. Thomas Zaraat says:

    It’ll take quite a bit more evidence before I accept a statement like “remote classified facilities can be detected, visited, and accurately described by mental powers alone” as anything but sheer woo.

    I’ll try to remain open-minded about whether or not I’m actually just closed-minded.

    • MeOnBoingBoing says:

      *That* (remote viewing) was the point when I mentally turned off the whole premise (if there is a premise, hard to tell).

  2. daen says:

    the fact that remote classified facilities can be detected, visited, and accurately described by mental powers alone remains beyond accepted concepts

    Beg pardon? “Fact”? No.

  3. John Ohno says:

    People in the comment thread dismissing remote viewing are making the same mistake the OP was pointing out. It doesn’t matter whether or not RV is possible: it matters whether or not we consider and prepare for events that currently seem impossible. RV is an excellent example, since it’s something that appears to be impossible, but that, if possible, would have the capacity to totally fuck with all of our existing intelligence systems. Even if RV continues to be regarded as impossible indefinitely, it is entirely useful to contemplate what kinds of mechanisms would be necessary to circumvent some arbitrary but well-defined model of RV, both as practice in planning for the fallout of other impossible things and in case something comes along that has many of the strengths and weaknesses of the fictional conception of RV.

    • Thomas Zaraat says:

      Not accepting something as a proven fact is not the same thing as categorically dismissing the concept. Nevertheless, I don’t plan to spend much time devising a counter-response to the possibility that unicorns with tentacles could manipulate the price of gold from Alpha Centauri.

    • daen says:

      No. It’s not useful at all to spend time and effort on planning countermeasures for something which is, and will remain, fictional. Planning for category D futures is, anyway, a massive waste of time because (by definition) they are unimaginable. The example given (remote viewing) is purely fictional, not an unimaginable future. The fact that it can be written down makes it imaginable (by definition), but it doesn’t make it plausible.

      There’s a huge confusion in the above article between genuine black swan events and bad risk management. Fukushima wasn’t a black swan: it was a disaster waiting to happen. Frankly, treating it as a black swan is actually to let the Japanese nuclear regulatory organizations, the Japanese government, TEPCO and GE too easily off the hook. If Fukushima had been built in a region where earthquakes and tsunamis had never been known, *then* it would have been a black swan event. The fact is it was badly designed, badly sited and badly run. And several people had tried to point this out over time, so it’s not as if it was exactly a surprise. Given this, there was no mystery as to what happened next when nature did what nature does from time to time in Japan.

      • Jim Saul says:

        “True” black swan events are not unpredictable, just unpredicted by the subject cohort, according to the Taleb book. You are still right that Fukushima doesn’t even fall into that more open definition, though.

        I propose there should be a new name for high-impact, low-frequency risks that are purposefully ignored and discredited by those with an economic incentive to ignore the externality of said risk.

        Periwinkle Penguins? Magenta Magpies? ;-)

        • daen says:

          According to Taleb:

          What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

          I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.

          He absolutely emphasizes the lack of prospective predictability here.

    • Peter Ellis says:

      No, remote viewing falls under category C.  Proven impossible rather than simply not conceived of.

      • Thomas Zaraat says:

        I’d probably use the term “extremely unlikely” instead of “proven impossible” with regards to remote viewing.

        For all I know there might be a way to remotely spy on people via quantum entanglement or from a topologically higher dimension.

        I can’t think of a way to flat out disprove either of those possibilities, but I’m not a scientist.

        I wouldn’t hold my breath waiting for someone to actually demonstrate these phenomena, either.

    • ben lidgus says:

      Well put, Ohno. It doesn’t matter if any of these “impossible” events come to pass or not. I see the author recognizing the difficult state of future planning and trying to build a new foundation. Hopefully future future thinkers will take this or similar premises, evaluate their validity and either assimilate or discard them. In this way the author has done a great service.

    • ben lidgus says:

      Well put, John Ohno. It doesn’t matter whether any of the “impossible” things mentioned in the article come to pass. The reality is that we as a species are encountering many events that we thought at one time were impossible. It can’t be bad to begin to develop a way of talking about things we can’t predict. Science can really move forward when someone lays out the groundwork to describe a new set of ideas. Future future thinkers can assimilate and then augment or discard these ideas.

      About Fukushima not being a black swan; can we really say it was poor planning and bad management? It seems to me to be a case of experts with incredible resources doing the best they can to eliminate or mitigate all threats and then being surprised by something “impossible” that they couldn’t plan for. If Fukushima isn’t a black swan, how about 9/11? Global warming? The Titanic? The breaking of the curse of the Bambino by Boston in 2004? In retrospect, the evidence was there for all of these, but something about our worldview/evidence gathering/smarts as a species wasn’t up to snuff and they seemed impossible. I look forward to seeing where others take this line of reasoning.

      P.S. In reference to another comment in this thread, has RV been “proven” to be impossible? I don’t think “Person A remote viewed Situation B” is a disprovable statement.

      • Daen de Leon says:

        Once again, Fukushima was an avoidable disaster.  From the Wikipedia page on regulatory capture, Japanese examples [http://en.wikipedia.org/wiki/Regulatory_capture#Japanese_examples]:

        Despite warnings about its safety, Japanese regulators from the Nuclear and Industrial Safety Agency approved a 10-year extension for the oldest of the six reactors at Fukushima Daiichi just one month before a 9.0 magnitude earthquake and subsequent tsunami damaged reactors and caused a meltdown.

        What makes Fukushima *not* a black swan (and, conversely, an avoidable disaster due to human greed and incompetence) is the fact that some people *did* prospectively predict that there was a problem.  That ineffective (and possibly corrupt) regulatory agencies did nothing to heed that advice does not take away the fact that *some people* identified problems with the site, thus breaking the third requirement for a black swan – because if their advice had been heeded in time, the accident would not have happened (and it was always going to be the case that there was going to be a large earthquake and tsunami).

        And from the Wikipedia entry on remote viewing [http://en.wikipedia.org/wiki/Remote_viewing]:

        For the largest paranormal research institution, the James Randi Educational Foundation, out of all of the applicants who applied for the One Million Dollar Paranormal Challenge, nobody has even passed the preliminary tests.

        Sounds like a non-starter to me.

    • Duncan McPherson says:

      Good point, John! If someone were to come up with a technology that effectively delivered the benefits of remote viewing (without the woo), then any scenarios that accounted for RV would have direct applicability.

      Seems too far fetched? This “ancient” (2008) web article points at delivering images directly from the brain to a computer screen, something that was considered science fiction (bad science fiction, at that) in the not-too-distant 1980s: http://www.physorg.com/news148193433.html

      Okay, okay… you want something practical. In the 1990s, say, an invisibility cloak would have seemed like a silly thing to consider when designing future strategies. Fast forward to 2010, and we’re tantalizingly close to having a practical invisibility cloak (http://abcnews.go.com/GMA/Weekend/scientists-step-closer-real-invisibility-cloak/story?id=10155543)… all science, no woo. The futurist’s contingency for this sort of emergent tech would certainly come in handy, now wouldn’t it?

      The point is, developing contingencies for things that seem to be in the realm of fantasy _now_ is exactly part of the futurist’s job. We, as a species, have an interesting tendency to make fantastic things _real_ in one way or another… so long as we don’t get too comfortable with our assumptions.

  4. novium says:

    *cues Fringe title sequence*

    How wonderful.

  5. Jim Saul says:

    I think the example chosen for D is just fine… it is literally impossible to give an actual example of something unimaginable, obviously, so the closest one can come to it is to take a known example that would evoke exactly the “that’s impossible” reaction we see in these comments.

  6. atimoshenko says:

    The problem is not that we see too few impossible futures, but that we see too many. Should we be evaluating the impact of a catastrophic meltdown at Chooz due to unresolved thermal fatigue issues? A transport ship bombing that renders the Panama Canal beyond repair? The invention of an efficient, light, and compact electricity storage technology? And this is just off the top of my head in 30 seconds…

    We have stuck with simple extrapolation for so long not for a lack of imagination, but because extrapolation only offers a very narrow range of events for which to prepare. Granted, this is a bit like looking for your wallet under a street lamp (rather than where you lost it) because there is more light there, but take extrapolation away and we would be overwhelmed by the number of possibilities for which we can prepare.

    Question is, what should we do?

  7. BillGlover says:

    The author may have a valid point here, but I’m not able to evaluate it properly for two reasons, one superficial and one which may be a deeper misunderstanding of the premise (on my part).

    I do think remote viewing falls more reasonably into C than D and the inclusion in D seems odd enough to distract me and even make me question the rest of the thinking that went into the idea.

    My second issue is more with the idea itself as I understand it.

    Imagine two intelligent actors in an environment with considerable risk and limited resources. The author seems to be suggesting that an actor who weighs events only by their likelihood and impact when allocating resources in preparation will fare worse than an actor who adds an arbitrary, increased weight to extremely unlikely occurrences and spends resources preparing for those. That seems fundamentally flawed and, in fact, sounds like exactly the approach that led to most of the outcomes described in the same article.
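
    To make that concrete, here’s a toy sketch in Python of the two actors under a simple expected-loss model. All the numbers, the scenarios, and the mitigation model are invented for illustration; nothing here comes from the article:

        # Two allocation strategies over the same scenario set (values invented):
        # one weights by probability * impact, the other adds a flat bonus
        # weight to very unlikely scenarios.
        scenarios = [
            # (probability, impact, mitigation effectiveness)
            (0.200, 10.0, 0.8),   # likely, moderate impact (e.g. flooding)
            (0.050, 50.0, 0.8),   # unlikely, high impact (e.g. reactor failure)
            (0.001, 500.0, 0.8),  # "impossible", extreme impact
        ]

        def allocate(weights, budget=1.0):
            # Split the budget in proportion to each scenario's weight.
            total = sum(weights)
            return [budget * w / total for w in weights]

        def expected_loss(allocations):
            # Spending a share s on a scenario mitigates up to eff of its loss,
            # saturating once s reaches half the budget (an arbitrary cap).
            loss = 0.0
            for (p, impact, eff), s in zip(scenarios, allocations):
                loss += p * impact * (1 - eff * min(s / 0.5, 1.0))
            return loss

        ev_weights   = [p * i for p, i, _ in scenarios]     # likelihood x impact
        tail_weights = [w + (0.5 if p < 0.01 else 0.0)      # arbitrary tail bonus
                        for (p, _, _), w in zip(scenarios, ev_weights)]

        print("EV-weighted expected loss:  ", expected_loss(allocate(ev_weights)))
        print("Tail-boosted expected loss: ", expected_loss(allocate(tail_weights)))

    Under these invented numbers the expected-value actor comes out ahead; the tail-booster only wins if the “impossible” scenario’s true probability has been badly underestimated, which is exactly the disagreement here.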

    It was no great stretch of the imagination (or probability) to assume that mismanagement and unscrupulous activity would follow financial deregulation. In fact that falls squarely in Jim Saul’s colorful-bird category (I suggest brown duck).

    The same for the reactor and the same for the technological weaknesses of the Iraqi army. None of those were particularly unlikely. No one predicted the coalition would be technologically inferior to Hussein’s forces. The arguments were that they were numerous and had quite a bit of combat experience. It turned out that didn’t count for as much as good logistics and technology in that particular conflict.

    An example of applying the author’s suggestions might be seen in the “War on Terror” where a vast percentage of the resources of the wealthiest civilization in history (I’m talking worldwide, but the US led this charge) have been spent trying to fight the threat of lunatics setting their shoes or underwear on fire, and trying to stop teenagers from ingesting things that might make them giggly and potentially subversive. All this while underfunding disaster preparations for likely occurrences like floods, tornadoes, hurricanes, fires, drought, famine and disease outbreaks.

    Now that I think it through, this alternative, the cousin to the brown duck, should be called a wild goose.

    • JakeHamilton says:

      The problem with your allocation-of-resources scenario is that you’re not taking into account a) spillover benefits or b) declining confidence in estimates of probability. The whole thrust, as I understood it, is that the confidence in weights derived from parsimonious inductive reasoning about future states of affairs or knowledge has declined precipitously with the increasing complexity, volume, and rate of change in available information in the contemporary era. If remote viewing made this all seem a bit woo-woo, consider how similar this is to Popper’s critique of historicism (and Popper was certainly a sober and skeptical thinker).

      The point is that this entails an allocation of analytical resources more broadly distributed across the probability spectrum (which includes remote viewing and reincarnation, unless we granted current scientific models sacred doctrinal status while I wasn’t looking), even more so because of the spillover and reinforcement benefits of training to immediately, effectively assimilate and respond to major, unforeseen disruptive events. The ‘call to theorize’ being put forth is a statement to the effect that we need statistical and epistemological methods that are more robust and useful outside the center peak of high-comprehension, high-predictability outcomes.

      Rough, but maybe useful analogies I’d draw would be to paraconsistent logics and the thermodynamics of systems far from equilibrium.

      • BillGlover says:

        I like your argument, and if we restrict the models to only influencing where we place our bets for outliers, I think I can agree it’s a reasonable and useful effort. So spending some discretionary resources on thought experiments on countermeasures against remote viewing or security leaks by the reincarnated might be worthwhile, even if only for, as you mentioned, the spillover benefits. I’ll also grant that we can be sure our scientific models will change, sometimes in unexpected directions, but we will still be right, more often than not, by presuming that unlikely things are unlikely.

        If we imagine a new model which takes into account how to allocate these outlier bets, haven’t we just moved the boundary of what we think is likely? We are simply saying that based on a more sophisticated model, those outliers are more likely than the much larger set of outliers upon which we choose not to bet. Aren’t we?

        I’m not a philosopher or mathematician, so perhaps I’m just putting too much stock in the analogies without understanding the complexities of the underlying ideas.

        • JakeHamilton says:

          That’s a good point; a method like this is definitely concerned with extending and refining the map of outlier events/observations that we can effectively analyze for their likelihood, implications, optimal response, and so forth. Further than that, though, it’s also being proposed that this isn’t just a technique for producing marginal gains in the efficiency of risk allocation; rather, it’s an urgently necessary reaction to an inflationary burst in the growth and complexity of new data, novel phenomena, and changing models. Under those circumstances, the probability distribution across outcome-space is held with less and less confidence, because we have access to a diminishing portion of the relevant data, which we analyze with models that are increasingly likely to be inaccurate or wholly irrelevant at an arbitrary point in the future.

          So that means that we should expect (and I would argue are certainly experiencing) a growing incidence of events that would have been rationally rated low-probability, or not even considered, a given amount of time previously. The problem is that as our probability distribution flattens and spreads out, our overall allocation efficiency suffers, because our analytical tools are best at the center, where our models are reliable and our information complete. We don’t have the requisite methods or concepts available to usefully evaluate, compare, and map many heterogeneous unlikely events.
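
          A toy illustration of that flattening, with invented numbers: as the distribution over outcomes spreads out, its entropy rises and the share of outcome-space covered by center-focused analysis shrinks.

              import math

              def entropy_bits(dist):
                  # Shannon entropy of a discrete distribution, in bits.
                  return -sum(p * math.log2(p) for p in dist if p > 0)

              peaked = [0.70, 0.15, 0.10, 0.04, 0.01]  # reliable models, complete information
              flat   = [0.30, 0.25, 0.20, 0.15, 0.10]  # unreliable models, partial information

              for name, dist in (("peaked", peaked), ("flat", flat)):
                  # "Center-only" analysis covers just the single most likely outcome.
                  print(f"{name}: entropy = {entropy_bits(dist):.2f} bits, "
                        f"center-only coverage = {max(dist):.0%}")

          The peaked case gives about 1.4 bits and 70% coverage; the flat case about 2.2 bits and 30%, i.e. most of the probability mass now sits where our center-focused tools are weakest.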

          It sounds somewhere between dry and alarmist, but I really find it pretty inspiring. In global practice and in theory, we’re pushing the horizons into a faster, unlikelier world, where our understanding of the possible and the real transforms to fit a larger, more uncertain and chaotic domain.

  8. Adam Cahan says:

    What daen said about the article getting Black Swans wrong.

    And is something like Madoff a black swan? It was, in fact, a conspiracy. Are those black swans? From what I’ve read, the data about Madoff was in the open – enough of it for at least some to attempt to ring an alarm bell.

  9. Kevin P. Sullivan says:

    There is an excellent lecture by Andrew Lo, and an associated paper he wrote with physicist Mark T. Mueller, that discusses this (among other things) from an economic point of view. Their “Taxonomy of Uncertainty” is the best example I have seen of trying to add structure to the risk and uncertainty of the world. You can view the lecture at http://mitworld.mit.edu/video/794/. I think it should be required watching and/or reading for anyone in the finance industry. It would add to the work mentioned above.

  10. KanedaJones says:

    The entire history of the American Intelligence community is designed to evade the possibility of Remote Viewing.

    Dosing of mental patients, staring at goats, prayer — I’d say they’ve planned for that quite well.

    –GimpWii

  11. JakeHamilton says:

    My main objection is to the sort of muddled use of probability and possibility. The inductive probability of particular outcomes, given a heretofore-reliable model of the underlying system or trends, is an at least partially different dimension of variation than the ‘deductive’ probability of outcomes that would force some degree of ad-hoc alteration to models that cannot accommodate their occurrence. Generally, it seems to me that the first step to developing a proper methodology of this kind would consist of tossing out the archaic assumption that single general definitions of broad concepts like ‘possibility’ and ‘probability’ are adequate to capture the analytical complexity of their fields.
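
    A minimal sketch of that two-dimensional picture, with all priors and likelihoods invented: the ‘inductive’ probability of an outcome under models we already trust, held separately from mass reserved for outcomes that would force model revision.

        # Invented numbers: a mixture of trusted models, plus a reserve for
        # outcomes none of the current models can accommodate.
        model_priors = {"model_A": 0.6, "model_B": 0.3}  # trust in current models
        unmodeled_reserve = 0.1                          # mass for model-breaking outcomes

        # P(outcome | model) for an outcome each trusted model can accommodate
        likelihoods = {"model_A": 0.02, "model_B": 0.10}

        inductive_p = sum(prior * likelihoods[m] for m, prior in model_priors.items())
        print(f"'Inductive' P(outcome) under trusted models: {inductive_p:.3f}")
        print(f"Reserve for model-breaking outcomes:         {unmodeled_reserve:.3f}")

    Refining the trusted models moves the first number; the pace of genuinely novel phenomena moves the second. Keeping them separate is one way to avoid the muddle.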

  12. Unfortunately, this article seems to be perpetuating a common but fundamental misunderstanding regarding foresight: that foresight is the same as prediction, or should be. A failure to predict a specific event is not a failure of foresight; only a failure to build resiliency that can cope with specific events is a failure of foresight. Prediction was never really possible, and in recent times it’s become more obvious that it isn’t. Some foresight practitioners have over the years dreamed of improving our ability to predict events, but as soon as one leaves the realm of trends and cyclic processes, it quickly becomes impossible. What foresight practitioners do is attempt to increase the resiliency of organizations like governments, meaning that when the inevitably unpredictable happens, we’re not left unable to cope. There are many different methods and techniques for achieving this, but they all share the common understanding that the game is not about prediction. What this means is that building for resilience actually *is* the “deeper grid” that the authors talk about – and the only possible one in a world of radical unpredictability.

    Anything else is snake oil.

  13. smaturin says:

    And just how many angels WILL fit on the head of a pin?

  14. My turn to be a pedant.

    Methodology = The Study of Method.

    A more accurate title would be “A new method for studying world events”. (Not that it actually matters that much.)  :-)

  15. I prefer this one

    Methodology: A pretentious way of saying “method”.

    http://encyclopedia2.thefreedictionary.com/Research+methodology

    The irony isn’t lost on me…

  16. PlaneShaper says:

    No one is going to be as successful at developing new methods (or methodologies, or w/e) for predicting, detecting, and mitigating risks as the leader who is willing to spend on risk management at least twice the resources that the person in their organization with the greatest interest in short-term profits and corner-cutting wants to allow for classical techniques.

    It’s not that people are willy-nilly disregarding potential scenarios as impossible, it’s that they’re willing to disregard as many scenarios as they think they can get away with.  Come up with a technique that’s twice as good at accurately predicting and mitigating risk for the same cost, and they’ll simply spend half as much money on it…at best.  And that’s my prediction.

    Wow… my major problem with this article has very little to do with considering the “impossible,” or creating new methods for more complex environments. Rather, it’s that the lead-in makes a slew of inaccurate statements about the work of futurist tools, such as “Such techniques for describing the future and anticipating its opportunities and dangers have largely become obsolete because of the acceleration of technology itself and the increasing vulnerability of our society to chaotic processes that are not well behaved under most classic models.” Weren’t the examples given in the article wild cards or black swans in anyone’s scenario? I can’t begin to tell you how many times I have heard scenarios in the past about building too close to barrier lands (close to water – New Orleans anyone, not to mention a nuclear power plant), how robotics and radar/ultrasound would be used to detect enemy fortresses, or how a real estate and corporate meltdown was imminent not long before it occurred. Granted, maybe those in charge of implementing these ideas did not listen, but to say that the methods no longer work? In reality, it’s in the hands of the futurist(s) producing the scenarios as to how immersive and thought-provoking the pattern- and sense-making is that gets created by these tools.

    Don’t get me wrong, I’m all for better and more innovative methods, and I look forward to this one (that is the point – a new method is on the way – right?). But if we want to look deeper into social, organizational, and global complexities, you failed to mention incredible tools for this kind of “overlay”, such as Causal Layered Analysis, VERGE Ethnographic Scanning, Panarchy mapping (large-scale adaptive-system mapping, which points out that different layers of social, technological, and political life do not move at one large complex speed), or Spiral Dynamics, just to name a few (the last of which has been used widely to point toward reconciliation in Palestine, no less).

    I truly love the work of IFTF, but some very broad strokes were painted here.

  18. theo says:

    If you RTFP [http://www.urbandictionary.com/define.php?term=RTFP], you will see but a single link to here: http://iftf.org/

    At the bottom of the home page for the site – which calls itself the “Institute for the Future” – you will note that the copyright notice is dated 2009.

    Thus, some of you may be making hysterical remarks about the historical or vice versa…

  19. digi_owl says:

    I think the D example of remote viewing is an attempt at putting the improved airborne and orbital sensory systems into context. Remember, recent sats have found cities in the Amazon and rivers in the Sahara, and may well be able to read the layout — and perhaps content — of a building if the roof is not properly shielded.

  20. Adam Smith says:

    Apologies, but I don’t see a method or methodology here. Correct me if I’m wrong, but this is just 4 categories to place impossible futures in. In the article you call it a typology, which I think is more accurate, or maybe a taxonomy. If there is actually a method, would someone point it out to me? I’d be very interested.

  21. Mike Brotzman says:

    I believe someone once said that if you eliminate what is impossible, whatever remains, no matter how improbable, must be the solution.

  22. Jody Radzik says:

    Folks have so much investment in keeping possibility tied to their personal world view. While I’m more skeptical than your average woo-woo bunny, I’ve been close enough to the work of Jacques, Dr. Ed May, Dr. Jeff Kripal, and others to have ascertained that at the very least, weird shit happens all the time that just won’t get stuffed into these small boxes some folks carry.

  23. So, will you be writing a series of plays to be discussed as versions of the same episode, or a slightly more meta alien invasion videogame? I don’t get it. Is this newly fungible?

  24. Interesting approach, and one worth further study. The value of developing scenarios for possible as well as impossible events is to build intellectual muscle. In fact, the more “impossible” an event, the better it serves as a builder of these forecasting skills. The process of considering the impossible serves as a way to avoid “breathing our own fumes”, i.e., becoming so self-satisfied that we stop accepting new and potentially upsetting information. From my own experience, when I dismiss ideas out of hand, ridicule them and refuse to even discuss them, I realize that I am very far from scientific or rational. In fact, when I behave this way I have learned, as an antidote, to ask the question, “What am I afraid of, and why?”

  25. Daniel says:

    The RV bit threw me for a loop too, but I came around after reading some of the comments, especially since I had just been looking at this:

    http://en.wikipedia.org/wiki/Thing_%28listening_device%29

    A passive/driven listening device invented by Leon Theremin, Soviet inventor of the surf-rock/’50s B-movie musical instrument (the theremin). Not quite remote viewing, but with the right device it’s remote listening, and I doubt many people were pointing to Leon Theremin as an inventor of spy apparata.

    So in other words maybe remote viewing isn’t a realistic concern, but going by Clarke’s law there are probably many possible technologies that look a whole lot like remote viewing.  In this view, it’s worth asking what it would take to defend against RV as an indication of what it would take to defend against these other technologies.
