Forecasting versus the stubbornly arbitrary world

In a fascinating, long, thinky piece, economist Tim Harford looks at the history of business and political forecasting, trying to understand why Keynes and his rival Irving Fisher both failed to forecast the Great Depression and were wiped out (and why Keynes managed to bounce back and die a millionaire, while Fisher died in poverty).

Along the way, Harford looks at the contemporary forecasting industry, whose most promising members are a trio of psychologists working for the "US intelligence community" on something called the Good Judgment Project, which solicits predictions from some 20,000 varied experts after first giving them a short course on how to improve the accuracy of their guesses.

A first insight is that even brief training works: a 20-minute course about how to put a probability on a forecast, correcting for well-known biases, provides lasting improvements to performance. This might seem extraordinary – and the benefits were surprisingly large – but even experienced geopolitical seers tend to have expertise in a subject, such as Europe's economies or Chinese foreign policy, rather than training in the task of forecasting itself.

A second insight is that teamwork helps. When the project assembled the most successful forecasters into teams who were able to discuss and argue, they produced better predictions.

But ultimately one might expect the usual finding: that forecasting events is basically impossible. Wrong. To connoisseurs of the frailties of futurology, the results of the Good Judgment Project are quite astonishing. Forecasting is possible, and some people – call them "superforecasters" – can predict geopolitical events with an accuracy far outstripping chance. The superforecasters have been able to sustain and even improve their performance.

And here, in a nutshell, is the heart of that 20-minute training (with a rough sketch of how the rules might fit together after the list):

● Comparisons are important: use relevant comparisons as a starting point;

● Historical trends can help: look at history unless you have a strong reason to expect change;

● Average opinions: experts disagree, so find out what they think and pick a midpoint;

● Mathematical models: when model-based predictions are available, you should take them into account;

● Predictable biases: these exist and can be allowed for. Don't let your hopes influence your forecasts, for example; don't stubbornly cling to old forecasts in the face of news.

How to see into the future [Tim Harford/FT]

(Thanks, Tim!)