
Predicting – does it pay to develop sophisticated methods?

Published on 05 May 2012
Updated on 05 April 2024

Prediction is dicey – I’ve argued this many times. Does close study and sophisticated methodology pay off, compared with “back of the envelope” predicting?

Duncan J. WATTS reports[1] on studies he has carried out, comparing various kinds of predictive models, and testing them against basic heuristics – rules of thumb.

Take the outcomes of NFL football games. The “basic” predictors were that (a) the home team statistically has some (measurable) edge; (b) recent win-loss records provide information about the overall “health” of a team. These two predictors were tested against two “betting markets” and two “opinion polls”. What was the result?

Here are the conclusions:

“given how different these methods were, what we found was surprising: all of them performed about the same.” The best-performing method was only about 3 percentage points more accurate than the worst-performing method. Then they moved to baseball: “the outcomes of baseball games are even closer to random events than football games”.

So they conclude: “there are strict limits to how accurately we can predict what will happen. (…) it seems that one can get pretty close to the limit of what is possible with relatively simple methods. (…) Predictions about complex systems are highly subject to the law of diminishing returns: the first pieces of information help a lot, but very quickly you exhaust whatever potential for improvement exists.”
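To make concrete how little machinery the “relatively simple methods” need, here is a minimal sketch in Python of the two heuristics described above – the home-team edge and the recent win-loss record. The game records, field names and the tie-breaking rule are my own invention for illustration; this is not Watts’s actual setup or data.

```python
# A minimal sketch (not Watts's actual models) of the two simple heuristics:
# (a) always pick the home team; (b) pick the team with the better recent record.
# The game records below are hypothetical and exist only to show the bookkeeping.

def predict_home_team(game):
    """Heuristic (a): the home team has a small statistical edge, so always pick it."""
    return game["home"]

def predict_recent_record(game):
    """Heuristic (b): pick whichever team has more wins in its recent games."""
    if game["home_recent_wins"] >= game["away_recent_wins"]:
        return game["home"]  # ties default to the home edge
    return game["away"]

def accuracy(predictor, games):
    """Fraction of games in which the heuristic picked the actual winner."""
    hits = sum(1 for g in games if predictor(g) == g["winner"])
    return hits / len(games)

games = [
    {"home": "A", "away": "B", "home_recent_wins": 3, "away_recent_wins": 1, "winner": "A"},
    {"home": "C", "away": "D", "home_recent_wins": 1, "away_recent_wins": 4, "winner": "D"},
    {"home": "E", "away": "F", "home_recent_wins": 2, "away_recent_wins": 2, "winner": "F"},
]

for name, fn in [("home-team edge", predict_home_team), ("recent record", predict_recent_record)]:
    print(f"{name}: {accuracy(fn, games):.0%} correct")
```

That is the whole apparatus: a rule of thumb and a tally of hits. Watts’s point is that the far more elaborate alternatives only beat this sort of thing by a few percentage points.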

In hindsight, this does not surprise me. The human brain is a rather imperfect machine for observing and predicting natural events. Yet we have prospered by drawing simple lessons from experience. Nothing sophisticated, just “good enough” to give us an evolutionary advantage.

Policy used to be made “on the back of an envelope” by people who had a lot of practical experience. It was messy; it included a lot of “silent” knowledge – hunches, intuitions, lore. They were replaced by “modelers” who had a lot of experience in statistical methods.

I shall pass lightly over the fact that mathematical manipulations necessarily destroy information as they elaborate it[2].

The fact remains that “making models more realistic” tends to mean disaggregating a limited number of basic variables, in an attempt to deepen understanding. Refining that limited set of independent variables wins out, when we might be better advised to include more variables – even at the cost of consistency. Refinement rests on the unstated assumption that the past contains everything we need to know in order to understand the future, i.e. that nothing important will change. Instead, things may have changed structurally, and the time series we use may no longer reflect the evolution.

What lessons are to be drawn?

The first is: always check the performance of your predictors. We tend to remember our “hits”, and forget how many times we missed.
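What “checking the performance of your predictors” looks like in practice can be very modest – nothing more than a running log of predictions against outcomes. A minimal sketch, with invented entries purely for illustration:

```python
# A minimal sketch of the bookkeeping the first lesson asks for: write every
# prediction down, then compare it with what actually happened, so the misses
# get counted alongside the hits we would remember anyway.

predictions = []  # each entry: (what we predicted, what actually occurred)

def record(predicted, actual):
    predictions.append((predicted, actual))

def hit_rate():
    if not predictions:
        return None
    hits = sum(1 for p, a in predictions if p == a)
    return hits / len(predictions)

# Hypothetical usage: log outcomes as they come in, then review periodically.
record("team A wins", "team A wins")   # a hit
record("team C wins", "team D wins")   # a miss
record("team E wins", "team F wins")   # another miss

print(f"Hit rate so far: {hit_rate():.0%}")
```

The point is not the code but the discipline: the log exists precisely so that memory cannot quietly discard the misses.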

The second is to focus foremost on what has changed, rather than refining our understanding of the past as laid out in a given predictive model – which amounts to expecting the past to replicate itself in the future. Re-engineering the model may be more productive than refining it. Of course, the new version may be a “jury-rigged” affair, but that’s what life is about.

______________

Original post at the DeepDip.

[1] Duncan J. WATTS (2011): Everything is obvious (once you know the answer). How common sense fails. Atlantic Books, London. (pg. 168 ff.)

[2] Oskar MORGENSTERN (1955): On the accuracy of economic observations. 2nd edition. Princeton University Press, Princeton; 322 pp.

 
