Wednesday, February 25, 2015

the rationality of heuristics

I recently joined a very interesting reading group and we're working our way through the new book "Evolution and Rationality: Decisions, Co-operation and Strategic Behaviour". The discussion about chapter 5 (by Brighton and Gigerenzer) was very thought-provoking. The chapter discusses the difference between "small world" and "large world" problems. The former are problems in which we are certain of the underlying processes, such as playing roulette. Large worlds are too complicated to be certain of the underlying probabilistic processes, perhaps too complicated to be certain of which processes are relevant at all, and the whole thing may not even be stationary. For example, playing the stock market.

The gist of the paper is that trying to model behavior in large worlds by deriving the optimal, rational thing to do is misguided. This approach works well in small worlds, but in large worlds it's highly likely that you'll specify the problem incorrectly, and heuristics can work better than complicated Bayesian statistical reasoning. There's a great example of guessing which of two German cities is larger based on a vector of attributes, such as whether it has a university, whether it's on a river, whether it's located in the industrial belt, etc. In this case a simple take-the-best heuristic, which looks only at the single most relevant attribute that distinguishes between the two cities in question, outperforms an SVM.
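The take-the-best heuristic is simple enough to sketch in a few lines. This is a minimal illustration with made-up cue names and an assumed validity ordering, not the actual cues or data from the chapter:

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Guess which of two objects scores higher on some criterion.

    cues_a, cues_b: dicts mapping cue name -> 1 (present) or 0 (absent).
    cue_order: cue names sorted from most to least valid.
    Returns 'a', 'b', or 'guess' if no cue discriminates.
    """
    for cue in cue_order:
        a, b = cues_a.get(cue, 0), cues_b.get(cue, 0)
        if a != b:          # the first discriminating cue decides; ignore the rest
            return 'a' if a > b else 'b'
    return 'guess'          # no cue discriminates between the two objects

# Hypothetical example: which city is larger?
cue_order = ['capital', 'university', 'industrial_belt', 'river']
city_a = {'capital': 0, 'university': 1, 'river': 1}
city_b = {'capital': 0, 'university': 1, 'river': 0, 'industrial_belt': 1}
print(take_the_best(city_a, city_b, cue_order))  # -> 'b'
```

The point of the example is how little the heuristic does: it never weighs or combines attributes, it just stops at the first one that tells the two cities apart.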

This is a strong statement: not only can you understand actual choices better if you allow yourself to consider non-Bayesian agents, you may also better understand what is actually optimal.

I'll say that again a different way because I think it's that important: when we observe people behaving in a way that seems suboptimal, we should not infer that people are violating the rational/Bayesian/vNM agent model. We should first question whether we truly understand the problem as well as we think we do.

This means that one very common response by economists trying to rescue homo economicus from psychologists' claims that people are non-(Bayesian/vNM/rational), while clearly true in many cases, is sometimes not even necessary. In particular, heuristics are often defended as rational because they strike the optimal balance between mental calculation costs and accuracy. But as the German city example shows, heuristics may in fact simply be a better approach to large-world problems than a more sophisticated statistical analysis, calculation costs aside.*

Heuristics therefore definitely belong in the basket of reasons behind one of my favorite soapboxes: respect revealed preferences! Behavioral economics is too often seen as an excuse for all kinds of intervention in choices in order to "help" people optimize. But trying to do that is problematic for all kinds of reasons, including that it is very hard to prove that people are actually making mistakes. Observing demand for commitment devices is one of the rare cases where we can definitely say that restricting the choices of some people would make them better off. Such clear cases are few and far between.**


*My other objection to this frequent assertion (which I do believe is true in many cases, just not so many) is that critics of economics don't understand that most economic models are "as-if" models. Many economists have started to forget this too, probably partly in response to all the negative press about classical economics that fixates on the implausibility of people actually making the calculations we model. But the trajectory of a baseball is difficult to calculate, yet humans instinctively predict it very well. The trajectory of a frisbee in gusty wind may not even be analytically tractable, but somehow humans catch frisbees reflexively. So why is it so hard to believe that humans are as good at optimizing their utility as they are at optimizing their frisbee catching? Analytically complicated problems do not imply high mental calculation costs.

**Not that situations in which people can be helped are rare, but situations in which we're sure there is room to help, and that by trying we won't make things worse, are rare.


JohnRaymond said...

Let me comment on just one very small point, which hopefully has implications for the broader points you're making: your comparison between the skill of catching a frisbee in the wind and the skill of optimizing utility in the economic domain (making wise choices between this or that investment or application of resources) suffers from the fact that the former, but not so much the latter, draws on innate abilities (vision, hand-eye coordination, muscle reflexes, etc., for which no formal education is necessary). What innate human abilities or skills, other than basic logic and rationality, help us make the right investment or decision about where to apply our resources?

Vera L. te Velde said...

Actually the latter also depends on innate abilities, and both can be improved through learning. Shachar's (my adviser at Berkeley) work is pretty interesting in showing that a measure of rationality (consistency in preferences) is a better predictor of success than, e.g., IQ or the Big Five or various other things.