Saturday, March 6, 2010


I think anyone who runs economic experiments should have to participate in lots of them too, for two reasons. 1) Experimental results are much more surprising once you realize that the players don't really understand what's going on at a larger scale, don't know how to maximize their payoff, don't know what the equilibrium is, and are just fumbling around trying to make a few bucks. 2) Theoretical results aren't behavioral, and theoretical intuition is very different from behavioral intuition. If you are used to thinking in terms of the latter, it guides the former (and, methodologically, guides the experimental design to start with, which is critically important).

As an example of the first point, I once played a committee voting game where we tried to get majority support for an agenda that was as close as possible to our ideal point. I had virtually no idea what other people's preferences were, based on only a few votes and propositions, and REALLY had no idea what the equilibrium was. Yet results from these experiments robustly show that equilibrium is reached quickly and exactly. From that confused perspective, it's pretty crazy.

As for the second point, once as an undergrad I played a variant of matching pennies. Theoretically, the optimal behavior is to randomize, choosing each action 50% of the time. In reality, you sit there trying to guess what your partner will do based on their previous decisions. Humans aren't very good randomizers, and the longer both of us stuck with a single choice, the more the tension rose. Every time I chose the same action, I was implicitly saying, "hey, come and get me, I'm doing the same thing over and over, I dare you to optimize against it."
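To see why 50/50 is the theoretical answer, here's a minimal sketch (my own illustration, not from any particular experiment) of the indifference logic in matching pennies:

```python
# Matching pennies payoffs for the row player (the "matcher"):
# +1 if the two choices match, -1 otherwise (zero-sum game).
payoff = [[1, -1],
          [-1, 1]]

def expected_payoffs(opponent_mix):
    """Row player's expected payoff for each pure action,
    given the opponent's mixing probabilities (heads, tails)."""
    return [sum(p * q for p, q in zip(row, opponent_mix))
            for row in payoff]

# Against a 50/50 opponent, both actions earn the same (zero)
# expected payoff -- the indifference condition that makes
# 50/50 the unique mixed-strategy equilibrium.
print(expected_payoffs([0.5, 0.5]))    # [0.0, 0.0]

# Any detectable bias is exploitable: against an opponent who
# plays heads 75% of the time, matching heads earns +0.5/round.
print(expected_payoffs([0.75, 0.25]))  # [0.5, -0.5]
```

The second line is exactly the "come and get me" tension: sticking with one action is a bias, and a bias hands your partner a profitable deviation.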

Of course, theoretically, the other person's period-by-period choices have nothing to do with what you should do (if they're anywhere near equilibrium, which is obviously true in this game...). Choices should be perfectly uncorrelated with previous choices. Yet in models of learning, beliefs about your partner's next choice are some kind of weighted average of their previous choices, i.e., positively correlated. And in reality, with real humans who aren't good at randomizing, and partners who are the same or who at least anticipate this, they should be negatively correlated. Having played it, it's obvious that those standard belief models are inappropriate in that situation. But that's what they focused on anyway.
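To make the contrast concrete, here's a minimal sketch (my own illustration, not the model from any specific paper) of a weighted-average belief rule of the kind learning models use:

```python
# Fictitious-play-style belief: the predicted probability that the
# partner plays heads next round is a (discounted) average of their
# past plays -- by construction, positively correlated with history.
def belief_heads(history, discount=0.9):
    """history: past partner choices, 1 = heads, 0 = tails.
    Returns the believed probability of heads next round,
    weighting recent rounds more heavily (discount < 1)."""
    if not history:
        return 0.5  # uninformative prior
    weights = [discount ** (len(history) - 1 - t)
               for t in range(len(history))]
    return sum(w * h for w, h in zip(weights, history)) / sum(weights)

# After a streak of heads, the model grows ever more confident that
# heads is coming again. But a human partner who knows they've been
# predictable is primed to switch -- which is exactly the negative
# correlation this rule cannot represent.
print(belief_heads([1, 1, 1, 1]))   # 1.0
print(belief_heads([1, 0, 1, 1]))   # > 0.5, pulled toward recent heads
```

Under this rule the belief can only chase the streak; it has no way to encode "they've been predictable too long, so they're about to break the pattern."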

(Not that it's a bad paper, but I think a different game would be a better setting in that sense...)