Weekend Reading: Narayana Kocherlakota: On the Puzzling Prevalence of Puzzles

Narayana Kocherlakota: On the Puzzling Prevalence of Puzzles:

Academic macroeconomics is about solving a seemingly never-ending series of puzzles, as model after model fails, [fitting] only limited slices of the available data...

Very few macroeconomists try to look at, say, inflation swap options to make sure that the model is providing a good guide to market participants’ assessments of inflation tail risks.

But the prevalence of these puzzles is actually more than a little puzzling. Here’s what I mean: To an outsider or newcomer, macroeconomics would seem like a field that is haunted by its lack of data, especially good clean experimental data. In the absence of that data, it would seem like we would be hard put to distinguish among a host of theories with distinct policy recommendations.  So, to the novice, it would seem like macroeconomists should be plagued by underidentification or partial identification. But, in fact, expert macroeconomists know that the field is actually plagued by failures to fit the data – that is, by overidentification. 
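
A stylized way to see the distinction is to count moment conditions against parameters; the notation in the sketch below is editorial, not Kocherlakota's:

```latex
% Stylized moment-counting sketch (editorial notation, not from the original post).
% A model with parameter vector \theta implies m moment conditions in k unknowns:
\[
  \mathbb{E}\bigl[g(X_t,\theta)\bigr] = 0,
  \qquad \theta \in \mathbb{R}^{k},
  \qquad g(X_t,\theta) \in \mathbb{R}^{m}.
\]
\[
  m < k:\ \text{under-identified (many $\theta$ fit equally well)};\quad
  m = k:\ \text{exactly identified};\quad
  m > k:\ \text{over-identified (generically no $\theta$ fits every moment; each rejection becomes a ``puzzle'')}.
\]
```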

Why is the novice so wrong? The answer is the role of a priori restrictions in macroeconomic theory.  Macroeconomists use a body of theory that imposes a number of a priori parametric restrictions on households and businesses.  Households maximize expected discounted utility flows, with utility functions that are required to lie in a narrowly specified class. Businesses maximize profits and engage in monopolistic competition (or perfect competition). Everyone updates their beliefs according to a priori specified rules of some kind (usually Bayesian updating).
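
For concreteness, the canonical household restriction being described is expected discounted utility maximization; the CRRA functional form below is an illustrative choice on my part, not something specified in the post:

```latex
% Textbook a priori restriction: expected discounted utility with CRRA period utility.
% (The CRRA form is an illustrative choice, not taken from the post.)
\[
  \max_{\{c_t\}} \ \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t),
  \qquad 0 < \beta < 1,
  \qquad u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \quad \gamma > 0.
\]
```

Everything downstream – Euler equations, asset-pricing implications, welfare calculations – inherits the content of these functional-form choices.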

The mistake that the novice made is to think that the macroeconomist would rely on data alone to build up his/her theory or model. The expert knows how to build up theory from a priori restrictions that are accepted by a large number of scholars.  (Indeed, in the academe, that’s exactly what it means to be an expert macroeconomist.) Those restrictions are what give the models their empirical content. As it turns out, the resulting models actually end up with too much content – hence, the seemingly never-ending parade of puzzles.

My tone is more than a little mocking – but I actually think that this approach to macro made a lot of sense in the mid-1970s or early 1980s.  Data and estimation were both much more expensive than they are today.  It was (I think) reasonable to substitute relatively cheap theory-driven restrictions for relatively expensive data-driven information. (Although it’s a little disturbing how little empirical work underlies some of those agreed-upon theory-driven restrictions – see p. 711 of Lucas (JMCB, 1980) for a highly influential example of what I mean.)

But it’s 2016, not 1976.  I think that it’s time for us to rely a lot less on theory and a lot more on data when we come to addressing questions.  If we make that switch, we won’t be confronting puzzle after puzzle anymore.  Instead, we’ll have to learn to live with a different problem: partial (as opposed to exact or over) identification of parameters that will translate into uncertainty about answers to questions of interest.  (But so what? Chuck Manski taught our micro colleagues how to deal with partial identification many years ago.)  I admit that I’m not at all sure yet what such a change would mean in a practical sense – but I’m looking forward to finding out.
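
To give a sense of what living with partial identification looks like in practice, here is a minimal sketch of Manski-style worst-case bounds on a mean with missing data; the data and response rate are invented for illustration:

```python
import numpy as np

# Minimal sketch of Manski-style worst-case bounds (illustrative, invented data).
# The outcome y is known to lie in [0, 1]; it is observed only when observed == True.
rng = np.random.default_rng(0)
y = rng.uniform(size=1_000)                 # latent outcomes in [0, 1]
observed = rng.uniform(size=1_000) < 0.7    # roughly a 70% response rate

p_obs = observed.mean()                     # P(observed)
mean_obs = y[observed].mean()               # E[y | observed]

# Without assumptions about the missing values, E[y] is only set-identified:
lower = mean_obs * p_obs + 0.0 * (1 - p_obs)   # missing y's all at the lower bound
upper = mean_obs * p_obs + 1.0 * (1 - p_obs)   # missing y's all at the upper bound

print(f"E[y] is partially identified: [{lower:.3f}, {upper:.3f}]")
```

The answer is an interval rather than a point estimate, which is exactly the kind of honest uncertainty about "answers to questions of interest" described above.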

It's simple: you start with the correlations in the data. You then impose theoretical assumptions that move your model away from the correlations in the data. You then try to "test" or "fit" your model, which cannot be done without playing some kind of intellectual Three-Card Monte. You then dismiss all other approaches as "unscientific" and "ad hoc".
