For the Weekend: Stephen Vincent Benét: The Devil and Daniel Webster XII

Comment of the Day: I have evoked some rants from Robert Waldmann...: Robert Waldmann: Monday Smackdown: Oh Dear!: "It isn't exactly Robert Waldmann's critique...

...In 1982 someone told me that he thought macroeconomics had taken a wrong turn and commenced a sterile research program which would last decades and be fruitless. That's a lot more impressive than saying such a thing now.

Who was that guy? Oh yeah, his name was Brad DeLong.

He was explaining why he had chosen history and econometrics as fields.

Experiments? Bah, humbug!

I don't know where to put this, but I have a theory as to why people call simulations "experiments": if you are dealing with something you don't understand, you try to learn how it works with experiments. The perception that computer-assisted theoretical work is experimental comes from the fact that no one understands what drives the behavior of modern DSGE models.

This is one of their defects. One use of a model is to clarify thought. A model which is as mysterious as a cell (or an economy) can't clarify thought. If you need to do numerical experiments to understand the behavior of your own model, it has failed one of the purposes of models. If it yields bad predictions, it has failed the other. The sense that theoretical macro is experimental is just another aspect of its failure as theory: it isn't reality, but it is almost as incomprehensible as the real world. This is not a good thing.


@Shameful: No. Something which can be tested with a numerical experiment is a conjecture, not a hypothesis. It is mathematics, not social science. With numerical experiments one can learn about one's model; one can't learn about the real world. And mathematicians don't accept numerical experiments as a replacement for proofs: as mathematics they aren't theorems, they are examples. So numerical experiments are low-quality mathematics, and not science at all.
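
A toy illustration of the conjecture-versus-theorem point, a sketch of my own rather than anything from the thread: numerical checks generate examples, and examples can run out.

```python
# A numerical experiment yields examples, not theorems. Euler's
# polynomial n^2 + n + 41 is prime for every n from 0 to 39, so a
# numerical check up to 39 "confirms" the conjecture that it is always
# prime -- which fails at n = 40, where the value is 41^2.

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

print(all(is_prime(n * n + n + 41) for n in range(40)))  # True: 40 examples
print(is_prime(40 * 40 + 40 + 41))                       # False: no theorem
```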

Now they are also fun. Understanding the behavior of a complex model feels like learning about the world. It is much easier than proving theorems (rigour is tiresome) or collecting new data. It feels like research and also like a video game (fun). It's not science. It's not even like writing up the results of numerical experiments (ugh). It's not good. It's calling me. Sorry, I have to simulate my model. I can't remember the fiddle factors that made it fit the data. I had them once and I must find them.

Kids, Don't Let This Happen to You.


@JEC: Oh my. I teach PhD-level macro, so I had better have an answer to that one. I think we have more to say about growth than about the cycle. The Solow growth model is weirdly useful even though it is so simple that it seems it can't possibly be. Economic historians have now adopted it, but Solow was a macroeconomist. Newer models challenge it in interesting ways.
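
To show how little machinery that weirdly useful model needs, here is a minimal simulation sketch; the parameter values are illustrative assumptions of mine, not anything from the post.

```python
# The Solow model in one difference equation: capital per effective
# worker converges to the same steady state from any starting point.
alpha, s, delta = 0.3, 0.2, 0.05   # capital share, saving rate, depreciation
n, g = 0.01, 0.02                  # population and technology growth

def solow_path(k0, T=200):
    """Iterate k' = (s*k^alpha + (1 - delta)*k) / (1 + n + g)."""
    k = k0
    for _ in range(T):
        k = (s * k**alpha + (1 - delta) * k) / (1 + n + g)
    return k

# Closed-form steady state: s*k^alpha = (n + g + delta)*k.
k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))
print(k_star, solow_path(0.5), solow_path(20.0))  # all three roughly equal
```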

I think an IS curve plus a Phillips curve with what used to be called static expectations is, by far, the most useful model for policy makers. There is some point in reviewing the arguments that these models can't work (out of sample, for conditional forecasts of the effects of policy, etc.) on the way to noting that they work better than newer models. That is, to tell the kids: don't use them because they are very simple, use them because they work. Not much to show for 45 years of effort by very smart people, but it is something.
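
A minimal sketch of that workhorse: an IS curve plus an accelerationist Phillips curve with static expectations, closed with a simple Taylor-type rule. All parameter values and the policy rule are illustrative assumptions of mine, not anything from the post.

```python
b, a, phi = 1.0, 0.3, 0.5        # IS slope, Phillips slope, policy response
r_star, pi_target = 2.0, 2.0     # neutral real rate, inflation target

pi = 6.0                         # start with inflation above target
for t in range(12):
    i = r_star + pi + phi * (pi - pi_target)  # rule leans against inflation
    y = -b * (i - pi - r_star)                # IS: gap falls with real-rate gap
    pi = pi + a * y                           # Phillips curve, static expectations
    print(f"t={t:2d}  i={i:5.2f}  gap={y:+5.2f}  pi={pi:5.2f}")
```

A dozen lines produce the standard policy story: disinflation is bought with a string of negative output gaps.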

Krugman and Blanchard are macroeconomists. They have a lot of useful things to say. Their stuff is published on blogs and in Peterson Institute working papers, but it can be taught to graduate students (or to undergraduates, but that's a strength). It is obvious that they have a lot to say which your average smart person doesn't understand the first time they explain it. This is also true of Brad (here), but he's an economic historian, pundit, blogger, moralist... oh hell, you're here, you know.

Here I think Krugman especially has two points useful to graduate students:

  1. Old Keynesian economics with an IS curve is very useful (the fact that they already studied it as undergraduates doesn't mean they can't be told that).

  2. Everything which is different in New Keynesian economics can be explained with two periods: the present and the long run. This demystifies it, and shows that the assumption that there is only one possible long-run steady state (or balanced growth path), and that the economy converges to it, is central to all the implications of New versus Old Keynesian economics (a two-period sketch follows this list). It is also worth noting that there is no evidence this assumption has anything to do with reality.
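
A minimal version of point 2, in my notation rather than Krugman's exact one: with log utility, the consumption Euler equation linking the present to the long run is

$$\frac{1}{C_t} \;=\; \beta\,(1+i_t)\,\frac{P_t}{P_{t+1}}\,\frac{1}{C^{*}},$$

where $C^{*}$ and $P_{t+1}$ are pinned down by the assumed unique steady state. Solving for $C_t$ makes current demand a decreasing function of the real interest rate (an IS curve), and every distinctly New Keynesian implication flows from treating $C^{*}$ as fixed.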

So that's lecture 1. Now I have to plan lecture 2.


I object to another word in the sentence—'only'. I will modify it to answer your objection: "The only place that we can do [thought] experiments is in dynamic stochastic general equilibrium (DSGE) models." The assertion is that all models are general equilibrium models.

Now, back when I was a student, general equilibrium models were Walrasian models with price-taking agents. Perfect competition was one of the assumptions; that's what the phrase meant. Now the equilibrium which is general is Nash equilibrium, and imperfect competition is a standard assumption. The triumph of imperfect competition was the Christiano, Eichenbaum, and Evans model, and it was a triumph because it meant that people who worked next to the Great Lakes had admitted that people who worked nearer the Atlantic Ocean were more nearly correct.

But nothing has ever supported the argument that anything is gained by assuming the world is in Nash equilibrium. There are two problems. By itself, the assumption of Nash equilibrium implies nothing at all: it can't impose discipline on our theory. Only if you add plausible assumptions about tastes and technology does it imply anything. Yet standard models have confessedly implausible assumptions. So it is argued that it is necessary to assume Nash equilibrium to impose discipline, but fine to assume things which are absurd and contradicted by micro data. The arguments are opposites: not only is it impossible for both to be true, it is impossible for both to be false.
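
A toy version of the "no discipline" point, entirely my own illustration: any observed behavior is a Nash equilibrium for *some* specification of tastes.

```python
# Give each player a payoff of 1 for her observed action and 0 for
# anything else; no unilateral deviation pays, so the observed profile
# is trivially a Nash equilibrium -- whatever the profile was.
observed = {"player_1": "defect", "player_2": "hoard_cash"}
actions = {"player_1": ["defect", "cooperate"],
           "player_2": ["hoard_cash", "lend"]}

def payoff(player, action):
    return 1 if action == observed[player] else 0

# Nash check: nobody can raise her payoff by deviating alone.
is_nash = all(payoff(p, observed[p]) >= max(payoff(p, alt) for alt in actions[p])
              for p in observed)
print(is_nash)  # True, for any `observed` you care to write down
```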

Longer rant here: http://rjwaldmann.blogspot.it/2012/03/modern-macroeconomic-methodology-modern.html

The history of efforts to base macro on Nash equilibrium is, as our host Brad notes, what one would expect from a fundamentally misguided research project. With great effort you can make the new, improved models behave like the old models. Once this is achieved, nothing at all has been gained. The hope might be that once the model is fiddled to fit 10 aspects of old Keynesian models (now considered stylized facts, not proper good economic DSGE models), it will then fit the 11th. That is what happens in a non-degenerating research program. But consistently, the new highbrow, high-tech DSGE models require one fiddle factor for every moment of the data they match. That is what happens when one's approach is worthless and one's core assumptions are false.

Also, all macroeconomists I know agree with all of this.


What is "the distinction between adding elements that take the model closer to reality (heterogeneous agents, bounded rationality, financial sector etc.) and a fiddle factor?"

Simple. If you use micro data to model heterogeneous agents, it is not a fiddle factor. If you assume whatever distribution of traits allows you to fit aggregate moments, the parameters of that distribution are fiddle factors. Similarly, if the financial sector is estimated with financial data, it's not a fiddle factor. If there are unexplained and unobserved financial factors which enable you to fit aggregates, they are fiddle factors.

Finally, and this is the hard case, if your model of agents without rational expectations is based on psychological experiments (or, again, on micro data), it is not a fiddle factor.

The key distinction is that in any legitimate science there isn't one arbitrary factor for every phenomenon to be explained (I am quoting Einstein from memory).

Only if you can kill two birds with one stone do you have any business submitting an article for publication.

If a model with parameters chosen (and latent variables invented) to fit moments gives good out-of-sample forecasts, then you might be on to something. If the forecasts are only about as good as those of univariate time-series models, you have wasted your time.
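
A sketch of the benchmark test implied here, with simulated data for illustration: before crediting a fitted model, compare its out-of-sample forecast errors to a naive univariate benchmark such as an AR(1).

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) * 0.1 + rng.normal(size=300)  # fake series
train, test = y[:200], y[200:]

# AR(1) coefficient by least squares on the training sample only.
rho = train[1:] @ train[:-1] / (train[:-1] @ train[:-1])

ar_errors = test[1:] - rho * test[:-1]          # one-step-ahead errors
rmse_ar = float(np.sqrt(np.mean(ar_errors**2)))
print(f"AR(1) out-of-sample RMSE: {rmse_ar:.3f}")
# If the structural model's RMSE is not clearly below this, the fiddle
# factors bought you nothing out of sample.
```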

It is a fact that the Smets-Wouters model fit the data equally well when Cosma Shalizi randomly renamed the time series, say replacing the nominal interest rate with hours worked (in fact, on average the random permutations gave higher likelihoods than the standard assignment of series to model variables).
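
A sketch of that label-permutation check; `loglik` is a hypothetical placeholder for re-estimating the model under a given assignment of observed series to model variables, and nothing here reproduces the actual Smets-Wouters computation.

```python
import random

variables = ["output", "consumption", "investment", "hours",
             "inflation", "wages", "interest_rate"]

def loglik(assignment):
    """Placeholder: estimate the model under `assignment` (model variable
    -> observed series) and return the maximized log-likelihood."""
    raise NotImplementedError("stands in for a full estimation run")

baseline = dict(zip(variables, variables))   # the intended assignment
shuffled = random.sample(variables, k=len(variables))
permuted = dict(zip(variables, shuffled))    # e.g. interest_rate -> hours

# The damning comparison: if loglik(permuted) is on average no worse than
# loglik(baseline), the fit owes nothing to the economics in the labels.
```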

This is what one would expect with a totally wrong, utterly worthless modelling strategy.

I have another question. Is there ever a time to drop a research program and start over, as if no one had tried to explain the phenomena? If not, why are you so sure people should have stuck with alchemy and the four-humours theory of disease? If so, why don't you think DSGE macro is at that stage?
