Mark Thoma on Academic Macroeconomics' Streetlight Problems: Tuesday One Year Ago on the Internet Weblogging

Mark Thoma: Economist's View: Empirical Methods and Progress in Macroeconomics

The blow-up over the Reinhart-Rogoff results reminds me of a point I’ve been meaning to make about our ability to use empirical methods to make progress in macroeconomics. This isn't about the computational mistakes that Reinhart and Rogoff made, though those are certainly important, especially in small samples; it's about the quantity and quality of the data we use to draw important conclusions in macroeconomics.

Everybody has been highly critical of theoretical macroeconomic models, DSGE models in particular, and for good reason. But the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another, i.e. to sort between these imaginative constructions. We like to think of ourselves as scientists, but if data can’t settle our theoretical disputes – and it doesn’t appear that it can – then our claim for scientific validity has little or no merit….

Without repeated experiments – with just one set of historical data for the US to rely upon – it is extraordinarily difficult to tell the difference between a spurious correlation and a true, noteworthy relationship in the data. Even so, if we had a very, very long time-series for a single country, and if certain regularity conditions persisted over time (e.g. no structural change), we might be able to answer important theoretical and policy questions (if the same policy is tried again and again over time within a country, we can sort out the random and the systematic effects). Unfortunately, the time period covered by a typical data set in macroeconomics is relatively short (so that very few useful policy experiments are contained in the available data, e.g. there are very few data points telling us how the economy reacts to fiscal policy in deep recessions).
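The spurious-correlation worry is easy to see in simulation. The sketch below is mine, not Thoma's; it draws pairs of *independent* random walks of roughly the sample length he describes and counts how often they nonetheless look strongly correlated. All parameter values are illustrative.

```python
# Minimal sketch: two independent random walks, observed over a short
# macro-length sample, frequently appear highly correlated.
import numpy as np

rng = np.random.default_rng(0)
n_obs = 112          # roughly a 28-year quarterly sample
n_trials = 10_000

big_corr = 0
for _ in range(n_trials):
    x = np.cumsum(rng.standard_normal(n_obs))  # independent random walk 1
    y = np.cumsum(rng.standard_normal(n_obs))  # independent random walk 2
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:
        big_corr += 1

print(f"|corr| > 0.5 in {big_corr / n_trials:.1%} of trials, "
      "despite zero true relationship")
```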

There is another problem with using historical as opposed to experimental data: testing theoretical models against data the researcher already knows about when the model is built. In this regard, when I was a new assistant professor, Milton Friedman presented some work at a conference that impressed me quite a bit. He resurrected a theoretical paper he had written 25 years earlier (it was his plucking model of aggregate fluctuations), and tested it against the data that had accumulated in the time since he had published his work… a test against data that the investigator could not have known about when the theory was formulated is a different story – those tests are meaningful (Friedman’s model passed the test using only the newer data).
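Friedman's exercise is, in modern terms, an out-of-sample test: fit on the data available at "publication," judge only on what arrived afterward. A minimal sketch of that logic, using a stand-in AR(1) forecasting rule and synthetic data rather than his plucking model:

```python
# Minimal sketch of a Friedman-style out-of-sample test on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(300)) * 0.1 + 2.0  # stand-in macro series

split = 200                    # "publication date": data known to the author
train, test = y[:split], y[split:]

# Fit y_t = a + b * y_{t-1} by least squares on the pre-publication sample only.
X = np.column_stack([np.ones(split - 1), train[:-1]])
b = np.linalg.lstsq(X, train[1:], rcond=None)[0]

# One-step-ahead forecasts on data the "author" could not have seen.
pred = b[0] + b[1] * test[:-1]
rmse_out = np.sqrt(np.mean((test[1:] - pred) ** 2))
print(f"out-of-sample one-step RMSE: {rmse_out:.3f}")
```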

As a young time-series econometrician struggling with data/degrees-of-freedom issues I found this encouraging. So what if in 1986 – when I finished graduate school – there were only 28 years of quarterly observations for macro variables (112 total observations; reliable data on money, which I almost always needed, doesn’t begin until 1959). By, say, the end of 2012 there would be almost double that amount (216 versus 112!!!). Asymptotic (plim-type) results here we come! (Switching to monthly data doesn’t help much, since it’s the span of the data – the distance between the beginning and the end of the sample – rather than the frequency at which the data are sampled that determines many of the “large-sample results”.)
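The span-versus-frequency point can be checked directly. In this sketch (my simulation, not Thoma's; the persistence parameter is illustrative), sampling the same 28-year span monthly instead of quarterly barely sharpens the estimate of a persistent series' mean, while extending the span to 54 years does:

```python
# Minimal sketch: precision of a sample mean depends on the span of the
# data, not on how finely the same span is sampled.
import numpy as np

rng = np.random.default_rng(2)
rho, n_sims = 0.95, 2_000           # monthly AR(1) persistence (illustrative)

def sim_means(n_months, step):
    """Sample mean of an AR(1) observed every `step` months, n_months total."""
    out = np.empty(n_sims)
    for i in range(n_sims):
        e = rng.standard_normal(n_months)
        y = np.empty(n_months)
        y[0] = e[0]
        for t in range(1, n_months):
            y[t] = rho * y[t - 1] + e[t]
        out[i] = y[::step].mean()
    return out

print("sd of mean, 28y monthly  :", sim_means(12 * 28, 1).std().round(3))
print("sd of mean, 28y quarterly:", sim_means(12 * 28, 3).std().round(3))
print("sd of mean, 54y quarterly:", sim_means(12 * 54, 3).std().round(3))
```

The first two standard deviations come out nearly identical; only the longer span shrinks the third.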

By today, I thought, I would have almost double the data I had back then and that would improve the precision of tests quite a bit. I could also do what Friedman did, take really important older papers that give us results “everyone knows” and see if they hold up when tested against newer data.

It didn’t work out that way. There was a big change in the Fed’s operating procedure in the early 1980s, and because of this structural break, 1984 is today a common starting point for empirical investigations….
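A break of this kind is usually diagnosed with something like a Chow test: compare a regression fit on the full sample against separate fits before and after a candidate break date. A minimal sketch on invented data (not Fed data; the break point and coefficients are made up):

```python
# Minimal sketch of a Chow test for a structural break at a known date.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, break_at, k = 160, 100, 2                 # k = parameters per regime
x = rng.standard_normal(n)
y = np.where(np.arange(n) < break_at,
             1.0 + 0.5 * x,                  # pre-break relationship
             2.0 - 0.5 * x) + rng.standard_normal(n) * 0.5

def rss(xs, ys):
    """Residual sum of squares from an OLS fit of ys on a constant and xs."""
    X = np.column_stack([np.ones(len(xs)), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    resid = ys - X @ beta
    return resid @ resid

rss_pooled = rss(x, y)
rss_split = rss(x[:break_at], y[:break_at]) + rss(x[break_at:], y[break_at:])
F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
p = 1 - stats.f.cdf(F, k, n - 2 * k)
print(f"Chow F = {F:.1f}, p = {p:.4f}")      # small p => evidence of a break
```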

I haven’t completely lost faith, but it’s hard to be satisfied with our progress to date. It’s even more disappointing to see researchers overlook these well-known, obvious problems – for example, the lack of precision and the sensitivity to data errors that come with reliance on just a few observations – in order to oversell their results.
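That sensitivity is worth making concrete, since it is exactly the Reinhart-Rogoff failure mode: in a small sample, one bad cell can move the headline estimate dramatically. A minimal sketch with invented numbers (nothing here is Reinhart and Rogoff's actual data):

```python
# Minimal sketch: with a handful of observations, a single data error
# can swing a regression slope dramatically.
import numpy as np

debt = np.array([30., 50., 60., 75., 85., 95.])     # debt/GDP, 6 "countries"
growth = np.array([3.0, 2.8, 2.5, 2.4, 2.2, 2.0])   # mild negative relationship

def slope(x, y):
    return np.polyfit(x, y, 1)[0]                   # OLS slope

bad = growth.copy()
bad[-1] = -5.0                                      # one transcription error
print(f"slope, clean data: {slope(debt, growth):+.3f}")
print(f"slope, one error : {slope(debt, bad):+.3f}")
```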

Why Politics and Economics Are a Toxic Cocktail:

Macroeconomics has not fared well in recent years. The failure of standard macroeconomic models during the financial crisis provided the first big blow to the profession, and the recent discovery of the errors and questionable assumptions in the work of Reinhart and Rogoff further undermined the faith that people have in our models and empirical methods. 

What will it take for the macroeconomics profession to do better?

Prior to the crisis, macroeconomists believed that large financial crashes – the type that could cause a Great Depression – were all but impossible in a modern economy guided by brilliant economists. Because of this, standard theoretical models focused on other questions.

When it became evident that economists weren’t so brilliant after all – that the risk of a financial meltdown and a deep, prolonged recession was a very real possibility – the standard macroeconomic models provided little or no guidance about how monetary and fiscal policymakers should respond. My solution at the time, one heartily endorsed by others, was to go back to the IS-LM or old Keynesian model constructed after the Great Depression, a model built to provide guidance on exactly the types of problems we were facing. Keeping in mind the pitfalls in the IS-LM model and what we have learned since, this proved to be very useful.

However, those of us who used the older model for guidance faced considerable criticism from the advocates of modern models. The older models had been rejected for good reason, the advocates argued, and it seemed as though the powers within macroeconomics were rising up in defense of existing models. I was very pessimistic about making theoretical progress at the time. But my view has changed…. First, the existing models have proven more flexible than I imagined…. I am not convinced that these “Dynamic Stochastic General Equilibrium” models will, in the end, be capable of being pushed as far as we need to go. But there is progress…. Second, there are efforts to challenge the mainstream with competing models from groups such as the Institute for New Economic Thinking, and there is far more willingness than I expected among young researchers to look into alternatives such as network and agent-based theoretical models. This will help to push the research forward.

But when it comes to the empirical methods we use to sort between competing theoretical models, it’s hard to be as optimistic….

I’m more encouraged than I expected to be about progress in macroeconomics, but I am not at all satisfied with the current state of the profession. We still have a long way to go.
