Paul Krugman reminds me of Noah Smith in July 2012:
Back in 2009 [John] Cochrane predicted inflation, it hasn't happened yet, and DeLong made fun of Cochrane for that fact. Cochrane... [responds] The inflation prediction was (and is) a statement about risks, not a time-specific forecast.... This is a very fair retort. Predictions are not necessarily forecasts...
Naughty, naughty Noah!
What Cochrane actually said was this:
The danger now is inflation. And I would say it's a greater danger than most of the other people have said. Our danger now is a run on Treasury debt. It's not just can the Fed soak this stuff back up again, but can it soak this enormous amount of debt back up again when people don't want either money or Treasury bills or anything labeled "U.S. Government." The danger is not 1932; the danger is Argentina, a massive run from Treasury debt. And then monetary policy will not be able to do anything. You can fool around with interest rates all you want. When people don't want Treasury bills or money you're stuck...
When you say something is "the danger", you are saying that other things are not dangers. "The". Definite article. You are saying that there is one and only one danger. And you are saying that it is this:
Inflation... a run on Treasury debt... people... not want[ing] either money or Treasury bills or anything labeled "U.S. Government"... Argentina, a massive run from Treasury debt... monetary policy will not be able to do anything...
And if other things are in fact dangers? Then the fact that you did not say that "the danger" would arrive by a date certain does not rescue you. You are wrong in your dismissal of those other things as not dangers.
You can plead hyperbole--that you did not mean that this was really the only danger, merely the only significant danger, or the only truly worrisome one.
But then, when asked about what actually happened, and whether your danger was in fact the only worrisome or significant one, all you can do is say something like: "LOOK! HALLEY'S COMET!!"
I still say that while it is true that predictions and scenarios are not necessarily forecasts, a willingness to mark your beliefs to market is the first virtue of a wannabe academic.
I say a failure to mark your beliefs to market is a fatal vice for anybody who wants to be taken as an intellectual.
I say that the only way to deal with people who are not willing to do so that is fair to everyone--fair to them, to readers, and to other writers--is to classify them as examples of irrational sociological and historical forces at work: of fads, of fashions, of bubbles, of intellectual Ponzi schemes, et cetera...
So say we all.
And we have: Paul Krugman: Inflationistas at Bayes:
I’m putting together a syllabus... ended up looking at... the usual suspects.... John Cochrane... declared, back in 2009, that “the danger now is inflation.” Cochrane angrily denied that this was of any significance--he only said that it was a danger, he didn’t necessarily predict that it would happen within any particular time frame. (He thereby provided a demonstration of another key fact about our economic debate: nobody ever admits that they were wrong about anything, and nobody changes views in the light of evidence.) Noah Smith, characteristically, tried to find some extenuating circumstances. Etc., etc.
So anyway, retreading this old ground, I found myself thinking about Bayes’s theorem.
It seems to me that Cochrane’s position--he only said it was a danger, not that it would happen at any particular time, so it signifies nothing if it doesn’t happen even after four years have passed--is just untenable in its strong form. If saying that something is a danger carries no implications for the likelihood that it will actually occur, what is the point of saying it? You might as well stand up there and say:
Nice day for weather
Mary had a little lamb.
No, clearly talking about the danger of inflation was some kind of statement about probabilities--in particular, a statement that the probability of inflation is, according to the speaker’s model of the world, higher than it is in other peoples’ models of the world. And that means that actual events do or at least should matter--they may not prove that one model is wrong and another is right, but they should certainly affect your assessment of which model is more likely to be right.
In short, it’s a Bayesian thing.
Now, language is often vague here. But let’s do a sort of finger exercise. Imagine John, a finance professor, and Paul, an economist/columnist. (George and Ringo wisely stayed out of the whole thing.) In 2009, John says “the danger now is inflation,” while Paul says “there is little danger of inflation.” So, let’s try to assign probabilities to those statements. I don’t think it’s unfair to imagine that John was giving an 80 percent probability to serious inflation over the next four years, while Paul was giving it only a 20 percent probability.
You, as an outside observer, have no way to judge these guys, so ex ante you give each of them a 50 percent probability of having the right model.
Four years pass, and inflation fails to materialize.
You can think of this as a lottery in which there are two urns--a John urn and a Paul urn--containing black balls representing inflation and white balls representing no inflation, with the John urn containing 80 percent black balls but the Paul urn only 20 percent. (What’s a finance professor urn? A lot, when you take consulting fees into account.)
First, a coin is tossed to determine which urn will be used, then a ball is drawn. You know that the ball is white; you now have to estimate the odds that it was drawn from John’s urn.
And the answer is that there’s only a 20 percent chance. Ex ante you considered it equally likely that John and Paul might be right; ex post those odds have shifted to 4 to 1 in Paul’s favor.
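The urn arithmetic above can be written out as a one-step Bayesian update. The sketch below (my illustration, not part of the original post) uses the stated assumptions: a 50-50 prior over the two models, John assigning an 80 percent probability to serious inflation, Paul 20 percent, and the observation that inflation did not occur.

```python
# Two-urn Bayes update: which model (John's or Paul's) drew the white
# "no inflation" ball? Numbers come from the finger exercise above.

prior = {"John": 0.5, "Paul": 0.5}          # ex ante, a coin toss
p_inflation = {"John": 0.8, "Paul": 0.2}    # each model's inflation probability

# The evidence is "no inflation", so the likelihood of the evidence
# under each model is 1 minus that model's inflation probability.
likelihood = {name: 1 - p for name, p in p_inflation.items()}

# Bayes' theorem: posterior is proportional to prior times likelihood.
unnormalized = {name: prior[name] * likelihood[name] for name in prior}
total = sum(unnormalized.values())
posterior = {name: w / total for name, w in unnormalized.items()}

print({name: round(p, 3) for name, p in posterior.items()})
# John ends up at 0.2 and Paul at 0.8: odds of 4 to 1 in Paul's favor.
```

Note that the 4-to-1 result falls directly out of the likelihood ratio (0.8 versus 0.2); with an even prior, the posterior odds just equal that ratio.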
The point is that using hedged language doesn't insulate you from consequences if things don't turn out the way you were clearly suggesting they would--and neither does the true observation that sometimes the right model makes a wrong prediction. If your model led you to believe that inflation was a "great danger" in 2009, the fact that this danger never came to pass should substantially reduce your belief in that model--and should substantially reduce your credibility if you refuse to revise your beliefs.