**Must-Read:** At least this has produced some useful work on how to teach today's ignorant things about convergence to equilibrium that Frank Fisher, Tom Sargent, and many others knew very well back at the end of the 1970s:

**: Neo-Fisherian Equilibrium with Upper and Lower Bounds: "Narayana [Kocherlakota]... [thinks] models should have relatively robust predictions....**

If what happens in the limit is totally different from what happens at the limit, we have a problem.... If each boy racer had wanted to drive at 90% of the average speed, we get exactly the same Nash equilibrium, where they all drive at 0 km/hr and stay in Ottawa, only now it's a 'stable' equilibrium. We do not get multiple equilibria by adding an upper (or negative lower) bound to their speed. Any plausible equilibrium should be like that; it should be robust to minor changes in the boundary conditions. But if each boy racer wants to drive at 110% of the average speed, so driving at 0 km/hr becomes an unstable equilibrium, adding boundary conditions creates new equilibria that are more plausible than the original unstable equilibrium, simply because they are stable....
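The contrast between the two cases can be sketched with a few lines of simulation (an illustrative sketch, not from the post; the starting speed of 50 km/hr and the 200 km/hr cap are made-up parameters):

```python
# Stylized "boy racer" best-response dynamics: each period, every driver
# targets k times the current average speed, so the symmetric average
# follows v_{t+1} = k * v_t, truncated at an upper bound (the speed cap).
def iterate_speed(k, v0=50.0, cap=200.0, steps=200):
    """Iterate v -> min(k * v, cap) and return the long-run speed."""
    v = v0
    for _ in range(steps):
        v = min(k * v, cap)
    return v

# k = 0.9: drivers undershoot the average; speeds decay toward the
# 0 km/hr equilibrium, which is stable, and the cap never binds.
stay_in_ottawa = iterate_speed(0.9)

# k = 1.1: 0 km/hr is still an equilibrium but now unstable; any positive
# speed grows until the cap binds, creating a new, stable equilibrium there.
pinned_at_cap = iterate_speed(1.1)
```

With k = 0.9 the bound is irrelevant to the outcome, matching the post's claim that adding a bound creates no new equilibria in the stable case; with k = 1.1 the bound itself becomes the stable resting point.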

We can see what Narayana is doing, when he considers a finite horizon version of the same game, as being like adding boundary conditions. If the game's equilibrium is very fragile when you add or subtract or change those boundary conditions, there is something wrong with that equilibrium. We ought to get the same results in the limit as at the limit. If we don't, we have a problem. Something like the Howitt/Taylor principle (or controlling a monetary aggregate or NGDP rather than a nominal interest rate) can convert an unstable equilibrium into a stable one.
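The last point, a Taylor-principle feedback rule converting an unstable equilibrium into a stable one, can be illustrated with a textbook-style adaptive-expectations sketch (my illustration, not the post's model; all parameter values are made up):

```python
# Hedged, textbook-style sketch (not the post's model): inflation responds
# to the real-rate gap under adaptive expectations,
#   pi_t = pi_{t-1} - PHI * (i_t - R - pi_{t-1}),
# and we compare an interest-rate peg with a Taylor-principle rule.
R = 2.0        # real interest rate, assumed constant (illustrative)
PI_STAR = 2.0  # inflation target (illustrative)
PHI = 0.5      # sensitivity of inflation to the real-rate gap
THETA = 0.5    # Taylor-rule response to the inflation gap

def simulate(rule, pi0=3.0, steps=60):
    """Iterate inflation forward under the policy rule i_t = rule(pi)."""
    pi = pi0
    for _ in range(steps):
        pi = pi - PHI * (rule(pi) - R - pi)
    return pi

# Peg at the Fisherian steady-state rate: pi_{t+1} = (1 + PHI) * pi_t
# - PHI * PI_STAR; the coefficient 1 + PHI > 1 makes the equilibrium unstable.
def peg(pi):
    return R + PI_STAR

# Taylor principle: raise the real rate more than one-for-one with inflation,
# giving pi_{t+1} = pi_t - PHI * THETA * (pi_t - PI_STAR), which is stable
# whenever 0 < PHI * THETA < 2.
def taylor(pi):
    return R + pi + THETA * (pi - PI_STAR)

diverged = simulate(peg)       # runs away from PI_STAR
stabilized = simulate(taylor)  # converges to PI_STAR
```

The peg leaves the Fisherian steady state in place but unstable; the feedback rule makes the same steady state the stable attractor, which is the sense in which the Howitt/Taylor principle converts an unstable equilibrium into a stable one.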