
August 2018

Hysteresis: Some Fairly-Recent Must- and Should-Reads


  • The empirical studies are finding more and more hysteresis—more, in the sense of a persistent downward shadow cast by a recession, than I would have believed likely. I keep hunting for something wrong with these studies. But there are too many of them. And they all—at least all those published that cross my desk—point in the same direction: Karl Walentin and Andreas Westermark: Stabilising the real economy increases average output: "DeLong and Summers (1989)... argue that (demand) stabilisation policies can affect the mean level of output and unemployment...

  • As Chief Acolyte of the "hysteresis view", I must protest! The "hysteresis view" has proved correct: Benoît Cœuré: Scars that never were?: Potential output and slack after the crisis: "To be clear... I do believe that deep recessions can have effects on the supply capacity of the economy that may take some time to unwind...

  • We are not yet at maximum feasible employment: Jared Bernstein: Employment Breakeven Levels: They’re higher than most of us thought: "We know neither the natural rate of unemployment nor the potential level of GDP...

Continue reading "Hysteresis: Some Fairly-Recent Must- and Should-Reads" »


This may, to some degree, be the growing pains of new technology. There were people who strongly objected to printing, on the grounds that the only way to truly grok a book was to copy it out word-for-word by hand. In their view, printing produced a bunch of shallow intellectual poseurs who would have only a surface and inadequate knowledge of the books that they had not really read but only skimmed (cf.: Elizabeth L. Eisenstein (1980): The Printing Press as an Agent of Change https://books.google.com/books?isbn=0521299551; Johannes Trithemius (1492): In Praise of Scribes https://books.google.com/books?isbn=0919026087). And Sokrates's attitude toward writing as a greatly inferior simulacrum and inadequate mimesis that could not create the true knowledge obtained through real dialogue is well known (cf.: Plato (370 BC): Phaedrus). Nevertheless, we believe that we have managed to adapt to printing and indeed to the creation of manuscript rather than just the oldest oral master-and-apprentice intellectual technologies. Perhaps we will find different things to be true once we have trained our information-technology networks to be our servants as trusted information intermediaries and intellectual force multipliers, rather than (as they now are) the servants of the advertisers that pay them and that thus try to glue our eyeballs and attention to screens whether having our eyeballs and attention so-glued helps us become more like our best selves or not. But as of now the empirical evidence has become overwhelming: Susan Dynarski: For better learning in college lectures, lay down the laptop and pick up a pen: "When college students use computers or tablets during lecture, they learn less and earn worse grades. The evidence consists of a series of randomized trials, in both college classrooms and controlled laboratory settings...



The view that all that government should do in the economic realm was to establish property rights and enforce contracts was never true. Smart governments always did much, much more. (Dumb governments did much, much more too.) Indeed, it is only with proper regulation that a market can fulfill its appropriate social role as a consumer surplus-generating mechanism: Diane Coyle: Three Cheers for Regulation: "One of the striking changes any rich-world traveler to low-income countries cannot fail to have missed during the past decade or so is the rapid spread of mobile phone use...



The more interesting question, I think, is: Should we use the natural rate hypothesis in forecasting and expect it to materially affect our forecasts over the next three years? And the answer, I think, is: no. The gearing of inflation on its past is low, and there is little impact of unemployment on inflation in the short run. Plus there is no good reason to think anything like the natural rate hypothesis holds near zero inflation: Olivier Blanchard: Should We Reject the Natural Rate Hypothesis?: "Fifty years ago, Milton Friedman articulated the natural rate hypothesis... the natural rate of unemployment is independent of monetary policy.... there is no long-run trade-off between the deviation of unemployment from the natural rate and inflation...



As we try to figure out how to create a functional rather than a dysfunctional Habermasian public sphere to support at least semi-sane policies, I find it useful to look back at how previous functional and dysfunctional public spheres emerged and maintained themselves. The general view—which may be false—is that the Eighteenth-Century Enlightenment did pretty well. And it had one of its wellsprings in the development of new genres, all of which Rachael Scarborough King argues were in some way created as echoes and transformations of the personal letter. Well worth reading: Rachael Scarborough King: Writing to the World: Letters and the Origins of Modern Print Genres: "Rachael Scarborough King examines the shift from manuscript to print media culture in the long eighteenth century...



A very good question asked by Michael Tomasky: Michael Tomasky: What Are Capitalists Thinking?: "Every once in a while in history, cause and effect smack us in the face.... The kind of capitalism that has been practiced in this country over the last few decades has made socialism look far more appealing.... If you’re 28 like Alexandria Ocasio-Cortez... what have you seen during your sentient life?...



The "advantages of backwardness" have been a powerful factor in global equitable growth for centuries now. But they have definite limits. Very much worth reading from David Pilling on those limits: David Pilling: African economy: the limits of ‘leapfrogging’: "The rapid spread of technology has raised hopes for Africa, but digital services cannot take the place of good governance...



Just think: if the New York Times had been willing to play ball with Nate Silver, they could have things of this quality—rather than more of their standard politician-celebrity-gossip and "Javanka are going to save us all" that has done so much to empower the Orange-Haired Baboons of the world: Nathaniel Rakich: 538 Election Update: How Our House Forecast Compares With The Experts’ Ratings: "FiveThirtyEight’s forecast is a tad more bullish on Democrats’ chances overall than the three major handicappers...



Josh Marshall: We Know Trump Is Guilty. We’re Having a Hard Time Admitting It: "The greatest conceit in public life today is the notion that we don’t already know President Trump is guilty... of... conspiring... with a foreign power... and then continuing to cater to that foreign power either as payback for the assistance or out of fear of being exposed...



Orange-Haired Baboons: Some Fairly-Recent Must- and Should-Reads


  • Just think: if the New York Times had been willing to play ball with Nate Silver, they could have things of this quality—rather than more of their standard politician-celebrity-gossip and "Javanka are going to save us all" that has done so much to empower the Orange-Haired Baboons of the world: Nathaniel Rakich: 538 Election Update: How Our House Forecast Compares With The Experts’ Ratings: "FiveThirtyEight’s forecast is a tad more bullish on Democrats’ chances overall than the three major handicappers...

  • Why are Fox News's victims so easily grifted into being scared of liberal universities?: Jacob T. Levy: "I’ve made a lot of arguments in my life to people who didn’t want to hear them. I argued about sodomy laws and Bowers vs Hardwick with my grandmother when I was 15...

  • Michael Tomasky: Hail to the Chief: "It’s worth stepping back here to review quickly the steps by which the Republican Party became this stewpot of sycophants, courtesans, and obscurantists...

Continue reading "Orange-Haired Baboons: Some Fairly-Recent Must- and Should-Reads" »


In my opinion, Arindrajit Dube is one of the best economists around at figuring out what we should control for and why in order to achieve real econometric identification. The contrasting pole is simply to throw in a bunch of controls until you have produced the numbers you want. In my view, we do not teach enough about what should be controlled for and how, so people pick it up on the fly. Arindrajit has picked it up, and is a master: Arindrajit Dube: Minimum wages and the distribution of family incomes in the United States: "I find that a 10 percent increase in the minimum wage reduces poverty among the nonelderly population by 2.1 percent and 5.3 percent across the range of specifications in the long run...



An excellent piece from Marinescu, Dinan, and Hovenkamp: one of our working papers laying out how antitrust analysis should be done given that firms face their counterparties not just in product markets but in labor markets. I think this is the most important thing I have seen out of our shop here at Equitable Growth this week: Ioana Marinescu, James G. Dinan, and Herbert Hovenkamp: Anticompetitive mergers in labor markets: "Increased market concentration in labor markets threatens to facilitate coordinated interaction among employers that could lead to lower output and wage suppression in employment markets...



Monday DeLong Smackdown/Hoisted: Greenspanism Looking Pretty Good...

Oy: This was perhaps the thing I got most wrong in 2008. It's not saved by the weasel-words at the end: "If the tide of financial distress sweeps the Fed and the Treasury away--if we find ourselves in a financial-meltdown world where unemployment or inflation kisses 10%--then I will unhappily concede, and say that Greenspanism was a mistake...": Greenspanism Looking Pretty Good...: Martin Wolf is gloomy:

A year of living dangerously for the world: It is now almost a year since the US subprime crisis went global. Many then hoped that the repricing of risk would be no more than a brief interruption.... Such hopes have been disappointed.... So where is the world economy now? And where might it go? Here are some preliminary answers to these questions.

Continue reading "Monday DeLong Smackdown/Hoisted: Greenspanism Looking Pretty Good..." »


Monday Smackdown: George Borjas P-Hacking His Way Along...

How George Borjas p-hacked his way to his conclusion that immigrants have big negative effects on native-worker wages: Jennifer Hunt and Michael Clemens: Refugees have little effect on native worker wages: "Card (1990) found that a large inflow of Cubans to Miami in 1980 did not affect native wages or unemployment...

Continue reading "Monday Smackdown: George Borjas P-Hacking His Way Along..." »


An interesting paper saying that Glick and Rose's findings are not robust. I am generally pro-customs unions. I was taught when I was knee-high to a grasshopper that the Zollverein was a big deal. And I have always been impressed by the scale of cross-state trade in the U.S., which dwarfs cross-nation trade within Europe. But I may have to rethink—and I believe I certainly have to revise up my beliefs about how precisely these effects can be estimated: Douglas L. Campbell and Aleksandr Chentsov: Breaking Badly: The Currency Union Effect on Trade: "A key policy question is how much currency unions (CUs) affect trade...



We economists spend a lot of time looking at aggregates and averages—Trevon Logan likes to quote Robert Fogel on how counting is our secret game-changing analytical technique. But it is just as important to get thick descriptions of what happens in individual people's lives, so that you know what your aggregate and average numbers mean: Blythe George: “Them old guys... they knew what to do”: Examining the impact of industry collapse on two tribal reservations: "Using 46 in-depth interviews conducted on the Yurok and Hoopa Valley reservations...



Assessing the "China Shock"

The School of Athens, by Raffaello Sanzio da Urbino (Wikipedia)

Assessing the China Shock: I enter into a conversation between Noah and Larry to give my views:

Noah Smith: An easy way to reconcile @de1ong and @joshbivens_DC on trade is to see the China Shock (and thus China's entry into the WTO) as sui generis http://www.bradford-delong.com/2018/08/eg-it-has-always-seemed-to-me-that-the-sharp-josh-bivens-is-engaging-in-some-motivated-reasoning-here-1-_putting-pen-to-.html

Larry Mishel: What I don't like about the 'shock' terminology is that the damage to jobs and wages are permanent, not a temporary phenomenon. One can argue...

Continue reading "Assessing the "China Shock"" »


Following the pattern of the Bank of England, the United States's regional Federal Reserve Banks are quasi-governmental corporations with special charters, missions, and governance structures created by the central government. This provides them with an unusual degree of autonomy. For example, the Bank of England's charter strictly regulated the kinds of financial transactions it could undertake without going ultra vires. But the Bank of England did, repeatedly, engage in ultra vires actions. In fact, the Chancellor of the Exchequer would, not infrequently, write to the Governor of the Bank of England inviting him to and requesting that the Bank do so.

Somehow, in the summer of 2008, the systemically-important American investment bank of Lehman Brothers entered into a state in which it was grossly insolvent, albeit still liquid. Somehow, in the summer of 2008, the Federal Reserve failed either to develop a plan to guide the successful conservation and resolution of Lehman Brothers should it also become illiquid, or to immediately shut it down before the insolvency of this systemically-important financial institution became large enough to threaten the stability of the system as a whole. So when Lehman did hit the wall in the fall of 2008, Bernanke, Paulson, and Geithner dithered. That, I think, was the biggest policy mistake of the last decade.

Ryan Cooper has a very good candidate for the second biggest policy mistake of the past decade, however: Ryan Cooper: The biggest policy mistake of the last decade: "After the 2008 financial crisis, old-fashioned Keynesians offered a simple fix: Stimulate the economy. With idle capacity and unemployed workers, nations could restore economic production at essentially zero real cost. It helped the U.S. in the Great Depression and it could help the U.S. in the Great Recession too...



Ten Years and One Month Ago on Grasping Reality: July 15-17, 2008...


Brad DeLong was totally, utterly, completely wrong on July 16, 2008. In my defense, I would say that my wrongness was because I did not understand that Bernanke, Geithner, and Paulson were about to abandon Bagehot's Rule for how to deal with a financial crisis. Why did they abandon Bagehot's Rule? None of them has ever given an explanation of their thinking that I can regard as other than transparently false: Greenspanism Looking Pretty Good...: The dot-com bubble and the real-estate bubble were bad news for the investors in Webvan, WorldCom, Countrywide, FNMA, and securitized subprime mortgages. But they were, by and large, good news for the rest of us. And investors are supposed to take care of themselves. Now we are not yet out of the woods. If the tide of financial distress sweeps the Fed and the Treasury away--if we find ourselves in a financial-meltdown world where unemployment or inflation kisses 10%--then I will unhappily concede, and say that Greenspanism was a mistake. But so far the real economy in which people make stuff and other people buy it has been remarkably well insulated from panic at 57th and Park and on Canary Wharf...

Why We Need a Different Opposition Party to Compete with the Democrats (Miscellaneous): The spinmasters for Goldwater, Nixon, and Reagan rooted the Republican Party in three beliefs: 1. the government is not on your side--the government is on the side of the Negroes. 2. tax cuts always raise revenues. 3. the people outside our borders (and the people inside our borders who came from outside our borders) are not our friends. The ramifications of these beliefs have poisoned the entire party. They are the reason that smart well-intentioned Republicans--like George H.W. Bush--turned out to be mediocre presidents; that not-smart but well-intentioned Republicans--like Ronald Reagan (who with the help of his wife and her astrologer partially escaped #3)--turned out to be lousy presidents; and Republicans who were neither smart nor well-meaning--like George W. Bush--have turned out to be either the worst or the second-worst president in American history (depending on what you think of James Buchanan)...

Let Us Now Speak Ill of the Economist of London: I would not have thought that a British publication could write an obituary for Jesse Helms that omits Helms's claim that British Prime Minister Margaret Thatcher was a communist dupe helping the Russians conquer Central America. Nevertheless, the London Economist does...

Continue reading "Ten Years and One Month Ago on Grasping Reality: July 15-17, 2008..." »


Tony Judt (2009): What Is Living and What Is Dead in Social Democracy?: Weekend Reading

Il Quarto Stato

Weekend Reading: The right answer to why the American political system is doing so very badly at turning wealth into societal well-being is John Steinbeck's: "I guess the trouble was that we didn’t have any self-admitted proletarians. Everyone was a temporarily embarrassed capitalist..."

Tony Judt (2009): What Is Living and What Is Dead in Social Democracy?: "Americans would like things to be better. According to public opinion surveys in recent years, everyone would like their child to have improved life chances at birth. They would prefer it if their wife or daughter had the same odds of surviving maternity as women in other advanced countries. They would appreciate full medical coverage at lower cost, longer life expectancy, better public services, and less crime...

Continue reading "Tony Judt (2009): What Is Living and What Is Dead in Social Democracy?: Weekend Reading" »


John Steinbeck: America and Americans: Weekend Reading

Il Quarto Stato

Weekend Reading: John Steinbeck: America and Americans: "SURE I remember the Nineteen Thirties, the terrible, troubled, triumphant, surging Thirties. I can’t think of any decade in history when so much happened in so many directions. Violent changes took place. Our country was modeled, our lives remolded, our Government rebuilt, forced to functions, duties and responsibilities it never had before and can never relinquish...

Continue reading "John Steinbeck: America and Americans: Weekend Reading" »


Monetary Policy: Some Fairly-Recent Should- and Must-Reads


  • Two and a half years after Jared wrote this, and there is still no sign that the economy has reached "full employment", or that the pace of wage and price growth is even beginning to spiral upwards. Thus the Federal Reserve continues to work with a model of the economy in which we should have very little confidence, if any: Jared Bernstein (2016): Important new findings on inflation and unemployment from the new ERP: "The 'Phillips curve'... negative correlation between inflation and unemployment...

  • Martin Sandbu: The devastating cost of central banks’ caution: "Timidity on monetary policy since 2008 has been as costly as the financial crisis...

  • But are we sure that our debts are in dollars? Would we know it if the big New York banks had been trying to boost their earnings by selling unhedged dollar puts, in the (probably correct) belief that if they all do this together they do not have a problem, the rest of us have a problem?: Paul Krugman: Opinion | Partying Like It’s 1998 - The New York Times: "Those of us who devoted a lot of time to understanding the Asian financial crisis two decades ago were wondering whether Turkey was going to stage a re-enactment. Sure enough...

Continue reading "Monetary Policy: Some Fairly-Recent Should- and Must-Reads" »


Put me down as somebody who is now not feeling sorry at all for these entitled clowns, who greatly overestimate the role of smarts and skill versus luck. And F--- you, @jack, especially: Cathy O'Neil: Mark Zuckerberg Is Totally Out of His Depth: "I might be the only person on Earth feeling sorry for the big boys of technology. Jack Dorsey from Twitter, Mark Zuckerberg from Facebook, all those Google nerds: They’re monumentally screwed, because they have no idea how to tame the monsters they have created...



Potsdam this year is 7F warmer than it averaged in the century before 1980. Berkeley is now Santa Barbara: Stefan Rahmstorf: Europe’s freak weather, explained: "Naive.... The smoothed curve shows... global warming... the scattering of the grey bars... random variations of the weather.... Slightly more than half of the 4.3 degrees would be due to global warming, the rest to weather. That... likely underestimates the contribution of climate change...

Europe's freak weather, explained (POLITICO)



Weekend Reading: Why the World Would Be Better with Weblogs: Daniel Kuehn on James Buchanan

Daniel Kuehn et al.: Buchanan Was Not a Massive Resister. He Had All the Other Hallmarks of a "Moderate Segregationist", and of Course Supported Large Infusions of State Funds Into Segregated Private Schools to Avoid "Involuntary Integration": "There's a narrow sense in which you could fairly say it came from the 'political center' (keeping in mind--obviously--that the 'center' in 1950s Virginia was segregationist)...

Continue reading "Weekend Reading: Why the World Would Be Better with Weblogs: Daniel Kuehn on James Buchanan" »


Weekend Reading: Cosma Shalizi (2012): In Soviet Union, Optimization Problem Solves You

Centaur, by Sebastiano Ricci (Wikipedia)

Weekend Reading: Just for what it's worth, my view is: centaurs, yes...

Cosma Shalizi (2012): In Soviet Union, Optimization Problem Solves You: "Attention conservation notice: Over 7800 words about optimal planning for a socialist economy and its intersection with computational complexity theory. This is about as relevant to the world around us as debating whether a devotee of the Olympian gods should approve of transgenic organisms. (Or: centaurs, yes or no?) Contains mathematical symbols (uglified and rendered slightly inexact by HTML) but no actual math, and uses Red Plenty mostly as a launching point for a tangent...

...There’s lots to say about Red Plenty as a work of literature; I won’t do so. It’s basically a work of speculative fiction, where one of the primary pleasures is having a strange world unfold in the reader’s mind. More than that, it’s a work of science fiction, where the strangeness of the world comes from its being reshaped by technology and scientific ideas—here, mathematical and economic ideas...

Red Plenty is also (what is a rather different thing) a work of scientist fiction, about the creative travails of scientists. The early chapter, where linear programming breaks in upon the Kantorovich character, is one of the most true-to-life depictions I’ve encountered of the experiences of mathematical inspiration and mathematical work. (Nothing I will ever do will be remotely as important or beautiful as what the real Kantorovich did, of course.) An essential part of that chapter, though, is the way the thoughts of the Kantorovich character split between his profound idea, his idealistic political musings, and his scheming about how to cadge some shoes, all blind to the incongruities and ironies.

It should be clear by this point that I loved Red Plenty as a book, but I am so much in its target demographic[1] that it’s not even funny. My enthusing about it further would not therefore help others, so I will, to make better use of our limited time, talk instead about the central idea, the dream of the optimal planned economy.

That dream did not come true, but it never even came close to being implemented; strong forces blocked that, forces which Red Plenty describes vividly. But could it even have been tried? Should it have been?

 

“The Basic Problem of Industrial Planning”

Let’s think about what would have to have gone in to planning in the manner of Kantorovich.

I. We need a quantity to maximize. This objective function has to be a function of the quantities of all the different goods (and services) produced by our economic system. Here “objective” is used in the sense of “goal”, not in the sense of “factual”. In Kantorovich’s world, the objective function is linear, just a weighted sum of the output levels. Those weights tell us about trade-offs: we will accept getting one less bed-sheet (queen-size, cotton, light blue, thin, fine-weave) if it lets us make so many more diapers (cloth, unbleached, re-usable), or this many more lab coats (men’s, size XL, non-flame-retardant), or for that matter such-and-such an extra quantity of toothpaste. In other words, we need to begin our planning exercise with relative weights. If you don’t want to call these “values” or “prices”, I won’t insist, but the planning exercise has to begin with them, because they’re what the function being optimized is built from.

It’s worth remarking that in Best Use of Economic Resources, Kantorovich side-stepped this problem by a device which has “all the advantages of theft over honest toil”. Namely, he posed only the problem of maximizing the production of a “given assortment” of goods—the planners have fixed on a ratio of sheets to diapers (and everything else) to be produced, and want the most that can be coaxed out of the inputs while keeping those ratios. This doesn’t really remove the difficulty: either the planners have to decide on relative values, or they have to decide on the ratios in the “given assortment”.

Equivalently, the planners could fix the desired output, and try to minimize the resources required. Then, again, they must fix relative weights for resources (cotton fiber, blue dye #1, blue dye #2, bleach, water [potable], water [distilled], time on machine #1, time on machine #2, labor time [unskilled], labor time [skilled, sewing], electric power…). In some contexts these might be physically comparable units. (The first linear programming problem I was ever posed was to work out a diet which will give astronauts all the nutrients they need from a minimum mass of food.) In a market system these would be relative prices of factors of production. Maintaining a “given assortment” (fixed proportions) of resources used seems even less reasonable than maintaining a “given assortment” of outputs, but I suppose we could do it.
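The diet problem mentioned above is easy to make concrete. Below is a minimal sketch in Python, with invented foods and nutrient numbers (none of them from the text): minimize total mass a + b subject to 2a + b >= 8 (protein) and a + 3b >= 9 (vitamins). A linear program attains its optimum at a vertex of the feasible region, so a two-variable toy can be solved by simply enumerating the intersections of constraint boundaries; real solvers, of course, use the simplex or interior-point methods discussed below.

```python
from itertools import combinations

# A toy "diet problem": choose kilograms of two hypothetical foods so that
# nutrient requirements are met at minimum total mass.  All numbers invented.
#
# minimize   a + b              (total mass; each food weighs 1 per unit)
# subject to 2a + 1b >= 8       (protein requirement)
#            1a + 3b >= 9       (vitamin requirement)
#            a >= 0, b >= 0

# Each constraint is p*a + q*b >= r, stored as the triple (p, q, r).
constraints = [
    (2.0, 1.0, 8.0),   # protein
    (1.0, 3.0, 9.0),   # vitamins
    (1.0, 0.0, 0.0),   # a >= 0
    (0.0, 1.0, 0.0),   # b >= 0
]

def objective(a, b):
    return a + b

def feasible(a, b, tol=1e-9):
    return all(p * a + q * b >= r - tol for p, q, r in constraints)

# The optimum of an LP lies at a vertex of the feasible region, i.e. where
# two constraint boundaries intersect.  With two variables, enumerate them.
candidates = []
for (p1, q1, r1), (p2, q2, r2) in combinations(constraints, 2):
    det = p1 * q2 - p2 * q1
    if abs(det) < 1e-12:       # parallel boundaries: no intersection point
        continue
    a = (r1 * q2 - r2 * q1) / det   # Cramer's rule on the 2x2 system
    b = (p1 * r2 - p2 * r1) / det
    if feasible(a, b):
        candidates.append((objective(a, b), a, b))

best_mass, best_a, best_b = min(candidates)
```

Here the optimum sits where both nutrient constraints bind (a = 3, b = 2, mass 5): the geometry is trivial, but the point stands that the exercise cannot even be posed until someone has fixed the weights in the objective.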

For now (I’ll come back to this), assume the objective function is given somehow, and is not to be argued with.

IIA. We need complete and accurate knowledge of all the physical constraints on the economy, the resources available to it.

IIB. We need complete and accurate knowledge of the productive capacities of the economy, the ways in which it can convert inputs to outputs.

(IIA) and (IIB) require us to disaggregate all the goods (and services) of the economy to the point where everything inside each category is substitutable. Moreover, if different parts of our physical or organizational “plant” have different technical capacities, that needs to be taken into account, or the results can be decidedly sub-optimal. (Kantorovich actually emphasizes this need to disaggregate in Best Use, by way of scoring points against Leontief. The numbers in the latter’s input-output matrices, Kantorovich says, are aggregated over huge swathes of the economy, and so far too crude to be actually useful for planning.) This is, to belabor the obvious, a huge amount of information to gather.

(It’s worth remarking at this point that “inputs” and “constraints” can be understood very broadly. For instance, there is nothing in the formalism which keeps it from including constraints on how much the production process is allowed to pollute the environment. The shadow prices enforcing those constraints would indicate how much production could be increased if marginally more pollution were allowed. This wasn’t, so far as I know, a concern of the Soviet economists, but it’s the logic behind cap-and-trade institutions for controlling pollution.)

Subsequent work in optimization theory lets us get away, a bit, from requiring complete and perfectly accurate knowledge in stage (II). If our knowledge is distorted by merely unbiased statistical error, we could settle for stochastic optimization, which runs some risk of being badly wrong (if the noise is large), but at least does well on average. We still need this unbiased knowledge about everything, however, and aggregation is still a recipe for distortions. More serious is the problem that people will straight-up lie to the planners about resources and technical capacities, for reasons which Spufford dramatizes nicely. There is no good mathematical way of dealing with this.
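The shadow-price idea can be seen numerically in a toy two-variable plan (all numbers invented, not from the text): minimize mass a + b subject to 2a + b >= r1 (protein) and a + 3b >= r2 (vitamins). For requirement levels at which both constraints bind at the optimum, the solution is just a 2x2 linear system, and nudging a requirement up by one unit moves the optimal objective by exactly that constraint's shadow price.

```python
def optimal_mass(r1, r2):
    """Minimize a + b subject to 2a + b >= r1 and a + 3b >= r2.

    For requirement levels where both constraints bind at the optimum,
    the solution is the intersection of the two boundary lines, found
    here by Cramer's rule on the 2x2 system.
    """
    p1, q1 = 2.0, 1.0   # protein content per unit of foods A and B
    p2, q2 = 1.0, 3.0   # vitamin content per unit of foods A and B
    det = p1 * q2 - p2 * q1
    a = (r1 * q2 - r2 * q1) / det
    b = (p1 * r2 - p2 * r1) / det
    return a + b

# Shadow price of a constraint: change in optimal mass per unit increase
# in that requirement, read off as a finite difference.
base = optimal_mass(8.0, 9.0)                  # toy optimum: mass 5.0
shadow_protein = optimal_mass(9.0, 9.0) - base
shadow_vitamin = optimal_mass(8.0, 10.0) - base
```

Because the problem is linear, these shadow prices (0.4 and 0.2 in the toy numbers) are exact rather than merely local: the protein requirement is "worth" 0.4 units of mass at the margin, which is precisely the kind of number the planners need the plan itself to reveal.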

III. For Kantorovich, the objective function from (I) and the constraints and production technology from (II) must be linear.

Nonlinear optimization is possible, and I will come back to it, but it rarely makes things easier.

IV. Computing time must be not just too cheap to meter, but genuinely immense.

It is this point which I want to elaborate on, because it is a mathematical rather than a practical difficulty.

 

“Numerical Methods for the Solution of Problems of Optimal Planning”

It was no accident that mathematical optimization went hand-in-hand with automated computing. There’s little point to reasoning abstractly about optima if you can’t actually find them, and finding an optimum is a computational task. We pose a problem (find the plan which maximizes this objective function subject to these constraints), and want not just a solution, but a method which will continue to deliver solutions even as the problem posed is varied. We need an algorithm.

Computer science, which is not really so much a science as a branch of mathematical engineering, studies questions like this. A huge and profoundly important division of computer science, the theory of computational complexity, concerns itself with understanding what resources algorithms require to work. Those resources may take many forms: memory to store intermediate results, samples for statistical problems, communication between cooperative problem-solvers. The most basic resource is time, measured not in seconds but in operations of the computer. This is something Spufford dances around, in II.2: “Here’s the power of the machine: that having broken arithmetic down into tiny idiot steps, it can then execute those steps at inhuman speed, forever.” But how many steps? If it needs enough steps, then even inhuman speed is useless for human purposes…

The way computational complexity theory works is that it establishes some reasonable measure of the size of an instance of a problem, and then asks how much time is absolutely required to produce a solution. There can be several aspects of “size”; there are three natural ones for linear programming problems. One is the number of variables being optimized over, say n. The second is the number of constraints on the optimization, say m. The third is the amount of approximation we are willing to tolerate in a solution—we demand that it come within h of the optimum, and that if any constraints are violated it is also by no more than h. Presumably optimizing many variables (n >> 1), subject to many constraints (m >> 1), to a high degree of approximation (h ~ 0), is going to take more time than optimizing a few variables (n ~ 1), with a handful of constraints (m ~ 1), and accepting a lot of slop (h ~ 1). How much, exactly?

The fastest known algorithms for solving linear programming problems are what are called “interior point” methods. These are extremely ingenious pieces of engineering, useful not just for linear programming but for a wider class of problems called “convex programming”. Since the 1980s they have revolutionized numerical optimization, and are, not so coincidentally, among the intellectual children of Kantorovich (and Dantzig). The best guarantee about the number of “idiot steps” (arithmetic operations) such algorithms need to solve a linear programming problem is that it’s proportional to

(m+n)^(3/2) n^2 log(1/h)

(I am simplifying just a bit; see sec. 4.6.1 of Ben-Tal and Nemirovski’s Lectures on Modern Convex Optimization [PDF].)

Truly intractable optimization problems—of which there are many—are ones where the number of steps needed grows exponentially[2]. If linear programming were in this “complexity class”, it would be truly dire news, but it’s not. The complexity of the calculation grows only polynomially with n, so it falls in the class theorists are accustomed to regarding as “tractable”. But the complexity still grows super-linearly, like n^(3.5). Where does this leave us?

A good modern commercial linear programming package can handle a problem with 12 or 13 million variables in a few minutes on a desktop machine. Let’s be generous and push this down to 1 second. (Or let’s hope that Moore’s Law rule-of-thumb has six or eight iterations left, and wait a decade.) To handle a problem with 12 or 13 billion variables then would take about 30 billion seconds, or roughly a thousand years.
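The arithmetic behind that figure is easy to check. Here is a minimal sketch, extrapolating under the n^(3.5) scaling discussed above (the 12-million-variable, one-second baseline is the text’s illustrative figure, not a benchmark):

```python
def solve_time_seconds(n, baseline_n=12e6, baseline_seconds=1.0, exponent=3.5):
    """Extrapolate solve time under the t(n) ~ n**3.5 scaling."""
    return baseline_seconds * (n / baseline_n) ** exponent

# Scaling n up by a factor of 1000: 12 million -> 12 billion variables.
t = solve_time_seconds(12e9)
print(f"{t:.2e} seconds")                       # ~3.16e10 seconds
print(f"{t / (365.25 * 24 * 3600):.0f} years")  # ~1000 years
```

The point of the exercise is that a mere thousand-fold growth in problem size swamps any constant-factor speedup.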

Naturally, I have a reason for mentioning 12 million variables:

In the USSR at this time [1983] there are 12 million identifiably different products (disaggregated down to specific types of ball-bearings, designs of cloth, size of brown shoes, and so on). There are close to 50,000 industrial establishments, plus, of course, thousands of construction enterprises, transport undertakings, collective and state farms, wholesaling organs and retail outlets. —Alec Nove, The Economics of Feasible Socialism (p. 36 of the revised [1991] edition; Nove’s italics)

This 12 million figure will conceal variations in quality; and it is not clear to me, even after tracking down Nove’s sources, whether it included the provision of services, which are a necessary part of any economy.

Let’s say it’s just twelve million. Even if the USSR could never have invented a modern computer running a good LP solver, if someone had given it one, couldn’t Gosplan have done its work in a matter of minutes? Maybe an hour, to look at some alternative plans?

No. The difficulty is that there aren’t merely 12 million variables to optimize over, but rather many more. We need to distinguish a “coat, winter, men’s, part-silk lining, wool worsted tricot, cloth group 29—32” in Smolensk from one in Moscow. If we don’t “index” physical goods by location this way, our plan won’t account for the need for transport properly, and things simply won’t be where they’re needed; Kantorovich said as much under the heading of “the problem of a production complex”. (Goods which can spoil, or are needed at particular occasions and neither earlier nor later, should also be indexed by time; this is Kantorovich’s “dynamic problem”.) A thousand locations would be very conservative, but even that factor would get us into the regime where it would take us a thousand years to work through a single plan. With 12 million kinds of goods and only a thousand locations, to have the plan ready in less than a year would need computers a thousand times faster.

This is not altogether unanticipated by Red Plenty:

A beautiful paper at the end of last year had skewered Academician Glushkov’s hypercentralized rival scheme for an all-seeing, all-knowing computer which would rule the physical economy directly, with no need for money. The author had simply calculated how long it would take the best machine presently available to execute the needful program, if the Soviet economy were taken to be a system of equations with fifty million variables and five million constraints. Round about a hundred million years, was the answer. Beautiful. So the only game in town, now, was their own civilised, decentralised idea for optimal pricing, in which shadow prices calculated from opportunity costs would harmonise the plan without anyone needing to possess impossibly complete information. [V.2]

This alternative vision, the one which Spufford depicts those around Kantorovich as pushing, was to find the shadow prices needed to optimize, fix the monetary prices to track the shadow prices, and then let individuals or firms buy and sell as they wish, so long as they are within their budgets and adhere to those prices. The planners needn’t govern men, nor even administer things, but only set prices. Does this, however, actually set the planners a more tractable, a less computationally-complex, problem?

So far as our current knowledge goes, no. Computing optimal prices turns out to have the same complexity as computing the optimal plan itself[3]. It is (so far as I know) conceivable that there is some short-cut to computing prices alone, but we have no tractable way of doing that yet. Anyone who wants to advocate this needs to show that it is possible, not just hope piously.

How then might we escape?

It will not do to say that it’s enough for the planners to approximate the optimal plan, with some dark asides about the imperfections of actually-existing capitalism thrown into the mix. The computational complexity formula I quoted above already allows for only needing to come close to the optimum. Worse, the complexity depends only very slowly, logarithmically, on the approximation to the optimum, so accepting a bit more slop buys us only a very slight savings in computation time. (The optimistic spin is that if we can do the calculations at all, we can come quite close to the optimum.) This route is blocked.

Another route would use the idea that the formula I’ve quoted is only an upper bound, the time required to solve an arbitrary linear programming problem. The problems set by economic planning might, however, have some special structure which could be exploited to find solutions faster. What might that structure be?

The most plausible candidate is to look for problems which are “separable”, where the constraints create very few connections among the variables. If we could divide the variables into two sets which had nothing at all to do with each other, then we could solve each sub-problem separately, at tremendous savings in time. The supra-linear, n^(3.5) scaling would apply only within each sub-problem. We could get the optimal prices (or optimal plans) just by concatenating the solutions to sub-problems, with no extra work on our part.
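A back-of-the-envelope sketch of why separability would matter so much, again assuming the n^(3.5) scaling (the problem sizes and block counts here are made up purely for illustration):

```python
def relative_cost(n_total, k_blocks, exponent=3.5):
    """How much cheaper k independent sub-problems are (solved one after
    another) than one fully coupled problem of the same total size."""
    whole = n_total ** exponent
    split = k_blocks * (n_total / k_blocks) ** exponent
    return whole / split  # equals k_blocks ** (exponent - 1)

# Splitting a million variables into 1000 independent blocks:
print(f"{relative_cost(1_000_000, 1000):.2e}x speedup")  # ~3.16e7
```

Even solving the sub-problems one after another, the saving is a factor of k^(2.5); solving them in parallel would do better still.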

Unfortunately, as Lenin is supposed to have said, “everything is connected to everything else”. If nothing else, labor is both required for all production, and is in finite supply, creating coupling between all spheres of the economy. (Labor is not actually extra special here, but it is traditional[4].) A national economy simply does not break up into so many separate, non-communicating spheres which could be optimized independently.

So long as we are thinking like computer programmers, however, we might try a desperately crude hack, and just ignore all kinds of interdependencies between variables. If we did that, if we pretended that the over-all high-dimensional economic planning problem could be split into many separate low-dimensional problems, then we could speed things up immensely, by exploiting parallelism or distributed processing. An actually-existing algorithm, on actually-existing hardware, could solve each problem on its own, ignoring the effect on the others, in a reasonable amount of time. As computing power grows, the supra-linear complexity of each planning sub-problem becomes less of an issue, and so we could be less aggressive in ignoring couplings.

At this point, each processor is something very much like a firm, with a scope dictated by information-processing power, and the mis-matches introduced by their ignoring each other in their own optimization are something very much like “the anarchy of the market”. I qualify with “very much like”, because there are probably lots of institutional forms these could take, some of which will not look much like actually existing capitalism. (At the very least the firm-ish entities could be publicly owned, by the state, Roemeresque stock-market socialism, workers’ cooperatives, or indeed other forms.)

Forcing each processor to take some account of what the others are doing, through prices and quantities in markets, removes some of the grosser pathologies. (If you’re a physicist, you think of this as weak coupling; if you’re a computer programmer, it’s a restricted interface.) But it won’t, in general, provide enough of a communication channel to actually compute the prices swiftly—at least not if we want one set of prices, available to all. Rob Axtell, in a really remarkable paper, shows that bilateral exchange can come within h of an equilibrium set of prices in a time proportional to n^2 log(1/h), which is much faster than any known centralized scheme.
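To get a feel for the gap, here is a rough comparison of the two scalings, treating m ~ n and ignoring all constant factors (so the absolute numbers mean nothing; only the ratio is interesting):

```python
import math

def centralized_ops(n, h):
    # ~ n**3.5 * log(1/h): the interior-point-style scaling, with m ~ n
    return n ** 3.5 * math.log(1 / h)

def bilateral_ops(n, h):
    # ~ n**2 * log(1/h): the scaling Axtell reports for bilateral exchange
    return n ** 2 * math.log(1 / h)

n, h = 12_000_000, 1e-6
print(f"{centralized_ops(n, h) / bilateral_ops(n, h):.2e}")  # ratio = n**1.5, ~4.2e10
```

At 12 million goods the log(1/h) factors cancel and the decentralized scheme is ahead by a factor of n^1.5, tens of billions.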

Now, we might hope that yet faster algorithms will be found, ones which would, say, push the complexity down from cubic in n to merely linear. There are lower bounds on the complexity of optimization problems which suggest we could never hope to push it below that. No such algorithms are known to exist, and we don’t have any good reason to think that they do. We also have no reason to think that alternative computing methods would lead to such a speed-up[5].

I said before that increasing the number of variables by a factor of 1000 increases the time needed by a factor of about 30 billion. To cancel this out would need a computer about 30 billion times faster, which would need about 35 doublings of computing speed, taking, if Moore’s rule-of-thumb continues to hold, another half century. But my factor of 1000 for prices was quite arbitrary; if it’s really more like a million, then we’re talking about increasing the computation by a factor of 10^21 (a more-than-astronomical, rather a chemical, increase), which is just under 70 doublings, or just over a century of Moore’s Law.
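Those doubling counts follow directly from the n^(3.5) scaling; a minimal check:

```python
import math

def doublings_needed(slowdown):
    """Moore's-law doublings required to absorb a given slowdown factor."""
    return math.ceil(math.log2(slowdown))

print(doublings_needed(1000 ** 3.5))       # 35: about half a century at ~18 months each
print(doublings_needed(1_000_000 ** 3.5))  # 70: just over a century
```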

If someone like Iain Banks or Ken MacLeod wants to write a novel where they say that the optimal planned economy will become technically tractable sometime around the early 22nd century, then I will read it eagerly. As a serious piece of prognostication, however, this is the kind of thinking which leads to “where’s my jet-pack?” ranting on the part of geeks of a certain age.

 

Nonlinearity and Nonconvexity

In linear programming, all the constraints facing the planner, including those representing the available technologies of production, are linear. Economically, this means constant returns to scale: the factory need put no more, and no less, resources into its 10,000th pair of diapers as into its 20,000th, or its first.

Mathematically, the linear constraints on production are a special case of convex constraints. If a constraint is convex, then if we have two plans which satisfy it, so would any intermediate plan in between those extremes. (If plan A calls for 10,000 diapers and 2,000 towels, and plan B calls for 2,000 diapers and 10,000 towels, we could do half of plan A and half of plan B, make 6,000 diapers and 6,000 towels, and not run up against the constraints.) Not all convex constraints are linear; in convex programming, we relax linear programming to just require convex constraints. Economically, this corresponds to allowing decreasing returns to scale, where the 10,000th pair of diapers is indeed more expensive than the 9,999th, or the first.
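The diapers-and-towels example can be checked mechanically. This sketch uses a single made-up linear constraint (total output capped at 12,000 units) purely for illustration:

```python
def feasible(plan, capacity=12_000):
    """One linear (hence convex) constraint: total output is capped."""
    diapers, towels = plan
    return diapers + towels <= capacity

plan_a = (10_000, 2_000)
plan_b = (2_000, 10_000)
midpoint = tuple((a + b) / 2 for a, b in zip(plan_a, plan_b))  # (6000.0, 6000.0)

print(feasible(plan_a), feasible(plan_b), feasible(midpoint))  # True True True
```

Convexity is exactly the guarantee that the midpoint (or any other mixture) of two feasible plans is itself feasible.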

Computationally, it turns out that the same “interior-point” algorithms which bring large linear-programming problems within reach also work on general convex programming problems. Convex programming is more computationally complex than linear programming, but not radically so.

Unfortunately for the planners, increasing returns to scale in production mean non-convex constraints; and increasing returns are very common, if only from fixed costs. If the plan calls for regular flights from Moscow to Novosibirsk, each flight has a fixed minimum cost, no matter how much or how little the plane carries. (Fuel; the labor of pilots, mechanics, and air-traffic controllers; wear and tear on the plane; wear and tear on runways; the lost opportunity of using the plane for something else.) Similarly for optimization software (you can’t make any copies of the program without first expending the programmers’ labor, and the computer time they need to write and debug the code). Or academic papers, or for that matter running an assembly line or a steel mill. In all of these cases, you just can’t smoothly interpolate between plans which have these outputs and ones which don’t. You must pay at least the fixed cost to get any output at all, which is non-convexity. And there are other sources of increasing returns, beyond fixed costs.
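A toy cost function with a fixed set-up charge shows the failure of convexity directly (the numbers here are arbitrary):

```python
def cost(x, fixed=1000.0, marginal=2.0):
    """Zero output costs nothing; any positive output pays the fixed cost."""
    return 0.0 if x == 0 else fixed + marginal * x

# Convexity would require cost at the midpoint <= average of the endpoint costs.
x = 500
midpoint_cost = cost(x / 2)              # 1000 + 2 * 250 = 1500
chord_average = (cost(0) + cost(x)) / 2  # (0 + 2000) / 2 = 1000
print(midpoint_cost > chord_average)     # True: convexity fails
```

Halfway between producing nothing and producing 500 units costs more than the average of the two endpoints, because the fixed cost must be paid in full either way; no smooth interpolation exists.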

This is bad news for the planners, because there are no general-purpose algorithms for optimizing under non-convex constraints. Non-convex programming isn’t roughly as tractable as linear programming; it’s generally quite intractable. Again, the kinds of non-convexity which economic planners would confront might, conceivably, universally turn out to be especially benign, so everything becomes tractable again, but why should we think that?

If it’s any consolation, allowing non-convexity messes up the markets-are-always-optimal theorems of neo-classical/bourgeois economics, too. (This illustrates Stiglitz’s contention that if the neo-classicals were right about how capitalism works, Kantorovich-style socialism would have been perfectly viable.) Markets with non-convex production are apt to see things like monopolies, or at least monopolistic competition, path dependence, and actual profits and power. (My university owes its existence to Mr. Carnegie’s luck, skill, and ruthlessness in exploiting the non-convexities of making steel.) Somehow, I do not think that this will be much consolation.

 

The Given Assortment, and Planner’s Preferences

So far I have been assuming, for the sake of argument, that the planners can take their objective function as given. There does need to be some such function, because otherwise it becomes hard to impossible to choose between competing plans which are all technically feasible. It’s easy to say “more stuff is better than less stuff”, but at some point more towels means fewer diapers, and then the planners have to decide how to trade off among different goods. If we take desired output as fixed and try to minimize inputs, the same difficulty arises (is it better to use this much less cotton fiber if it requires this much more plastic?), so I will just stick with the maximization version.

For the capitalist or even market-socialist firm, there is in principle a simple objective function: profit, measured in dollars, or whatever else the local unit of account is. (I say “in principle” because a firm isn’t a unified actor with coherent goals like “maximize profits”; to the extent it acts like one, that’s an achievement of organizational social engineering.) The firm can say how many extra diapers it would have to sell to be worth selling one less towel, because it can look at how much money it would make. To the extent that it can take its sales prices as fixed, and can sell as much as it can make, it’s even reasonable for it to treat its objective function as linear.

But what about the planners? Even if they wanted to just look at the profit (value added) of the whole economy, they get to set the prices of consumption goods, which in turn set the (shadow) prices of inputs to production. (The rule “maximize the objective function” does not help pick an objective function.) In any case, profits are money, i.e., claims, through exchange, on goods and services produced by others. It makes no sense for the goal of the economy, as a whole, to be to maximize its claims on itself.

As I mentioned, Kantorovich had a way of evading this, which was clever if not ultimately satisfactory. He imagined the goal of the planners to be to maximize the production of a “given assortment” of goods. This means that the desired ratio of goods to be produced is fixed (three diapers for every towel), and the planners just need to maximize production at this ratio. This only pushes back the problem by one step, to deciding on the “given assortment”.
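Under a single linear resource constraint, the “given assortment” objective has a closed-form answer; this is a bare sketch with invented numbers, not Kantorovich’s actual formulation:

```python
def max_assortment_scale(ratios, resource_use, budget):
    """Maximize lambda so that producing lambda * ratios of each good
    stays within one linear resource budget (closed form)."""
    per_unit = sum(r * c for r, c in zip(ratios, resource_use))
    return budget / per_unit

# Three diapers per towel; invented per-unit resource costs and budget:
lam = max_assortment_scale(ratios=(3, 1), resource_use=(2.0, 5.0), budget=11_000)
print(lam, 3 * lam, 1 * lam)  # scale factor, then diapers and towels produced
```

The optimization itself is trivial once the ratios are fixed; everything hard has been hidden in the choice of the assortment (3, 1).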

We are pushed back, inevitably, to the planners having to make choices which express preferences or (in a different sense of the word) values. Or, said another way, there are values or preferences—what Nove called “planners’ preferences”—implicit in any choice of objective function. This raises both a cognitive or computational problem, and at least two different political problems.

The cognitive or computational problem is that of simply coming up with relative preferences or weights over all the goods in the economy, indexed by space and time. (Remember we need such indexing to handle transport and sequencing.) Any one human planner would simply have to make up most of these, or generate them according to some arbitrary rule. To do otherwise is simply beyond the bounds of humanity. A group of planners might do better, but it would still be an immense amount of work, with knotty problems of how to divide the labor of assigning values, and a large measure of arbitrariness.

Which brings us to the first of the two political problems. The objective function in the plan is an expression of values or preferences, and people have different preferences. How are these to be reconciled?

There are many institutions which try to reconcile or adjust divergent values. This is a problem of social choice, and subject to all the usual pathologies and paradoxes of social choice. There is no universally satisfactory mechanism for making such choices. One could imagine democratic debate and voting over plans, but the sheer complexity of plans, once again, makes it very hard for members of the demos to make up their minds about competing plans, or how plans might be changed. Every citizen is put in the position of the solitary planner, except that they must listen to each other.

Citizens (or their representatives) might debate about, and vote over, highly aggregated summaries of various plans. But then the planning apparatus has to dis-aggregate, has to fill in the details left unfixed by the democratic process. (What gets voted on is a compressed encoding of the actual plan, for which the apparatus is the decoder.) I am not worried so much that citizens are not therefore debating about exactly what the plan is. Under uncertainty, especially uncertainty from complexity, no decision-maker understands the full consequences of their actions. What disturbs me about this is that filling in those details in the plan is just as much driven by values and preferences as making choices about the aggregated aspects. We have not actually given the planning apparatus a tractable technical problem (cf.).

Dictatorship might seem to resolve the difficulty, but doesn’t. The dictator is, after all, just a single human being. He (and I use the pronoun deliberately) has no more ability to come up with real preferences over everything in the economy than any other person. (Thus, Ashby’s “law of requisite variety” strikes again.) He can, and must, delegate details to the planning apparatus, but that doesn’t help the planners figure out what to do. I would even contend that he is in a worse situation than the demos when it comes to designing the planning apparatus, or figuring out what he wants to decide directly, and what he wants to delegate, but that’s a separate argument. The collective dictatorship of the party, assuming anyone wanted to revive that nonsense, would only seem to give the worst of both worlds.

I do not have a knock-down proof that there is no good way of evading the problem of planners’ preferences. Maybe there is some way to improve democratic procedures or bureaucratic organization to turn the trick. But any such escape is, now, entirely conjectural. In its absence, if decisions must be made, they will get made, but through the sort of internal negotiation, arbitrariness and favoritism which Spufford depicts in the Soviet planning apparatus.

This brings us to the second political problem. Even if everyone agrees on the plan, and the plan is actually perfectly implemented, there is every reason to think that people will not be happy with the outcome. They’re making guesses about what they actually want and need, and they are making guesses about the implications of fulfilling those desires. We don’t have to go into “Monkey’s Paw” territory to realize that getting what you think you want can prove thoroughly unacceptable; it’s a fact of life, which doesn’t disappear in economics. And not everyone is going to agree on the plan, which will not be perfectly implemented. (Nothing is ever perfectly implemented.) These are all signs of how even the “optimal” plan can be improved, and ignoring them is idiotic.

We need then some systematic way for the citizens to provide feedback on the plan, as it is realized. There are many, many things to be said against the market system, but it is a mechanism for providing feedback from users to producers, and for propagating that feedback through the whole economy, without anyone having to explicitly track that information. This is a point which both Hayek, and Lange (before the war) got very much right. The feedback needn’t be just or even mainly through prices; quantities (especially inventories) can sometimes work just as well. But what sells and what doesn’t is the essential feedback.

It’s worth mentioning that this is a point which Trotsky got right. (I should perhaps write that “even Trotsky sometimes got right”.) To repeat a quotation:

The innumerable living participants in the economy, state and private, collective and individual, must serve notice of their needs and of their relative strength not only through the statistical determinations of plan commissions but by the direct pressure of supply and demand. The plan is checked and, to a considerable degree, realized through the market.

It is conceivable that there is some alternative feedback mechanism which is as rich, adaptive, and easy to use as the market but is not the market, not even in a disguised form. Nobody has proposed such a thing.

 

Errors of the Bourgeois Economists

Both neo-classical and Austrian economists make a fetish (in several senses) of markets and market prices. That this is crazy is reflected in the fact that even under capitalism, immense areas of the economy are not coordinated through the market. There is a great passage from Herbert Simon in 1991 which is relevant here:

Suppose that [“a mythical visitor from Mars”] approaches the Earth from space, equipped with a telescope that reveals social structures. The firms reveal themselves, say, as solid green areas with faint interior contours marking out divisions and departments. Market transactions show as red lines connecting firms, forming a network in the spaces between them. Within firms (and perhaps even between them) the approaching visitor also sees pale blue lines, the lines of authority connecting bosses with various levels of workers. As our visitor looked more carefully at the scene beneath it, it might see one of the green masses divide, as a firm divested itself of one of its divisions. Or it might see one green object gobble up another. At this distance, the departing golden parachutes would probably not be visible.

No matter whether our visitor approached the United States or the Soviet Union, urban China or the European Community, the greater part of the space below it would be within green areas, for almost all of the inhabitants would be employees, hence inside the firm boundaries. Organizations would be the dominant feature of the landscape. A message sent back home, describing the scene, would speak of “large green areas interconnected by red lines.” It would not likely speak of “a network of red lines connecting green spots.”[6]

This is not just because the market revolution has not been pushed far enough. (“One effort more, shareholders, if you would be libertarians!”) The conditions under which equilibrium prices really are all a decision-maker needs to know, and really are sufficient for coordination, are so extreme as to be absurd. (Stiglitz is good on some of the failure modes.) Even if they hold, the market only lets people “serve notice of their needs and of their relative strength” up to a limit set by how much money they have. This is why careful economists talk about balancing supply and “effective” demand, demand backed by money.

This is just as much an implicit choice of values as handing the planners an objective function and letting them fire up their optimization algorithm. Those values are not pretty. They are that the whims of the rich matter more than the needs of the poor; that it is more important to keep bond traders in strippers and cocaine than feed hungry children. At the extreme, the market literally starves people to death, because feeding them is a less “efficient” use of food than helping rich people eat more.

I don’t think this sort of pathology is intrinsic to market exchange; it comes from market exchange plus gross inequality. If we want markets to signal supply and demand (not just tautological “effective demand”), then we want to ensure not just that everyone has access to the market, but also that they have (roughly) comparable amounts of money to spend. There is, in other words, a strong case to be made for egalitarian distributions of resources being a complement to market allocation. Politically, however, good luck getting those to go together.

We are left in an uncomfortable position. Turning everything over to the market is not really an option. Beyond the repulsiveness of the values it embodies, markets in areas like healthcare or information goods are always inefficient (over and above the usual impossibility of informationally-efficient prices). Moreover, working through the market imposes its own costs (time and effort in searching out information about prices and qualities, negotiating deals, etc.), and these costs can be very large. This is one reason (among others) why Simon’s Martian sees such large green regions in the capitalist countries—why actually-existing capitalism is at least as much an organizational as a market economy.

Planning is certainly possible within limited domains—at least if we can get good data to the planners—and those limits will expand as computing power grows. But planning is only possible within those domains because making money gives firms (or firm-like entities) an objective function which is both unambiguous and blinkered. Planning for the whole economy would, under the most favorable possible assumptions, be intractable for the foreseeable future, and deciding on a plan runs into difficulties we have no idea how to solve. The sort of efficient planned economy dreamed of by the characters in Red Plenty is something we have no clue of how to bring about, even if we were willing to accept dictatorship to do so.

That planning is not a viable alternative to capitalism (as opposed to a tool within it) should disturb even capitalism’s most ardent partisans. It means that their system faces no competition, nor even any plausible threat of competition. Those partisans themselves should be able to say what will happen then: the masters of the system will be tempted, and more than tempted, to claim more and more of what it produces as monopoly rents. This does not end happily.


Calling the Tune for the Dance of Commodities

There is a passage in Red Plenty which is central to describing both the nightmare from which we are trying to awake, and the vision we are trying to awake into. Henry has quoted it already, but it bears repeating.

Marx had drawn a nightmare picture of what happened to human life under capitalism, when everything was produced only in order to be exchanged; when true qualities and uses dropped away, and the human power of making and doing itself became only an object to be traded. Then the makers and the things made turned alike into commodities, and the motion of society turned into a kind of zombie dance, a grim cavorting whirl in which objects and people blurred together till the objects were half alive and the people were half dead. Stock-market prices acted back upon the world as if they were independent powers, requiring factories to be opened or closed, real human beings to work or rest, hurry or dawdle; and they, having given the transfusion that made the stock prices come alive, felt their flesh go cold and impersonal on them, mere mechanisms for chunking out the man-hours. Living money and dying humans, metal as tender as skin and skin as hard as metal, taking hands, and dancing round, and round, and round, with no way ever of stopping; the quickened and the deadened, whirling on.… And what would be the alternative? The consciously arranged alternative? A dance of another nature, Emil presumed. A dance to the music of use, where every step fulfilled some real need, did some tangible good, and no matter how fast the dancers spun, they moved easily, because they moved to a human measure, intelligible to all, chosen by all.

There is a fundamental level at which Marx’s nightmare vision is right: capitalism, the market system, whatever you want to call it, is a product of humanity, but each and every one of us confronts it as an autonomous and deeply alien force. Its ends, to the limited and debatable extent that it can even be understood as having them, are simply inhuman. The ideology of the market tells us that we face not something inhuman but superhuman, tells us to embrace our inner zombie cyborg and lose ourselves in the dance. One doesn’t know whether to laugh or cry or run screaming.

But, and this is I think something Marx did not sufficiently appreciate, human beings confront all the structures which emerge from our massed interactions in this way. A bureaucracy, or even a thoroughly democratic polity of which one is a citizen, can feel, can be, just as much of a cold monster as the market. We have no choice but to live among these alien powers which we create, and to try to direct them to human ends. It is beyond us, it is even beyond all of us, to find “a human measure, intelligible to all, chosen by all”, which says how everyone should go. What we can do is try to find the specific ways in which these powers we have conjured up are hurting us, and use them to check each other, or deflect them into better paths. Sometimes this will mean more use of market mechanisms, sometimes it will mean removing some goods and services from market allocation, either through public provision[7] or through other institutional arrangements[8]. Sometimes it will mean expanding the scope of democratic decision-making (for instance, into the insides of firms), and sometimes it will mean narrowing its scope (for instance, not allowing the demos to censor speech it finds objectionable). Sometimes it will mean leaving some tasks to experts, deferring to the internal norms of their professions, and sometimes it will mean recognizing claims of expertise to be mere assertions of authority, to be resisted or countered.

These are all going to be complex problems, full of messy compromises. Attaining even second best solutions is going to demand “bold, persistent experimentation”, coupled with a frank recognition that many experiments will just fail, and that even long-settled compromises can, with the passage of time, become confining obstacles. We will not be able to turn everything over to the wise academicians, or even to their computers, but we may, if we are lucky and smart, be able, bit by bit, to make a world fit for human beings to live in.


Notes:

[1]: Vaguely lefty? Check. Science fiction reader? Check. Interested in economics? Check. In fact: family tradition of socialism extending to having a relative whose middle name was “Karl Marx”? Check. Gushing Ken MacLeod fan? Check. Learned linear programming at my father’s knee as a boy? Check. ^

[2]: More exactly, many optimization problems have the property that we can check a proposed solution in polynomial time (these are the class “NP”), but no one has a polynomial-time way to work out a solution from the problem statement (which would put them in the class “P”). If a problem is in NP but not in P, we cannot do drastically better than just systematically go through candidate solutions and check them all. (We can often do a bit better, especially on particular cases, but not drastically better.) Whether there are any such problems, that is, whether P≠NP, is not known, but it sure seems like it. So while most common optimization problems are in NP, linear and even convex programming are in P.^
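The verify-versus-search gap in footnote [2] can be made concrete with subset-sum, a standard NP-complete problem (my illustration, not the footnote's): checking a proposed certificate takes time linear in the input, but the only known general way to find one is to try all 2^n subsets.

```python
from itertools import combinations

def verify(nums, target, indices):
    """Checking a proposed solution (a certificate) is cheap:
    linear in the size of the input."""
    return (len(set(indices)) == len(indices) and
            sum(nums[i] for i in indices) == target)

def search(nums, target):
    """Finding a solution, by contrast, means trying up to 2^n
    candidate subsets and verifying each one."""
    n = len(nums)
    for r in range(n + 1):
        for idx in combinations(range(n), r):
            if verify(nums, target, idx):
                return idx
    return None

nums = [3, 34, 4, 12, 5, 2]
print(search(nums, 9))   # (2, 4): nums[2] + nums[4] = 4 + 5 = 9
```

Each `verify` call is fast; it is the exponential number of candidates that makes `search` blow up, which is exactly the asymmetry the footnote describes.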

[3]: Most of the relevant work has been done under a slightly different cover—not determining shadow prices in an optimal plan, but equilibrium prices in Arrow-Debreu model economies. But this is fully applicable to determining shadow prices in the planning system. (Bowles and Gintis: “The basic problem with the Walrasian model in this respect is that it is essentially about allocations and only tangentially about markets—as one of us (Bowles) learned when he noticed that the graduate microeconomics course that he taught at Harvard was easily repackaged as ‘The Theory of Economic Planning’ at the University of Havana in 1969.”) Useful references here are Deng, Papadimitriou and Safra’s “On the Complexity of Price Equilibria” [STOC’02. preprint], Codenotti and Varadarajan’s “Efficient Computation of Equilibrium Prices for Markets with Leontief Utilities”, and Ye’s “A path to the Arrow-Debreu competitive market equilibrium”. ^

[4]: In the mathematical appendix to Best Use, Kantorovich goes to some length to argue that his objectively determined values are compatible with the labor theory of value, by showing that the o.d. values are proportional to the required labor in the optimal plan. (He begins by assuming away the famous problem of equating different kinds of labor.) A natural question is how seriously this was meant. I have no positive evidence that it wasn’t sincere. But, carefully examined, all that he proves is proportionality between o.d. values and the required consumption of the first component of the vector of inputs—and the ordering of inputs is arbitrary. Thus the first component could be any input to the production process, and the same argument would go through, leading to many parallel “theories of value”. (There is a certain pre-Socratic charm to imagining proponents of the labor theory of value arguing it out with the water-theorists or electricity-theorists.) It is hard for me to believe that a mathematician of Kantorovich’s skill did not see this, suggesting that the discussion was mere ideological cover. It would be interesting to know at what stage in the book’s “adventures” this part of the appendix was written.^
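Kantorovich’s “objectively determined valuations” are, in modern terms, the dual variables of the planning LP: one shadow price per scarce input, measuring how much the optimal plan’s value would rise per extra unit of that input. A toy sketch (my numbers, not Kantorovich’s; the vertex-enumeration solver is only sensible for tiny problems):

```python
from fractions import Fraction as F
from itertools import combinations

def solve_lp_min(c, A, b):
    """Minimize c.u subject to A.u >= b and u >= 0, for two variables,
    by enumerating vertices of the feasible region with exact rationals."""
    rows = [([F(x) for x in a], F(r)) for a, r in zip(A, b)]
    rows += [([F(1), F(0)], F(0)), ([F(0), F(1)], F(0))]  # u >= 0, v >= 0
    best = None
    for (a1, r1), (a2, r2) in combinations(rows, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if det == 0:
            continue                       # parallel constraints: no vertex
        u = [(r1 * a2[1] - a1[1] * r2) / det,
             (a1[0] * r2 - r1 * a2[0]) / det]
        if all(a[0] * u[0] + a[1] * u[1] >= r for a, r in rows):
            val = c[0] * u[0] + c[1] * u[1]
            if best is None or val < best[0]:
                best = (val, u)
    return best

# Primal plan: max 3x + 5y  s.t.  x + 2y <= 10 (labor), 3x + y <= 12 (steel).
# Its dual:    min 10u + 12v  s.t.  u + 3v >= 3, 2u + v >= 5;
# the optimal (u, v) are the shadow prices of labor and steel.
value, (u, v) = solve_lp_min([10, 12], [[1, 3], [2, 1]], [3, 5])
print(value, u, v)   # 132/5 12/5 1/5: plan worth 26.4, prices 2.4 and 0.2
```

By LP duality the dual optimum (26.4) equals the primal plan’s value, and the prices 2.4 and 0.2 are exactly the o.d. valuations of the two inputs. Kantorovich’s appendix then argues these are proportional to required labor; as the footnote notes, the same argument works equally well for steel, which is the rub.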

[5]: In particular, there’s no reason to think that building a quantum computer would help. This is because, as some people have to keep pointing out, quantum computers don’t provide a general exponential speed-up over classical ones. ^

[6]: I strongly recommend reading the whole of this paper, if these matters are at all interesting. One of the most curious features of this little parable was that Simon was red-green color-blind.^

[7]: Let me be clear about the limits of this. Already, in developed capitalism, such public or near-public goods as the protection of the police and access to elementary schooling are provided universally and at no charge to the user. (Or they are supposed to be, anyway.) Access to these is not regulated by the market. But the inputs needed to provide them are all bought on the market, the labor of teachers and cops very much included. I cannot on this point improve on the discussion in Lindblom’s The Market System, so I will just direct you to that (i, ii).^

[8]: To give a concrete example, neither scientific research nor free software are produced for sale on the market. (This disappoints some aficionados of both.) Again, the inputs are obtained from markets, including labor markets, but the outputs are not sold on them. How far this is a generally-viable strategy for producing informational goods is a very interesting question, which it is quite beyond me to answer.^..."

——

#shouldread
#weekendreading
#theshoresofutopia

Socialism with German Nationalist Characteristics: Sheri Berman: Weekend Reading

Il Quarto Stato

Weekend Reading/Hoisted from 2006: Socialism with German Nationalist Characteristics: "I was supposed to contribute to Crooked Timber's seminar http://crookedtimber.org/category/sheri-berman-seminar/ on Sheri Berman (2006), The Primacy of Politics: Social Democracy and the Making of Europe's Twentieth Century (Cambridge: Cambridge University Press: 0521521106). But it never happened: I never produced anything I was happy with.

So let me, instead, point you over to the ongoing debate and post my favorite passage...

Continue reading "Socialism with German Nationalist Characteristics: Sheri Berman: Weekend Reading" »


Two and a half years after Jared wrote this, there is still no sign that the economy has reached "full employment", or that the pace of wage and price growth is even beginning to spiral upwards. Thus the Federal Reserve continues to work with a model of the economy in which we should have very little confidence, if any: Jared Bernstein (2016): Important new findings on inflation and unemployment from the new ERP: "The 'Phillips curve'... negative correlation between inflation and unemployment...

Continue reading "" »


F--- you, @jack. Use Twitter if and only if you find it useful. But everyone is now under a moral obligation to diminish its profits and hasten its obsolescence: Steve Randy Waldman: "F--- @twitter for killing their third-party clients for the second time. I wish it were credible for me to say I’m leaving the platform. It’s not credible. Twitter has market power, I will use the platform, but increasingly I wish the company ill...

Continue reading "" »