More on TIT-FOR-TAT

The Symmetry Argument for Cooperation in One-Shot Prisoner's Dilemma

A commenter writes:

http://delong.typepad.com/sdj/2007/02/tom_slee_tells_.html: The claims made by Axelrod in favor of tit-for-tat are wildly overblown, and frequently just plain wrong. Anatol Rapoport cannot really be blamed for his sloppiness, although he did (re)invent the Symmetry Fallacy that purports to demonstrate that it is rational to co-operate in the one-shot Prisoners' Dilemma. A good place to read a game theorist's reaction to all of this is in Ken Binmore's "Playing Fair: Game Theory and the Social Contract I," Chapter 3, (MIT Press, 1994).

The Symmetry Argument is not the Symmetry Fallacy. I think that it's remarkably deep and subtle, raising many of the issues that arise in Newcomb's Problem.

To review the one-shot Prisoner's Dilemma: A and B play a one-shot game. Each has two strategies: C(ooperate) and D(efect). A and B are identical. Each is a self-interested being, caring only about his or her own payoffs--they are neither altruistic nor envious. Each is a logical being, understanding the structure of the game and capable of following the strategic logic to its conclusion. The payoffs to this one-shot game are as follows:


Basic Prisoner's Dilemma

                B Cooperates    B Defects
A Cooperates    (2, 2)          (-5, 3)
A Defects       (3, -5)         (-4, -4)

Here is the traditional argument for the traditional dominant-strategy equilibrium:

A thinks: "Whatever B does, I am better off doing strategy D. Moreover, whatever I do, B is better off doing strategy D. It would be best for us both if we both did strategy C. But I cannot afford to do C--I would do better doing D, and B knows I would do better doing D. For both of us, strategy D strictly dominates. So I would have to conclude that B was insane or irrational to expect B to do strategy C--and even if B does, I am still better off doing D than C." B thinks the same.

Here is the Symmetry Argument for doing strategy C:

A thinks:

B is identical to me. I am a logical thinker. B is therefore a logical thinker, who will think the same thoughts I think. There are, however, random elements--what B had for breakfast, for example--that may lead B to choose a different strategy from mine. Let's model those random elements by a random variable b drawn uniformly from [0, 1]. And let's model the logical, systematic part of B's deliberations by having B choose a number p, so that B chooses strategy C if b < p and chooses strategy D otherwise.

What number p will the logical part of B choose?

Well, the best model I have of the logical part of B is what number p I have chosen. Viewing myself as an agent whose logical thought is identical to that of B, what number p will I choose? Let me calculate my expected return as a function of p. It is:

2p² - 2p(1-p) - 4(1-p)² = 6p - 4

This is maximized for p = 1. So I should choose p = 1--should always cooperate--and expect a payoff of 2.
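That expected-return calculation is easy to check numerically. Here is a small Python sketch--mine, not part of the original argument--that builds A's expected payoff directly from the four cells of the payoff table and confirms that it is maximized at p = 1, where it equals 2:

    # Sketch: A's expected payoff when both players cooperate with the same
    # probability p, computed cell by cell from the payoff table:
    # 2 with prob p*p, -5 with prob p*(1-p), 3 with prob (1-p)*p,
    # -4 with prob (1-p)*(1-p).
    def expected_payoff(p):
        return 2*p*p - 5*p*(1 - p) + 3*(1 - p)*p - 4*(1 - p)*(1 - p)

    # Search a grid of p values in [0, 1] for the maximizer.
    best_p = max((i / 100 for i in range(101)), key=expected_payoff)
    print(best_p, expected_payoff(best_p))  # 1.0 2.0

Because the expression simplifies to 6p - 4, the expected payoff rises with p all the way to p = 1; the grid search just confirms the algebra.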

Should I take the problem one step further? Should I reason that since B is choosing p = 1, I should choose to defect, and thus get an expected return of 3? Ah. But if I choose to defect, I am choosing my own p = 0, and my forecast of B's choice of p changes as well, and my expected return is not 2 but -4.

A has to confront two truths. On the one hand, whatever B does, A's payoff is greater by one if A chooses to defect rather than to cooperate. On the other hand, if A plays the dominant strategy of "defect," then--by the symmetry reasoning above--A's expectation of his or her payoff drops by 6, from 2 to -4. +1 or -6? The issue of which truth to act on is, I think, the same issue we find in Newcomb's Problem.
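Both numbers come straight off the payoff table; a tiny Python sketch of the arithmetic, again mine and for illustration only:

    # Truth 1: holding B's move fixed, A gains exactly 1 by defecting
    # (3 versus 2 if B cooperates, -4 versus -5 if B defects).
    gain_from_defecting = 3 - 2                      # +1

    # Truth 2: if A's choice also forecasts B's choice (the Symmetry
    # Argument), defecting moves the outcome from (C, C) to (D, D),
    # so A's expected payoff falls from 2 to -4.
    loss_under_symmetry = -4 - 2                     # -6

    print(gain_from_defecting, loss_under_symmetry)  # 1 -6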

I am a dominant-strategy guy. If you find the Symmetry Argument convincing--well, Grasshopper, you have once again failed to snatch the pebble from my hand. But I feel the force of the other side: If you find the Symmetry Argument an obvious fallacy--well, Grasshopper, you have once again failed to snatch the pebble from my hand.

If you set up as an axiom of rationality that a rational, logical agent must always choose to play a dominant over a dominated strategy--well, Grasshopper, you have begged the question, and you still have to answer the next-order question: why do you think that your rational, logical agents are smart?
