‘Tit for Tat’ Dethroned as Optimal Prisoner’s Dilemma Strategy

Slashdot reports:

“Tit for Tat, the reigning champion of the Iterated Prisoner's Dilemma Competition, has been defeated by a group of cooperating programs from the University of Southampton. The Prisoner's Dilemma is a game with two players and two possible moves: cooperate or defect. If the two players cooperate, they both have small wins. If one player cooperates and the other defects, the cooperator has a big loss and the defector has a big win. If both players defect, they both have small losses. Tit for Tat cooperates in the first round and imitates its opponent's previous move for the rest of the game. Tit for Tat is similar to the Mutual Assured Destruction strategy used by the two nuclear superpowers during the Cold War. Southampton's programs executed a known series of 5 to 10 moves which allowed them to recognize each other. After recognition, the two Southampton programs became 'master and slave': one program would keep defecting and the other would keep cooperating. If a Southampton program determined that another program was non-Southampton, it would defect.”
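
To make the mechanics concrete, here is a minimal sketch in Python of the setup described above. The payoff values and the recognition prefix are illustrative assumptions on my part (the summary gives neither the contest's exact payoff table nor the real Southampton opening sequence), but the structure follows the description: Tit for Tat, plus a master/slave pair that announces itself with a fixed opening sequence and then either exploits its partner or defects against outsiders.

```python
# Assumed payoff table (Axelrod's classic values): mutual cooperation 3 each,
# mutual defection 1 each, lone defector 5, lone cooperator 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

HANDSHAKE = "CDCDDC"  # hypothetical recognition prefix (the real one is 5 to 10 moves)

def tit_for_tat(my_hist, opp_hist):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return "C" if not opp_hist else opp_hist[-1]

def southampton(role):
    """Master/slave player: play the handshake, check whether the opponent
    echoed it, then exploit a fellow team member or defect against outsiders.
    (The role is fixed up front here purely for simplicity.)"""
    def strategy(my_hist, opp_hist):
        turn = len(my_hist)
        if turn < len(HANDSHAKE):
            return HANDSHAKE[turn]                   # announce ourselves
        if opp_hist[:len(HANDSHAKE)] == list(HANDSHAKE):
            return "D" if role == "master" else "C"  # master defects, slave feeds it
        return "D"                                   # non-Southampton opponent: defect
    return strategy

def play(strat_a, strat_b, rounds=200):
    """Run one iterated match and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print("TFT vs TFT:      ", play(tit_for_tat, tit_for_tat))
print("master vs slave: ", play(southampton("master"), southampton("slave")))
print("master vs TFT:   ", play(southampton("master"), tit_for_tat))
```

In this sketch the master collects the top payoff from its slave almost every round, while against Tit for Tat either Southampton role falls into mutual defection after the handshake, which is why the master's ranking depends on meeting enough slaves.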

Haven't read the paper yet, but this sounds like a significant result, as the empirical superiority of 'tit for tat' is received wisdom in most accounts of applied game theory I've ever read. 'Ascription is an Anathema to any Enthusiasm' calls it delightful and suggests that:

This is the classic model of all game theory! And even in this tiny little dishpan model, collaborative groups form, and once they form they outcompete the players that fail to collaborate. As Dave Weinberger once pointed out, we are a species that will form communities even if it means tapping out the alphabet on the wall of our cell.

I wonder whether real-life applications may be limited by the difficulty of the earlier game that determines who gets which role…

This entry was posted in Readings.

6 Responses to ‘Tit for Tat’ Dethroned as Optimal Prisoner’s Dilemma Strategy

  1. I’m not really sure how significant this finding is. It seems to me that the designers of the slave/master program have only figured out how to game the Prisoner’s Dilemma contest, not the Prisoner’s Dilemma itself.

    For example, the Prisoner’s Dilemma is based on the idea that the only information you have about your partner is past history. Sending a code is communication outside this channel.

    Moreover, success is based on individual performance. Cooperation among performers to artificially boost performance for one individual at the expense of others will obviously win. Perhaps the proper metric for success is to tally the success of all cooperating programs. In such a case, I expect that tit-for-tat would still win.

  2. J Monroe says:

    Part of the definition of the Prisoner’s Dilemma is that mutual co-operation is better than the average of the “master” and “sucker” payouts. This is most obviously important in the case where two players both playing Tit for Tat-like strategies alternately retaliate against each other – they each get the “master” payoff half the time and the “sucker” payoff the other half. This is not as good as continuous co-operation – if it were, then “nice” strategies which never defect first would be a lot weaker.

    So we know that on average the Southampton strategies cannot do as well as Tit for Tat (see the worked numbers after the comments). The interesting question is what fraction of the field needs to be “slaves” before the “master” can beat Tit for Tat.

  3. Pingback: Wortfeld: Anderswo im Netz

  4. cw says:

    Actually, even Axelrod and others have gone back to the original competition and found that the result is very sensitive to the precise mix of opponents each program faces. It’s not news that Tit for Tat doesn’t dominate in all situations.

    Nonetheless, the strong intuitive appeal of T for T hasn’t diminished. It’s a powerful idea about how cooperation is sustained despite the many incentives that work against it. There’s been some biological evidence that even simple organisms like fish follow an interaction pattern that matches T for T.

    The abstract seems to be pitching another finding: that cooperative behavior might emerge as a maximizing strategy to defeat Tit for Tat. It’s like saying that con artists were the first form of organized cooperation. It’s similar to Mancur Olson’s idea that political entities emerged when roving bands of thugs settled down to prey on just one group. Then they became symbiotic – the powerful wouldn’t destroy economic activity by over-taxing because they depended on it for next year’s income. It’s got an amusing counter-intuitive appeal, which often is the basis of a theoretical breakthrough.

    Of course, this computational, applied approach to game theory is just a very small branch compared to the armies of economists devising analytic models that are solvable. But there’s been damn little progress in actually testing those models. Now _that_ would be a major breakthrough.

  5. Mojo says:

    To me it doesn’t look like this shows anything about tendencies to cooperation. There is absolutely no free will involved in the Southampton model and, in fact, any success it achieves is predicated on the idea that none of the members can ever deviate from the script. If you took a group of Southampton programs and then reprogrammed one of them to exploit the mandated patterns, it would kick the other Southampton programs’ virtual butts.

  6. Hannah says:

    I’m trying to find a list of the strategies used in the original competition, but I haven’t been able to track one down. Does anyone know where they are published?
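
As a quick check of the arithmetic in comment 2, assuming the classic payoff values (the contest's actual table isn't given here): alternating exploitation averages the “master” and “sucker” payouts, which by the definition of the game is worth less per round than steady mutual cooperation.

```python
# Assumed classic payoffs: T=5 (lone defector), R=3 (mutual cooperation),
# P=1 (mutual defection), S=0 (lone cooperator); the iterated game assumes 2*R > T + S.
T, R, P, S = 5, 3, 1, 0
print((T + S) / 2)  # 2.5 per round for an alternating exploit/be-exploited pair
print(R)            # 3 per round for two players who simply keep cooperating
```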

Comments are closed.