TIT FOR TAT in sticklebacks and the evolution of cooperation

Nature ◽  
1987 ◽  
Vol 325 (6103) ◽  
pp. 433-435 ◽  
Author(s):  
Manfred Milinski


Games ◽  
2018 ◽  
Vol 9 (4) ◽  
pp. 100 ◽  
Author(s):  
Shun Kurokawa ◽  
Joe Yuichiro Wakano ◽  
Yasuo Ihara

Evolution of cooperation by reciprocity has been studied using two-player and n-player repeated prisoner’s dilemma games. An interesting feature specific to the n-player case is that players can vary in generosity, i.e., in how many defections they tolerate in a given round of a repeated game. Less generous reciprocators are quicker to detect defectors and withdraw further cooperation, whereas more generous ones are better at maintaining long-term cooperation in the presence of rare defectors. A previous analysis of a stochastic evolutionary model of the n-player repeated prisoner’s dilemma showed that the fixation probability of a single reciprocator in a population of defectors can be maximized at a moderate level of generosity. However, that analysis is limited in that it considers only tit-for-tat-type reciprocators under the conventional linear payoff assumption. Here we extend the previous study by removing these limitations and show that, if the games are repeated sufficiently many times, considering non-tit-for-tat-type strategies does not alter the previous results, while introducing non-linear payoffs sometimes does. In particular, under certain conditions, the fixation probability is maximized by a “paradoxical” strategy, which cooperates in the presence of fewer cooperating opponents than in other situations in which it defects.
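The notion of generosity in the n-player game can be made concrete with a short sketch. The following is a minimal illustration (our construction, not the paper's stochastic model): a reciprocator with generosity g keeps cooperating as long as at most g of its n − 1 co-players defected in the previous round.

```python
# Minimal sketch of generosity in an n-player repeated game (illustrative,
# not the paper's model): a reciprocator with generosity g cooperates unless
# more than g of its n-1 co-players defected in the previous round.
def reciprocator(generosity):
    def strategy(prev_opponent_moves):
        if prev_opponent_moves is None:          # first round: cooperate
            return "C"
        defections = prev_opponent_moves.count("D")
        return "C" if defections <= generosity else "D"
    return strategy

def always_defect(prev_opponent_moves):
    return "D"

def play(strategies, rounds):
    """Return each player's move history over `rounds` rounds."""
    history = [[] for _ in strategies]
    prev = None
    for _ in range(rounds):
        moves = []
        for i, s in enumerate(strategies):
            others = None if prev is None else [m for j, m in enumerate(prev) if j != i]
            moves.append(s(others))
        for i, m in enumerate(moves):
            history[i].append(m)
        prev = moves
    return history

# Group of 4: one unconditional defector, three reciprocators tolerating
# one defection per round.
hist = play([always_defect] + [reciprocator(1)] * 3, 5)
print(hist[1])  # ['C', 'C', 'C', 'C', 'C']
```

With generosity 0 the same group collapses into mutual defection after the first round, which is the trade-off the abstract describes: strict reciprocators punish fast, generous ones preserve cooperation against a rare defector.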


Author(s):  
Jeremy Bowling

The evolution of cooperation scholarship develops evolutionarily stable theories that explain the presence of cooperation when there are many reasons to defect from it. In this analysis, these theories are tested on the relations between states. Focusing on the direct-reciprocity strategies of Tit-for-Tat and Win-stay/Lose-shift and the indirect-reciprocity strategies of Cooperative Reputation and Tag, Tit-for-Tat and Cooperative Reputation are found to be robust, while Tags produce mixed results. In the end, it is states' direct cooperative actions and their cooperative reputations, not shared characteristics, that are most likely to elicit cooperative action in return.


2010 ◽  
Vol 365 (1553) ◽  
pp. 2699-2710 ◽  
Author(s):  
Sarah F. Brosnan ◽  
Lucie Salwiczek ◽  
Redouan Bshary

Cooperation often involves behaviours that reduce actors' immediate payoffs. Delayed benefits have often been argued to pose problems for the evolution of cooperation, because such contingencies may be difficult to learn and partners may cheat in return. Therefore, the ability to achieve stable cooperation has often been linked to a species' cognitive abilities, which are in turn linked to the evolution of increasingly complex central nervous systems. However, in their famous 1981 paper, Axelrod and Hamilton stated that in principle even bacteria could play a tit-for-tat strategy in an iterated Prisoner's Dilemma. While to our knowledge this has not been documented, interspecific mutualisms are present in bacteria, plants and fungi. Moreover, many species that have evolved large brains in complex social environments lack convincing evidence in favour of reciprocity. What conditions must be fulfilled so that organisms with little or no brainpower, including plants and single-celled organisms, can, on average, gain benefits from interactions with partner species? Conversely, what conditions favour the evolution of large brains and flexible behaviour, including the use of misinformation? These questions are critical, as they begin to address why cognitive complexity would emerge when ‘simple’ cooperation is clearly sufficient in some cases. This paper spans the literature from bacteria to humans in our search for the key variables that link cooperation and deception to cognition.


1995 ◽  
Vol 348 (1326) ◽  
pp. 393-404 ◽  

The pioneering work of Trivers (1971), Axelrod (1984) and Axelrod & Hamilton (1981) has stimulated continuing interest in explaining the evolution of cooperation by game theory, in particular via the iterated prisoner’s dilemma and the strategy of tit-for-tat. However, these models suffer from a lack of biological reality, most seriously because players are assumed to meet opponents drawn at random from the population; unless the population is very small, this excludes the repeated encounters necessary for tit-for-tat to prosper. To meet some of these objections, we consider a model with two types of players, defectors (D) and tit-for-tat players (T), in a spatially homogeneous environment with player densities varying continuously in space and time. Players encounter only their neighbours but move at random in space. The analysis yields major new conclusions, the three most important being as follows. First, stable coexistence of both players at constant densities is possible. Second, stable coexistence in a pattern (a spatially inhomogeneous stationary state) may be possible even when no constant distribution (not even an unstable one) can exist. Third, invasion by a very small number of T-players is sometimes possible (in contrast with the usual predictions), so a mutation to tit-for-tat may lead to a population of defectors being displaced by the T-players.
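The flavour of such a spatial model can be conveyed in a few lines. The following is a schematic 1D sketch with illustrative payoff terms chosen by us (not the paper's actual equations): the local fraction of T-players diffuses in space and follows replicator dynamics against defectors, and a localized cluster of T-players can take over even though a vanishing uniform fraction of them would be eliminated.

```python
import numpy as np

# Schematic 1D sketch (illustrative payoff terms, not the paper's exact
# equations): p(x) is the local fraction of tit-for-tat players.  Long T-T
# partnerships of mean length 1/(1-w) earn b - c per round, a T pays c once
# to each defector it meets, and a defector gains b once from each T.
def step(p, dt=0.01, dx=1.0, D=1.0, b=3.0, c=1.0, w=0.9):
    # Discrete Laplacian with periodic boundaries (random movement in space).
    lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx ** 2
    fitness_gap = ((b - c) / (1 - w) - b + c) * p - c   # f_T - f_D, bistable
    p = p + dt * (D * lap + p * (1 - p) * fitness_gap)  # diffusion + replicator
    return np.clip(p, 0.0, 1.0)

# A localized cluster of T-players in a sea of defectors.
x = np.arange(100)
p = 0.8 * np.exp(-((x - 50) / 3.0) ** 2)
for _ in range(500):
    p = step(p)
# The cluster persists and spreads: locally, repeated encounters let T beat D.
```

The dynamics are bistable (here the unstable interior equilibrium sits at p* = c/18 ≈ 0.056 under these illustrative parameters), so a rare uniform mutant dies out, while a spatial cluster above the threshold expands, echoing the paper's third conclusion.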


Games ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 16
Author(s):  
Fabio Della Rossa ◽  
Fabio Dercole ◽  
Anna Di Meglio

Network reciprocity has been successfully put forward (since M. A. Nowak and R. M. May's influential 1992 paper) as the simplest mechanism—requiring no strategic complexity—supporting the evolution of cooperation in biological and socioeconomic systems. The mechanism is actually the network, which makes agents' interactions localized, while network reciprocity is the property of the underlying evolutionary process of favoring cooperation in sparse rather than dense networks. In theoretical models, the property holds under imitative evolutionary processes, whereas cooperation disappears in any network if imitation is replaced by the more rational best-response rule of strategy update. In social experiments, network reciprocity has been observed, although imitative behavior did not emerge. What did emerge is a form of conditional cooperation based on direct reciprocity—the propensity to cooperate with neighbors who previously cooperated. To resolve this inconsistency, network reciprocity has recently been shown in a model that rationally confronts the two main behaviors emerging in experiments—reciprocal cooperation and unconditional defection—with rationality introduced by extending the best-response rule to a multi-step predictive horizon. However, direct reciprocity was implemented in a non-standard way, by allowing cooperative agents to temporarily cut the interaction with defecting neighbors. Here, we make that result robust to the way cooperators reciprocate by implementing direct reciprocity with the standard tit-for-tat strategy, and we derive similar results.
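The "standard tit-for-tat on a network" ingredient can be sketched directly. This is our minimal illustration (not the paper's full multi-step best-response model): each TFT agent keeps a separate memory per neighbour and cooperates with neighbour j exactly when j cooperated with it in the previous round, while unconditional defectors always defect.

```python
# Minimal sketch (our illustration, not the paper's model): per-link
# tit-for-tat on a network. moves[(i, j)] is the move i plays toward j.
def play_round(neighbors, kind, last):
    moves = {}
    for i, nbrs in neighbors.items():
        for j in nbrs:
            if kind[i] == "D":                    # unconditional defector
                moves[(i, j)] = "D"
            else:                                 # TFT: echo what j did to i
                moves[(i, j)] = last.get((j, i), "C")
    return moves

# A 4-cycle: agent 0 defects unconditionally, agents 1-3 play TFT.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
kind = {0: "D", 1: "T", 2: "T", 3: "T"}
last = {}
for _ in range(3):
    last = play_round(neighbors, kind, last)
print(last[(1, 0)], last[(1, 2)])  # D C
```

Because reciprocation is per link, retaliation stays confined to the edges touching the defector: agents 1 and 3 defect only toward agent 0, while the 1–2 and 2–3 links remain cooperative. This localization is what lets conditional cooperation survive on sparse networks.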


2007 ◽  
Vol 274 (1620) ◽  
pp. 1861-1865 ◽  
Author(s):  
Sabin Lessard

In the context of the finitely repeated Prisoner's Dilemma, with the possibility of cooperating or defecting each round, the strategy tit-for-tat (TFT) consists of cooperating in the first round and thereafter copying the opponent's move in the previous round. Assuming random pairwise interactions in a finite population of always-defecting individuals, TFT can be favoured by selection to go to fixation following its introduction as a mutant strategy. We deduce the condition for this to be the case under weak selection in the framework of a general reproduction scheme in discrete time. In fact, we show when and why the one-third rule for the evolution of cooperation holds, and how it extends to a more general rule. The condition turns out to be more stringent when the numbers of descendants left by individuals from one time-step to the next may differ substantially. This suggests that the evolution of cooperation is more difficult in populations with a highly skewed distribution of family size. This is illustrated by two examples.
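The one-third rule mentioned here can be illustrated numerically. A minimal sketch under standard assumptions of our choosing (conventional one-shot payoffs T > R > P > S and an m-round game, not the paper's general reproduction scheme): weak selection favors the fixation of a single TFT mutant if the unstable interior equilibrium x* of the TFT-vs-ALLD replicator dynamics lies below 1/3.

```python
# Sketch of the one-third rule for TFT invading ALLD in an m-round repeated
# Prisoner's Dilemma (our illustration with conventional payoffs, not the
# paper's general reproduction scheme).
def repeated_pd_payoffs(m, R=3, S=0, T=5, P=1):
    a = m * R              # TFT vs TFT: mutual cooperation every round
    b = S + (m - 1) * P    # TFT vs ALLD: exploited once, then mutual defection
    c = T + (m - 1) * P    # ALLD vs TFT: exploits once, then mutual defection
    d = m * P              # ALLD vs ALLD: mutual defection every round
    return a, b, c, d

def one_third_rule_holds(m):
    """Weak selection favors TFT fixation iff the unstable interior
    equilibrium x* lies below 1/3 (the game is bistable when a > c, d > b)."""
    a, b, c, d = repeated_pd_payoffs(m)
    x_star = (d - b) / (a - b - c + d)
    return x_star < 1 / 3

print(one_third_rule_holds(2), one_third_rule_holds(10))  # False True
```

With only two rounds x* = 1 and the TFT mutant is not favored; with ten rounds x* = 1/17 < 1/3, so the game must be repeated often enough for the rule to come down on the side of cooperation.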


2020 ◽  
Vol 34 (02) ◽  
pp. 2268-2275
Author(s):  
Shiheng Wang ◽  
Fangzhen Lin

The Iterated Prisoner's Dilemma (IPD) is a well-known benchmark for studying the long-term behaviour of rational agents. Many well-known strategies have been studied, from the simple tit-for-tat (TFT) to more involved ones such as the zero-determinant and extortionate strategies recently studied by Press and Dyson. In this paper, we consider what we call invincible strategies: those that never lose against any other strategy in terms of average payoff in the limit. We provide a simple characterization of this class of strategies and show that invincible strategies can also be nice. We discuss their relationship with some important strategies and generalize our results to some typical repeated 2×2 games. It is known that, experimentally, nice strategies like TFT and extortionate ones can act as catalysts for the evolution of cooperation. Our experiments show that this is also the case for some invincible strategies that are neither nice nor extortionate.
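The "never loses in the limit" property is easy to exhibit for TFT itself. A short sketch (our illustration, not the paper's formal characterization): against any opponent, TFT's total payoff can trail the opponent's by at most one temptation's worth, because TFT is exploited at most once more than it exploits; its long-run average payoff therefore never drops below the opponent's.

```python
import random

# Illustration (not the paper's characterization): TFT's total payoff never
# trails any opponent's by more than one round's maximal swing (T - S = 5),
# so its average payoff in the limit is never below the opponent's.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tft(my_hist, opp_hist):
    return "C" if not opp_hist else opp_hist[-1]   # echo opponent's last move

def play(opponent, rounds, seed=0):
    random.seed(seed)
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = tft(h1, h2), opponent(h2, h1)
        r1, r2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        p1 += r1; p2 += r2
    return p1, p2                                  # (TFT total, opponent total)

alld = lambda my_hist, opp_hist: "D"
rand = lambda my_hist, opp_hist: random.choice("CD")
for opp in (alld, rand):
    p1, p2 = play(opp, 10_000)
    print(p2 - p1)  # never exceeds 5, one round's maximal swing
```

Note that this is a property of totals against every opponent, not of any single run: TFT plays D in round t exactly when the opponent played D in round t − 1, so the count of rounds where TFT is exploited exceeds the count where it exploits by at most one.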

