J Theor Biol. Author manuscript; available in PMC 2010 June 21.

Published online 2009 February 24. doi: 10.1016/j.jtbi.2009.02.010

PMCID: PMC2684574

NIHMSID: NIHMS98458

Tibor Antal,^{a} Arne Traulsen,^{b} Hisashi Ohtsuki,^{c} Corina E. Tarnita,^{a} and Martin A. Nowak^{a}


In evolutionary games the fitness of individuals is not constant but depends on the relative abundance of the various strategies in the population. Here we study general games among *n* strategies in populations of large but finite size. We explore stochastic evolutionary dynamics under weak selection, but for any mutation rate. We analyze the frequency dependent Moran process in well-mixed populations, but almost identical results are found for the Wright-Fisher and Pairwise Comparison processes. Surprisingly simple conditions specify whether a strategy is more abundant on average than 1/*n*, or than another strategy, in the mutation-selection equilibrium. We find one condition that holds for low mutation rate and another condition that holds for high mutation rate. A linear combination of these two conditions holds for any mutation rate. Our results allow a complete characterization of *n* × *n* games in the limit of weak selection.

Evolutionary game theory is the study of frequency dependent selection (Maynard Smith and Price, 1973; Maynard Smith, 1982; Hofbauer and Sigmund, 1998, 2003; Nowak and Sigmund, 2004). The individuals of a population can adopt one of several strategies, which can be seen as genotypes or phenotypes. The payoff for each strategy is a linear function of the relative frequencies of all strategies. The coefficients of this linear function are the entries of the payoff matrix. Payoff is interpreted as fitness: individuals reproduce at rates that are proportional to their payoff. Reproduction can be genetic or cultural.

Evolutionary game theory provides a theoretical foundation for understanding human and animal behavior (Schelling, 1980; Maynard Smith, 1982; Fudenberg and Tirole, 1991; Binmore, 1994; Aumann and Maschler, 1995; Samuelson, 1997). Applications of evolutionary game theory include games among viruses (Turner and Chao, 1999, 2003) and bacteria (Kerr et al., 2002) as well as host-parasite interactions (Nowak and May, 1994). Cellular interactions within the human body can also be evolutionary games. As an example we mention the combat between the immune system and virus infected cells (Nowak et al., 1991; May and Nowak, 1995; Bonhoeffer and Nowak, 1995). The ubiquity of evolutionary game dynamics is not surprising, because evolutionary game theory provides a fairly general approach to evolutionary dynamics (Nowak, 2006). There is also an equivalence between fundamental equations of ecology (May, 1973) and those of evolutionary game theory (Hofbauer and Sigmund, 1998).

Let us consider a game with *n* strategies. The payoff values are given by the *n* × *n* payoff matrix *A* = [*a _{ij}*]. This means that an individual using strategy *i* receives payoff *a _{ij}* when interacting with an individual using strategy *j*.
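This payoff structure can be sketched in a few lines (an illustrative example; the matrix entries are arbitrary and not from the paper):

```python
import numpy as np

# Hypothetical 3-strategy payoff matrix A = [a_ij]: a_ij is the payoff
# of an i-player interacting with a j-player.
A = np.array([[3.0, 0.0, 5.0],
              [5.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])

x = np.array([0.5, 0.3, 0.2])   # strategy frequencies, summing to 1

payoffs = A @ x                 # (Ax)_i: expected payoff of strategy i
mean_payoff = x @ A @ x         # x^T A x: average payoff in the population
```

Each payoff is linear in the frequencies **x**, and the coefficients of this linear function are exactly the rows of *A*.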

The traditional approach to evolutionary game dynamics uses well-mixed populations of infinite size. In this case the deterministic selection dynamics can be described by the replicator equation, which is an ordinary differential equation defined on the simplex *S _{n}* (Taylor and Jonker, 1978; Weibull, 1995). Many interesting properties of this equation are described in the book by Hofbauer and Sigmund (1998).

More recently there have been efforts to study evolutionary game dynamics in populations of finite size (Riley, 1979; Schaffer, 1988; Kandori et al., 1993; Kandori and Rob, 1995; Fogel et al., 1998; Ficici and Pollack, 2000; Schreiber, 2001; Nowak et al., 2004; Taylor et al., 2004; Wild and Taylor, 2004; Traulsen et al., 2005). For finite populations a stochastic description is necessary. Of particular interest is the fixation probability of a strategy (Nowak et al., 2004; Antal and Scheuring, 2006; Lessard and Ladret, 2007): the probability that a single mutant strategy overtakes a homogeneous population which uses another strategy. When only two strategies are involved, the strategy with higher fixation probability is considered to be more ‘favored’ by selection. We can take a game of *n* strategies and analyze all pairwise fixation probabilities to find which strategies are favored by selection (Imhof and Nowak, 2006). This concept, in some way, compares strategies at all relative frequencies during the fixation process, as opposed to the Nash and ESS conditions.

The study of fixation probabilities, however, is only conclusive for small mutation rates, where most of the time all players use the same strategy. In this paper we propose a more general way of identifying the strategy most favored by selection: it is the strategy with the highest average frequency in the stationary state. For brevity, throughout this paper we call the average frequency of a strategy in the stationary state its *abundance*. The criterion for higher abundance can be used for arbitrary mutation rates. Moreover, for small mutation rates this criterion can be formulated in terms of pairwise fixation probabilities.

In particular, we focus on stochastic evolutionary dynamics in populations of finite size *N*, although for simplicity we shall consider the large (but still finite) population size limit. Evolutionary updating occurs according to the frequency dependent Moran process (Nowak et al., 2004; Taylor et al., 2004), but the Wright-Fisher process (Imhof and Nowak, 2006) and the Pairwise Comparison process (Szabó and Tőke, 1998; Traulsen et al., 2007) are also discussed; the details of these processes are explained in the next sections. In addition, we assume that individuals reproduce proportionally to their payoffs but subject to mutation with probability *u* > 0. With probability 1 − *u* the imitator (or offspring) adopts the strategy of the teacher (or parent); with probability *u* one of the *n* strategies is chosen at random.

We study the case of weak selection. For the frequency dependent Moran process, the payoff of strategy *i* is given by *f _{i}* = 1 + *δπ _{i}*, where *π _{i}* is the expected payoff of strategy *i* from the games and *δ* ≥ 0 is the intensity of selection. Weak selection means *δ* ≪ 1.

In this paper we study *n*-strategy games in a well mixed population of *N* players. We consider that selection favors a strategy if its abundance (average frequency) exceeds 1/*n*. Conversely, selection opposes a strategy if its abundance is less than 1/*n*. We establish the following results. For low mutation probability (*u* ≪ 1/*N*), we find that selection favors strategy *k* if

$${L}_{k}=\frac{1}{n}\sum _{i=1}^{n}({a}_{kk}+{a}_{ki}-{a}_{ik}-{a}_{ii})>0.$$

(1)

For high mutation probability (*u* ≫ 1/*N*), selection favors strategy *k* if

$${H}_{k}=\frac{1}{{n}^{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}({a}_{kj}-{a}_{ij})>0.$$

(2)

For arbitrary mutation probability the general expression for selection to favor strategy *k* is

$${L}_{k}+Nu{H}_{k}>0.$$

(3)

Strategy *k* is more abundant than strategy *j* if

$${L}_{k}+Nu{H}_{k}>{L}_{j}+Nu{H}_{j}.$$

(4)

All these results hold for large but finite population size, 1 ≪ *N* ≪ 1/*δ*. They allow a complete characterization of *n* × *n* games in the limit of weak selection. The equilibrium frequencies of each strategy are also given in the paper.
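Conditions (1)–(4) are simple linear functions of the payoff entries and are easy to evaluate for any payoff matrix. A minimal sketch (our own helper code, not the authors'):

```python
import numpy as np

def L_vec(A):
    """L_k = (1/n) sum_i (a_kk + a_ki - a_ik - a_ii), Eq. (1)."""
    n = A.shape[0]
    return (n * np.diag(A) + A.sum(axis=1) - A.sum(axis=0) - np.trace(A)) / n

def H_vec(A):
    """H_k = (1/n^2) sum_{i,j} (a_kj - a_ij), Eq. (2): row mean minus grand mean."""
    return A.mean(axis=1) - A.mean()

def favored(A, mu):
    """Selection favors strategy k (abundance > 1/n) iff L_k + mu*H_k > 0,
    Eq. (3), where mu = N*u is the rescaled mutation rate."""
    return L_vec(A) + mu * H_vec(A) > 0
```

Condition (4) follows by comparing the values *L _{k}* + *μH _{k}* across strategies: their order is the order of abundances.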

We can gain some qualitative understanding of our low (1) and high (2) mutation rate results. For low mutation rates, most of the time all players use the same strategy until another strategy takes over, so only two strategies are involved in any takeover. Under weak selection, a single *k* player fixates in a population of *i* players with a higher probability than a single *i* player fixates in a population of *k* players if *a _{kk}* + *a _{ki}* > *a _{ik}* + *a _{ii}*. Condition (1) is just this pairwise comparison averaged over all opponent strategies *i*. For high mutation rates, all strategies are present at frequency close to 1/*n* at all times. Condition (2) then states that strategy *k* is favored if its average payoff against a random opponent exceeds the average payoff in the whole population.

The rest of the paper is structured as follows. In Section 2, we derive the general conditions for strategy abundance for any mutation rates. Section 3 provides three concrete examples. Possible extensions of our method to strong selection, more general mutation rates, the Wright-Fisher and the Pairwise Comparison processes are discussed in Section 4. We summarize our results in Section 5.

Let us consider a well mixed population of *N* players. Each of them plays one of the *n* ≥ 2 strategies. The state of the system is described by the *n*-dimensional column vector **X**, where *X _{i}* is the number of players using strategy *i*, so that Σ_{i} *X _{i}* = *N*. The corresponding vector of frequencies is **x** = **X**/*N*.

The dynamics of the system is given by the frequency dependent Moran process. In each time step a randomly chosen individual is replaced by a copy of an individual chosen with probability proportional to its fitness. The offspring inherits the parent’s strategy with probability 1 − *u*, or adopts a random strategy with probability *u* > 0.
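This update rule can be sketched as follows (an illustrative implementation under the paper's assumptions, with fitness *f* = 1 + *δ*·payoff; the function and variable names are ours):

```python
import numpy as np

def moran_step(X, A, delta, u, rng):
    """One step of the frequency dependent Moran process with mutation.
    X: strategy counts (summing to N); A: payoff matrix;
    delta: intensity of selection; u: mutation probability."""
    N, n = X.sum(), len(X)
    x = X / N
    f = 1.0 + delta * (A @ x)                # fitness of each strategy
    w = X * f                                # total fitness per strategy
    parent = rng.choice(n, p=w / w.sum())    # birth: proportional to fitness
    # with probability u the offspring adopts a random strategy instead
    child = rng.integers(n) if rng.random() < u else parent
    dead = rng.choice(n, p=x)                # death: uniformly random individual
    X = X.copy()
    X[child] += 1
    X[dead] -= 1
    return X
```

Iterating `moran_step` and time-averaging the frequencies gives a numerical estimate of the abundances that (20) below predicts analytically.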

We shall show below that the condition for strategy *k* to be more abundant than the average 1/*n* is equivalent to having a positive average change of its frequency during a single update step. Hence we start by deriving this latter quantity. In state **X**, the average number of offspring (fitness) of a *k*-player due to selection is *ω _{k}* = 1 − 1/*N* + *f _{k}*/(*N f̄*): each individual dies with probability 1/*N* per update step and reproduces with probability proportional to its payoff, where *f _{k}* is the payoff of a *k*-player and *f̄* is the average payoff in the population. Expanding to first order in *δ* gives

$${\omega}_{k}=1+\delta {N}^{-1}[{(\mathbf{Ax})}_{k}-{\mathbf{x}}^{T}\mathbf{Ax}]+\mathcal{O}({\delta}^{2}{N}^{-1}).$$

(5)

In one update step, the frequency of *k*-players changes on average due to selection by

$$\mathrm{\Delta}{x}_{k}^{\text{sel}}={x}_{k}{\omega}_{k}-{x}_{k}=\delta \mathrm{\Delta}{x}_{k}^{(1)}[1+\mathcal{O}(\delta )],$$

(6)

where the first derivative with respect to *δ* is

$$\mathrm{\Delta}{x}_{k}^{(1)}={N}^{-1}{x}_{k}[{(\mathbf{Ax})}_{k}-{\mathbf{x}}^{T}\mathbf{Ax}].$$

(7)
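Equation (7) is immediate to evaluate numerically (a sketch; the function name is ours):

```python
import numpy as np

def delta_x1(A, x, N):
    """First-order (in delta) change of each strategy's frequency per
    update step, Eq. (7): N^{-1} x_k [(Ax)_k - x^T A x]."""
    return x * (A @ x - x @ A @ x) / N
```

A quick sanity check: the entries of `delta_x1` always sum to zero, since selection only redistributes frequency between the strategies.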

The state of the system, **X**, changes over time due to selection and mutation. In the stationary state of the Moran process we find the system in state **X** with probability *P _{δ}*(**X**), where the index *δ* refers to the strength of selection. For weak selection, this distribution is a small perturbation of the neutral one: $P_{\delta}(\mathbf{X})={P}_{\delta =0}(\mathbf{X})[1+\mathcal{O}(\delta )]$.

Hence by averaging
$\mathrm{\Delta}{x}_{k}^{\text{sel}}$ in the stationary state, in the leading order in *δ* we obtain

$${\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}\equiv \sum _{\mathbf{X}}\mathrm{\Delta}{x}_{k}^{\text{sel}}{P}_{\delta}(\mathbf{X})=\delta \sum _{\mathbf{X}}\mathrm{\Delta}{x}_{k}^{(1)}{P}_{\delta =0}(\mathbf{X})\times [1+\mathcal{O}(\delta )].$$

(8)

Thus, we can describe the stationary state of the system for small *δ* by using the stationary distribution in the absence of selection, *δ* = 0. Since the correction term is independent of *N*, the above formula remains valid even in the large population size limit. Using expression (7) for
$\mathrm{\Delta}{x}_{k}^{\text{(1)}}$, the average change due to selection in the leading order can be written as

$$\begin{array}{l}{\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}=\delta {N}^{-1}\langle {x}_{k}[{(\mathbf{Ax})}_{k}-{\mathbf{x}}^{T}\mathbf{Ax}]\rangle \\ =\delta {N}^{-1}\left(\sum _{j}{a}_{kj}\langle {x}_{k}{x}_{j}\rangle -\sum _{i,j}{a}_{ij}\langle {x}_{k}{x}_{i}{x}_{j}\rangle \right),\end{array}$$

(9)

where $\langle \cdot \rangle$ denotes the average in the neutral stationary state (*δ* = 0).

So far we have only considered selection. By taking into account mutation as well, the expected total change of frequency in state **X** during one update step can be written as

$$\mathrm{\Delta}{x}_{k}^{\text{tot}}=\mathrm{\Delta}{x}_{k}^{\text{sel}}(1-u)+\frac{u}{N}\left(\frac{1}{n}-{x}_{k}\right).$$

(10)

The first term on the right hand side describes the change in the absence of mutation, which happens with probability 1 − *u*. The second term stands for the change due to mutation, which happens with probability *u*: the mutant offspring adopts strategy *k* with probability 1/*n*, while the replaced individual used strategy *k* with probability *x _{k}*, hence the average change (1/*n* − *x _{k}*)/*N*. In the stationary state the average total change of frequency vanishes, ${\langle \mathrm{\Delta}{x}_{k}^{\text{tot}}\rangle}_{\delta}=0$, which together with (10) leads to

$${\langle {x}_{k}\rangle}_{\delta}=\frac{1}{n}+N\frac{1-u}{u}{\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}.$$

(11)

We emphasize that this relationship is valid at any intensity of selection, although we are going to use it only in the weak selection limit. From (11) it follows that the condition ${\langle {x}_{k}\rangle}_{\delta}>1/n$ for strategy *k* to be more abundant than the average is equivalent to

$${\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}>0.$$

(12)

That is, for strategy *k* to be more abundant than the average, the change due to selection must be positive in the stationary state. Hence, as we claimed, instead of computing the mean frequency, we can now concentrate on the average change (9) during a single update step.

To evaluate (9) we need to calculate averages of the form $\langle {x}_{k}{x}_{j}\rangle$ and $\langle {x}_{k}{x}_{i}{x}_{j}\rangle$ in the neutral (*δ* = 0) stationary state. Since the neutral dynamics is symmetric under permutations of the strategy labels, these averages depend only on the pattern of coinciding indices:

$$\begin{array}{c}\langle {x}_{1}\rangle =\langle {x}_{i}\rangle \\ \langle {x}_{1}{x}_{1}\rangle =\langle {x}_{i}{x}_{i}\rangle \\ \langle {x}_{1}{x}_{2}\rangle =\langle {x}_{i}{x}_{j}\rangle \\ \langle {x}_{1}{x}_{1}{x}_{1}\rangle =\langle {x}_{i}{x}_{i}{x}_{i}\rangle \\ \langle {x}_{1}{x}_{2}{x}_{2}\rangle =\langle {x}_{i}{x}_{j}{x}_{j}\rangle \\ \langle {x}_{1}{x}_{2}{x}_{3}\rangle =\langle {x}_{i}{x}_{j}{x}_{k}\rangle \end{array}$$

(13)

for all *k* ≠ *i* ≠ *j* ≠ *k*. Equation (9) then takes the form

$$\begin{array}{l}N{\delta}^{-1}{\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}=\langle {x}_{1}{x}_{1}\rangle {a}_{kk}+\langle {x}_{1}{x}_{2}\rangle \sum _{i,i\ne k}{a}_{ki}-\langle {x}_{1}{x}_{1}{x}_{1}\rangle {a}_{kk}\\ -\langle {x}_{1}{x}_{2}{x}_{2}\rangle \sum _{i,i\ne k}({a}_{ki}+{a}_{ii}+{a}_{ik})-\langle {x}_{1}{x}_{2}{x}_{3}\rangle \sum _{\underset{k\ne i\ne j\ne k}{i,j}}{a}_{ij}.\end{array}$$

(14)

Note that $\langle {x}_{1}{x}_{2}{x}_{3}\rangle$ is not defined for *n* = 2, but in that case the last sum in (14) is zero anyway. Hence the following derivation is valid even for *n* = 2. By removing the restrictions from the summations in (14), we can rearrange this expression into

$$\begin{array}{l}N{\delta}^{-1}{\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}={a}_{kk}\left(\langle {x}_{1}{x}_{1}\rangle -\langle {x}_{1}{x}_{2}\rangle -\langle {x}_{1}{x}_{1}{x}_{1}\rangle +3\langle {x}_{1}{x}_{2}{x}_{2}\rangle -2\langle {x}_{1}{x}_{2}{x}_{3}\rangle \right)\\ +\langle {x}_{1}{x}_{2}\rangle \sum _{i}{a}_{ki}+\left(\langle {x}_{1}{x}_{2}{x}_{3}\rangle -\langle {x}_{1}{x}_{2}{x}_{2}\rangle \right)\sum _{i}({a}_{ki}+{a}_{ii}+{a}_{ik})\\ -\langle {x}_{1}{x}_{2}{x}_{3}\rangle \sum _{i,j}{a}_{ij}.\end{array}$$

(15)

Let us now interpret these average quantities. We draw *j* players at random from the population in the neutral stationary state, and define *s _{j}* as the probability that all of them use the same strategy. By symmetry, $\langle {x}_{1}\rangle =1/n$, ${s}_{2}=n\langle {x}_{1}{x}_{1}\rangle$, and ${s}_{3}=n\langle {x}_{1}{x}_{1}{x}_{1}\rangle$. We also have

$$\begin{array}{l}\langle {x}_{1}{x}_{2}\rangle =\langle (1-\sum _{2\le i\le n}{x}_{i}){x}_{2}\rangle =\langle {x}_{1}\rangle -\langle {x}_{1}{x}_{1}\rangle -(n-2)\langle {x}_{1}{x}_{2}\rangle \\ \langle {x}_{1}{x}_{2}{x}_{2}\rangle =\langle (1-\sum _{2\le i\le n}{x}_{i}){x}_{2}{x}_{2}\rangle =\langle {x}_{1}{x}_{1}\rangle -\langle {x}_{1}{x}_{1}{x}_{1}\rangle -(n-2)\langle {x}_{1}{x}_{2}{x}_{2}\rangle \\ \langle {x}_{1}{x}_{2}{x}_{3}\rangle =\langle (1-\sum _{2\le i\le n}{x}_{i}){x}_{2}{x}_{3}\rangle =\langle {x}_{1}{x}_{2}\rangle -2\langle {x}_{1}{x}_{2}{x}_{2}\rangle -(n-3)\langle {x}_{1}{x}_{2}{x}_{3}\rangle \end{array}$$

where we used the normalization condition Σ_{i} *x _{i}* = 1 and the symmetry relations (13). Thus, we can express all the averages in (13) in terms of only two probabilities, *s*_{2} and *s*_{3}:

$$\begin{array}{l}\langle {x}_{1}\rangle =\frac{1}{n}\\ \langle {x}_{1}{x}_{1}\rangle =\frac{{s}_{2}}{n}\\ \langle {x}_{1}{x}_{2}\rangle =\frac{1-{s}_{2}}{n(n-1)}\\ \langle {x}_{1}{x}_{1}{x}_{1}\rangle =\frac{{s}_{3}}{n}\\ \langle {x}_{1}{x}_{2}{x}_{2}\rangle =\frac{{s}_{2}-{s}_{3}}{n(n-1)}\\ \langle {x}_{1}{x}_{2}{x}_{3}\rangle =\frac{1-3{s}_{2}+2{s}_{3}}{n(n-1)(n-2)}.\end{array}$$

(16)

We note again that for *n* = 2 the last expression is ill defined, but it is not needed in that case.

Up to this point everything was calculated for finite *N*. Although further discussion for finite *N* is possible, it becomes quite unwieldy; hence for simplicity we consider only the large *N* limit from here on. In Appendix A we calculate the values of *s*_{2} and *s*_{3} for *N* ≫ 1, which are given by (A.3) and (A.7), respectively. By substituting these expressions into (16) we arrive at

$$\begin{array}{l}\langle {x}_{1}{x}_{1}\rangle =n(2+\mu )(n+\mu )C\\ \langle {x}_{1}{x}_{2}\rangle =\mu (2+\mu )nC\\ \langle {x}_{1}{x}_{1}{x}_{1}\rangle =(n+\mu )(2n+\mu )C\\ \langle {x}_{1}{x}_{2}{x}_{2}\rangle =\mu (n+\mu )C\\ \langle {x}_{1}{x}_{2}{x}_{3}\rangle ={\mu}^{2}C,\end{array}$$

(17)

where *C* = [*n*^{3}(1 + *μ*)(2 + *μ*)]^{−1} and *μ* = *Nu* is the rescaled mutation rate. With these correlations, (15) takes the form

$$\frac{N{\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}}{C\delta}=\mu {n}^{2}{a}_{kk}+\mu (2+\mu )n\sum _{i}{a}_{ki}-\mu n\sum _{i}({a}_{ki}+{a}_{ii}+{a}_{ik})-{\mu}^{2}\sum _{i,j}{a}_{ij},$$

where rearranging the terms leads to

$$\frac{N{\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}}{C\delta}={\mu}^{2}\left(n\sum _{i}{a}_{ki}-\sum _{i,j}{a}_{ij}\right)+\mu n\sum _{i}({a}_{kk}+{a}_{ki}-{a}_{ik}-{a}_{ii}).$$

By defining

$$\begin{array}{l}{L}_{k}=\frac{1}{n}\sum _{i}({a}_{kk}+{a}_{ki}-{a}_{ik}-{a}_{ii})\\ {H}_{k}=\frac{1}{{n}^{2}}\sum _{i,j}({a}_{ki}-{a}_{ij}),\end{array}$$

(18)

we finally arrive at our main result

$${\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}=\frac{\delta \mu \phantom{\rule{0.16667em}{0ex}}({L}_{k}+\mu {H}_{k})}{nN(1+\mu )(2+\mu )}.$$

(19)

This expression is valid in the limit of large population size, *N* ≫ 1, for weak selection, *Nδ* ≪ 1, with *μ* = *Nu* kept constant. Condition (12) for strategy *k* to be more abundant than the average 1/*n* is simply *L _{k}* + *μH _{k}* > 0, which is condition (3).
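As a consistency check of (17), note that it corresponds, via (16), to the neutral probabilities *s*_{2} = (*n* + *μ*)/(*n*(1 + *μ*)) and *s*_{3} = (*n* + *μ*)(2*n* + *μ*)/(*n*^{2}(1 + *μ*)(2 + *μ*)), with the *N*-independent constant *C* = [*n*^{3}(1 + *μ*)(2 + *μ*)]^{−1}. The following sketch (our code; integer *n* and *μ* only, for exact rational arithmetic) verifies the correspondence term by term:

```python
from fractions import Fraction as F

def correlations(n, mu):
    """Neutral stationary correlations via Eq. (16), using the large-N
    values s2 = (n+mu)/(n(1+mu)) and
    s3 = (n+mu)(2n+mu)/(n^2 (1+mu)(2+mu))."""
    s2 = F(n + mu, n * (1 + mu))
    s3 = F((n + mu) * (2 * n + mu), n**2 * (1 + mu) * (2 + mu))
    return {
        "x1x1":   s2 / n,
        "x1x2":   (1 - s2) / (n * (n - 1)),
        "x1x1x1": s3 / n,
        "x1x2x2": (s2 - s3) / (n * (n - 1)),
        "x1x2x3": (1 - 3 * s2 + 2 * s3) / (n * (n - 1) * (n - 2)),
    }
```

For any *n* ≥ 3 and *μ* these rationals coincide exactly with the right hand sides of (17).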

By substituting (19) into (11) we obtain the abundances (average frequencies) in the weak selection stationary state

$${\langle {x}_{k}\rangle}_{\delta}=\frac{1}{n}\left[1+\delta N(1-u)\frac{{L}_{k}+Nu{H}_{k}}{(1+Nu)(2+Nu)}\right].$$

(20)

This expression becomes exact in the *N* → ∞, *Nδ* → 0 limit, if *Nu* = *μ* is kept constant. It becomes clear at this point that although we only used *δ* ≪ 1 to derive (19), we actually need *δN* ≪ 1 to have frequencies close to 1/*n* in (20).

For only two strategies (*n* = 2) the general formula (19) leads to

$${\langle \mathrm{\Delta}{x}_{1}^{\text{sel}}\rangle}_{\delta}=\frac{\delta u}{8(1+Nu)}({a}_{11}+{a}_{12}-{a}_{21}-{a}_{22}).$$

(21)

The peculiarity of the two-strategy case is that the condition (12) for the higher abundance (mean frequency) of strategy 1,

$${a}_{11}+{a}_{12}-{a}_{21}-{a}_{22}>0$$

(22)

does not depend on the mutation probability *u*. It has been shown in (Antal et al., 2008a) that very similar conditions hold for finite population size. With self interaction we obtain the same result, but when self interaction is excluded, the condition becomes

$$({a}_{11}+{a}_{12}-{a}_{21}-{a}_{22})N-2{a}_{11}+2{a}_{22}>0$$

(23)

This condition does not depend on the mutation probability *u* either. Moreover, the above conditions are also valid for arbitrary strength of selection for a general class of models, in particular for the Moran model with exponential payoff functions or for the Pairwise Comparison process (Antal et al., 2008a). Note that this law is well known for several models in the *low mutation rate* limit (Kandori et al., 1993; Nowak et al., 2004).

There is an intimate relationship between our conditions for high abundance and fixation probabilities for low mutation rates, *μ* ≪ 1. In this limit, most of the time all players follow the same strategy, and only rarely does a single mutant take over the entire homogeneous population (fixate). During fixation only two types of players are present. The fixation probability *ρ _{ij}* is the probability that a single *i*-player takes over a population of *N* − 1 *j*-players.

Let us first consider *n* = 2 strategy games, where we label the two strategies as *k* and *i*. In the stationary state there are rare transitions between the pure *k*-player and pure *i*-player states, and the system spends a fraction *ρ _{ki}*/(*ρ _{ki}* + *ρ _{ik}*) of the time in the all-*k* state. Hence for weak selection

$$\langle {x}_{k}\rangle =\frac{1}{2}\left[1+\frac{N}{2}({\rho}_{ki}-{\rho}_{ik})\right]$$

(24)

since all fixation probabilities are 1/*N* in the leading order of *δ*. On the other hand, the abundance (20) for two strategies and low mutations becomes

$$\langle {x}_{k}\rangle =\frac{1}{2}\left(1+\frac{N}{2}\delta {L}_{k}\right)$$

(25)

Consequently, comparing (24) with (25), we can express *δL _{k}* in terms of the fixation probabilities as

$$\frac{\delta}{2}({a}_{kk}+{a}_{ki}-{a}_{ik}-{a}_{ii})={\rho}_{ki}-{\rho}_{ik}.$$

(26)

This equality can also be derived independently from the exact expression of the fixation probability (Nowak et al., 2004)

$${\rho}_{ki}=\frac{1}{N}\left[1+\frac{\delta N}{6}({a}_{kk}+2{a}_{ki}-{a}_{ik}-2{a}_{ii})\right]$$

(27)

For *n* strategies, by using (1) and (26), we can express the condition *L _{k}* > 0 for strategy *k* to be favored at low mutation rates in terms of pairwise fixation probabilities as

$$\sum _{i}{\rho}_{ki}>\sum _{i}{\rho}_{ik}$$

(28)

This condition can be interpreted as follows: strategy *k* is more abundant than 1/*n* in the low mutation rate limit if the average fixation probability of a single *k* player into other pure strategy states is larger than the average fixation probability of other strategies into a pure strategy *k* population. For these averages we take all strategies with the same weights.
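Under weak selection, (26) and (27) make this criterion fully explicit. A small sketch (our helper code) computes the pairwise fixation probabilities of (27) and checks that condition (28) reduces to *L _{k}* > 0:

```python
import numpy as np

def rho(A, N, delta):
    """Weak-selection pairwise fixation probabilities, Eq. (27):
    rho[k, i] = probability that a single k-mutant takes over
    a population of N-1 i-players."""
    n = A.shape[0]
    r = np.empty((n, n))
    for k in range(n):
        for i in range(n):
            r[k, i] = (1 + delta * N / 6 *
                       (A[k, k] + 2 * A[k, i] - A[i, k] - 2 * A[i, i])) / N
    return r
```

From (26), *ρ _{ki}* − *ρ _{ik}* = (*δ*/2)(*a _{kk}* + *a _{ki}* − *a _{ik}* − *a _{ii}*), so Σ_{i}(*ρ _{ki}* − *ρ _{ik}*) = (*δn*/2)*L _{k}* and (28) is indeed equivalent to *L _{k}* > 0.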

Here we provide three applications of our results to three-strategy games. First, in 3.1 we study the effect of loners on cooperators and defectors. Then, in 3.2 we show how mutation alone can make a strategy more abundant. Finally, in 3.3 we study the repeated Prisoner's Dilemma game.

To see the difference between our weak selection approach and a traditional game-theoretic one, let us consider the following example. We start with a Prisoner's Dilemma game between cooperators ($\mathcal{C}$) and defectors ($\mathcal{D}$), given by the payoff matrix

$$\begin{array}{c}\phantom{\rule{0.16667em}{0ex}}\\ \mathcal{C}\\ \mathcal{D}\end{array}\left(\begin{array}{cc}\mathcal{C}& \mathcal{D}\\ 10& 1\\ 11& 2\end{array}\right).$$

(29)

Clearly, defectors dominate cooperators, so we expect that defectors are more abundant in a stationary state. Indeed, from condition (22) we obtain

$${a}_{11}+{a}_{12}-{a}_{21}-{a}_{22}=-2<0.$$

(30)

Thus strategy $\mathcal{D}$ is more abundant than $\mathcal{C}$ for any mutation rate.

Surprisingly, the introduction of loners ($\mathcal{L}$), which do not participate in the game (Hauert et al., 2002), can dramatically change the balance between $\mathcal{C}$ and $\mathcal{D}$. Consider the following game:

$$\begin{array}{c}\phantom{\rule{0.16667em}{0ex}}\\ \mathcal{C}\\ \mathcal{D}\\ \mathcal{L}\end{array}\left(\begin{array}{ccc}\mathcal{C}& \mathcal{D}& \mathcal{L}\\ 10& 1& 0\\ 11& 2& 0\\ 0& 0& 0\end{array}\right).$$

(31)

Loners are dominated by both cooperators and defectors. Elimination of the dominated strategy leads to a game between $\mathcal{C}$ and $\mathcal{D}$, in which $\mathcal{D}$ wins. Thus, standard game-theoretic arguments predict that strategy $\mathcal{D}$ is the most abundant. However, these arguments fail for weak selection, where it is not enough to know that one strategy dominates another; one must also know how strong this dominance is. In pairwise interactions, the advantage of $\mathcal{C}$ over $\mathcal{L}$ is significantly larger than that of $\mathcal{D}$ over $\mathcal{L}$, as can be seen from the matrices:

$$\begin{array}{c}\phantom{\rule{0.16667em}{0ex}}\\ \mathcal{C}\\ \mathcal{L}\end{array}\left(\begin{array}{cc}\mathcal{C}& \mathcal{L}\\ 10& 0\\ 0& 0\end{array}\right)\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\begin{array}{c}\phantom{\rule{0.16667em}{0ex}}\\ \mathcal{D}\\ \mathcal{L}\end{array}\left(\begin{array}{cc}\mathcal{D}& \mathcal{L}\\ 2& 0\\ 0& 0\end{array}\right).$$

(32)

This advantage of $\mathcal{C}$ over $\mathcal{L}$ can overcompensate for the disadvantage $\mathcal{C}$ has against $\mathcal{D}$; therefore the abundance of $\mathcal{C}$ can be the highest.

Indeed, the relevant quantities for low mutation rates are

$${L}_{\mathcal{C}}=\frac{8}{3},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{L}_{\mathcal{D}}=\frac{4}{3},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{L}_{\mathcal{L}}=-4.$$

(33)

Thus, both $\mathcal{C}$ and $\mathcal{D}$ have larger abundance than the neutral value 1/3. But since ${L}_{\mathcal{C}}>{L}_{\mathcal{D}}$, strategy $\mathcal{C}$ has the highest abundance. The introduction of loners reverses the order of abundance of $\mathcal{C}$ and $\mathcal{D}$ when the mutation rate is small. In other words, loners favor cooperators.

For high mutation rates the relevant quantities are

$${H}_{\mathcal{C}}=1,\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{H}_{\mathcal{D}}=\frac{5}{3},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{H}_{\mathcal{L}}=-\frac{8}{3}.$$

(34)

Hence, according to (3), both $\mathcal{C}$ and $\mathcal{D}$ have an abundance larger than 1/3 for any mutation rate. For high mutation rates, however, since ${H}_{\mathcal{C}}<{H}_{\mathcal{D}}$, strategy $\mathcal{D}$ becomes the most abundant. In fact, $\mathcal{C}$ is the most abundant for *μ* < *μ*^{*} = 2, but it is $\mathcal{D}$ for *μ* > *μ*^{*}.
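These numbers, and the crossover point, can be verified directly from definitions (1) and (2) (a quick sketch; the code is ours):

```python
import numpy as np

A = np.array([[10., 1., 0.],   # C: cooperators
              [11., 2., 0.],   # D: defectors
              [ 0., 0., 0.]])  # L: loners
n = A.shape[0]

# L_k and H_k from Eqs. (1) and (2)
Lk = (n * np.diag(A) + A.sum(axis=1) - A.sum(axis=0) - np.trace(A)) / n
Hk = A.mean(axis=1) - A.mean()

# crossover: L_C + mu*H_C = L_D + mu*H_D  =>  mu* = (L_C - L_D)/(H_D - H_C)
mu_star = (Lk[0] - Lk[1]) / (Hk[1] - Hk[0])
```

Here `mu_star` evaluates to 2, the rescaled mutation rate at which $\mathcal{D}$ overtakes $\mathcal{C}$ as the most abundant strategy.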

As a second example, we address the game

$$\begin{array}{c}\phantom{\rule{0.16667em}{0ex}}\\ {S}_{1}\\ {S}_{2}\\ {S}_{3}\end{array}\left(\begin{array}{ccc}{S}_{1}& {S}_{2}& {S}_{3}\\ 1& 0& 13\\ 0& \lambda & 8\\ 0& 7& 9\end{array}\right),$$

(35)

where λ is a free parameter. For λ < 7, *S*_{2} is dominated by *S*_{3}. Eliminating *S*_{2} leads to a game in which *S*_{1} dominates *S*_{3}. Thus, classical game theoretic analysis shows that for *λ* < 7, all players should choose *S*_{1}. It turns out that this state is also the only stable fixed point of the replicator equation for *λ* < 7.

However, the above reasoning does not apply for weak selection. The relevant quantities for low mutation rates are

$${L}_{1}=\frac{6-\lambda}{3},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{L}_{2}=\frac{2\lambda -9}{3},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{L}_{3}=\frac{3-\lambda}{3},$$

(36)

and for high mutation rates they are

$${H}_{1}=\frac{4-\lambda}{9},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{H}_{2}=\frac{2\lambda -14}{9},\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{H}_{3}=\frac{10-\lambda}{9}.$$

(37)

Thus, we expect thresholds where the abundance of a strategy crosses 1/3 at *λ* = 3, *λ* = 4.5, and *λ* = 6 for small mutation rates, and at *λ* = 4, *λ* = 7, and *λ* = 10 for high mutation rates. For each mutation rate and each value of *λ*, our conditions determine the order of the strategies. Fig. 1 shows how these thresholds change with the mutation rate. There are six possible orderings of the three strategies, and in each ordering either one or two strategies can have an abundance larger than 1/3. Therefore, there are 12 ways of ordering the strategies relative to 1/3. In this concrete example, all 12 regions can be reached by varying the parameter *λ* and the mutation rate *μ*. For example, if we fix *λ* = 4.6, then just by changing the rescaled mutation rate we obtain six different orderings of the strategies relative to 1/3, as one can see in Fig. 1.
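The quantities (36) and (37) follow directly from the payoff matrix (35); a short numerical check (our code):

```python
import numpy as np

def LH(lam):
    """L_k and H_k, Eqs. (1)-(2), for the payoff matrix (35)."""
    A = np.array([[1.0, 0.0, 13.0],
                  [0.0, lam,  8.0],
                  [0.0, 7.0,  9.0]])
    n = A.shape[0]
    Lk = (n * np.diag(A) + A.sum(axis=1) - A.sum(axis=0) - np.trace(A)) / n
    Hk = A.mean(axis=1) - A.mean()
    return Lk, Hk
```

For any *λ* this returns *L* = ((6 − *λ*)/3, (2*λ* − 9)/3, (3 − *λ*)/3) and *H* = ((4 − *λ*)/9, (2*λ* − 14)/9, (10 − *λ*)/9), in agreement with (36) and (37); scanning *μ* at fixed *λ* and sorting *L _{k}* + *μH _{k}* reproduces the orderings of Fig. 1.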

In order to verify our results we performed simulations of the Moran model with the payoff matrix (35) at *λ* = 4.6. In Fig. 2 we compare the simulated frequencies of the strategies to the theoretical frequencies given by (20). The theory becomes exact in the limit *N* → ∞, *Nδ* → 0, with *μ* = *Nu* constant. As shown in Fig. 2, already at *N* = 30 and *δ* = 0.003, which corresponds to *Nδ* = 0.09, we find excellent agreement with the theory.

As a third example, we discuss the interaction of ‘always cooperate’ (AllC), ‘always defect’ (AllD), and ‘tit-for-tat’ (TFT) strategies in the *repeated* Prisoner's Dilemma game (Nowak and Sigmund, 1989; Imhof et al., 2005). Each pair of players plays *m* ≥ 2 rounds. TFT repeats its opponent's strategy from the previous round, but cooperates in the first round. Acting as a cooperator costs *c*, while playing against a cooperator yields benefit *b*. Hence, the payoff matrix is given by

$$\begin{array}{c}\phantom{\rule{0.16667em}{0ex}}\\ \text{AllC}\\ \text{AllD}\\ \text{TFT}\end{array}\left(\begin{array}{ccc}\text{AllC}& \text{AllD}& \text{TFT}\\ (b-c)m& -cm& (b-c)m\\ bm& 0& b\\ (b-c)m& -c& (b-c)m\end{array}\right).$$

(38)

For low mutation rates, the relevant quantities are

$$\begin{array}{l}{L}_{\text{AllC}}=-\frac{2cm}{3}\\ {L}_{\text{AllD}}=\frac{-b(m-1)+c(3m+1)}{3}\\ {L}_{\text{TFT}}=\frac{b(m-1)-c(m+1)}{3}.\end{array}$$

(39)

The most apparent consequence is that for low mutation rates cooperators never exceed the abundance of 1/3. This is not surprising, since AllC is a fairly dull strategy: the mean AllD and the cleverer TFT are expected to perform better. As we increase the benefit-to-cost ratio *b*/*c*, the order of abundance of these strategies changes at several particular values. For
${\scriptstyle \frac{b}{c}}<{\scriptstyle \frac{m+1}{m-1}}$, only the abundance of AllD is larger than 1/3. For
${\scriptstyle \frac{m+1}{m-1}}<{\scriptstyle \frac{b}{c}}<{\scriptstyle \frac{2m+1}{m-1}}$, the abundance of both AllD and TFT is above 1/3, with AllD still dominating TFT. For
${\scriptstyle \frac{b}{c}}>{\scriptstyle \frac{2m+1}{m-1}}$ TFT becomes more abundant than AllD, for
${\scriptstyle \frac{b}{c}}>{\scriptstyle \frac{3m+1}{m-1}}$ the abundance of AllD drops below 1/3, and for
${\scriptstyle \frac{b}{c}}>{\scriptstyle \frac{5m+1}{m-1}}$, it is even smaller than the abundance of AllC.

For high mutation rates, the relevant quantities are

$$\begin{array}{l}{H}_{\text{AllC}}=\frac{b(m-1)-c(4m-1)}{9}\\ {H}_{\text{AllD}}=\frac{-2b(m-1)+c(5m+1)}{9}\\ {H}_{\text{TFT}}=\frac{b(m-1)-c(m+2)}{9}.\end{array}$$

(40)

Surprisingly, the abundance of AllC can now exceed 1/3 for high mutation rates. Again, as we increase the benefit-to-cost ratio *b*/*c*, the abundances change order at particular *b*/*c* values, which differ between the high and low mutation rate limits. For high mutation rates, when
${\scriptstyle \frac{b}{c}}<{\scriptstyle \frac{m+2}{m-1}}$, only the abundance of AllD exceeds 1/3. For
${\scriptstyle \frac{m+2}{m-1}}<{\scriptstyle \frac{b}{c}}<{\scriptstyle \frac{2m+1}{m-1}}$, also the abundance of TFT is larger than 1/3, but does not exceed the abundance of AllD. For
${\scriptstyle \frac{2m+1}{m-1}}<{\scriptstyle \frac{b}{c}}<{\scriptstyle \frac{5m+1}{2(m-1)}}$, AllD is less abundant than TFT. At
${\scriptstyle \frac{b}{c}}={\scriptstyle \frac{5m+1}{2(m-1)}}$, the abundance of AllD drops below 1/3 and it becomes identical to the abundance of AllC at
${\scriptstyle \frac{b}{c}}={\scriptstyle \frac{3m}{m-1}}$. Finally, for
${\scriptstyle \frac{b}{c}}>{\scriptstyle \frac{4m-1}{m-1}}$, even the abundance of AllC exceeds 1/3, but it always remains below the abundance of TFT. The relations between the strategies and these thresholds are depicted in Fig. 3.
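All of the *b*/*c* thresholds above follow from (39) and (40). A compact sketch (our code) computes *L _{k}* and *H _{k}* for the matrix (38):

```python
import numpy as np

def rpd_LH(b, c, m):
    """L_k and H_k, Eqs. (1)-(2), for the AllC/AllD/TFT matrix (38)."""
    A = np.array([[(b - c) * m, -c * m,      (b - c) * m],
                  [ b * m,       0.0,         b         ],
                  [(b - c) * m, -c,          (b - c) * m]])
    n = A.shape[0]
    Lk = (n * np.diag(A) + A.sum(axis=1) - A.sum(axis=0) - np.trace(A)) / n
    Hk = A.mean(axis=1) - A.mean()
    return Lk, Hk
```

For example, at *b*/*c* = (2*m* + 1)/(*m* − 1) the returned *L* values for AllD and TFT coincide, which is the low-mutation threshold where TFT overtakes AllD.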

Fig. 3. Strategy abundance in the interaction between AllC, AllD, and TFT in the probability simplex *S*_{3}. Dark areas are inaccessible to the evolutionary dynamics. Red lines show thresholds where a strategy abundance crosses 1/3.

The most interesting region is ${\scriptstyle \frac{b}{c}}>{\scriptstyle \frac{4m-1}{m-1}}$, where the abundance of AllC exceeds 1/3 (the yellow region in Fig. 3b). This is not possible for low mutation rates. High mutation rates, together with the TFT strategy, can facilitate AllC in raising its abundance above average.

In this section we discuss possible extensions and limitations of our method. First in 4.1 we address the strong selection limit. Then in 4.2 we consider more general mutation rates. Finally in 4.3 two alternative dynamics are studied.

Can we say something without the weak selection assumption? As we mentioned in Section 2.2, for only two strategies condition (19) is valid for any intensity of selection in a wide class of models (Antal et al., 2008a). We can also argue that our condition (2) is valid for very high mutation probabilities, namely for *u* → 1, for arbitrary strength of selection. In this case players pick random strategies most of the time, hence the frequencies of all strategies are close to 1/*n*. This implies that the payoff of a *k*-player is approximately *f _{k}* = (1/*n*)∑_{*j*} *a _{kj}*, independent of the actual configuration of the population, so the strategies with the highest average payoff are the most abundant, which is the content of condition (2).
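A quick numerical illustration of this high mutation argument, using a hypothetical payoff matrix of our own: when every player holds an essentially random strategy, the composition is close to (1/*n*, …, 1/*n*) and each strategy's payoff is close to its row average.

```python
import random

random.seed(1)

# Hypothetical 3x3 payoff matrix, for illustration only.
A = [[3.0, 0.0, 5.0],
     [5.0, 1.0, 0.0],
     [0.0, 2.0, 2.0]]
n, N = 3, 10_000

# For u -> 1 each player holds an essentially random strategy, so the
# composition x is multinomially distributed around (1/n, ..., 1/n).
counts = [0] * n
for _ in range(N):
    counts[random.randrange(n)] += 1
x = [c / N for c in counts]

# The payoff f_k = sum_j a_kj x_j is then close to the row average of A,
# regardless of which particular players hold which strategy.
for k in range(n):
    f_k = sum(A[k][j] * x[j] for j in range(n))
    assert abs(f_k - sum(A[k]) / n) < 0.1
```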

The situation is more complex in the low mutation rate limit for arbitrary strength of selection. If the mutation rate is sufficiently small, we can assume that there are at most two strategies present in the system at any given time (Fudenberg and Imhof, 2006). Then we can use the fixation probabilities, or their large *N* asymptotic values (Antal and Scheuring, 2006; Traulsen et al., 2006), and describe the system effectively as a Markov process on *n* homogeneous strategy states. This description, however, can lead to very different conditions for arbitrary selection than for weak selection. Note also that if two strategies *j* and *k* tend to coexist, that is *a _{jj}* < *a _{kj}* and *a _{kk}* < *a _{jk}*, then fixation can take extremely long, and this effective description breaks down.
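The embedded Markov chain description can be sketched numerically. The code below assumes the standard fixation probability formula for the frequency dependent Moran process with two strategies (cf. Nowak et al., 2004); the function names and parameters are illustrative, not the paper's notation.

```python
import numpy as np

def fixation_prob(a, b, c, d, N, delta):
    """Fixation probability of a single A mutant in a B resident
    population, for the frequency dependent Moran process with payoff
    matrix [[a, b], [c, d]] (standard formula, cf. Nowak et al., 2004)."""
    prod, total = 1.0, 1.0
    for i in range(1, N):  # i mutants currently present
        fA = 1 - delta + delta * (a * (i - 1) + b * (N - i)) / (N - 1)
        fB = 1 - delta + delta * (c * i + d * (N - i - 1)) / (N - 1)
        prod *= fB / fA
        total += prod
    return 1.0 / total

def stationary_low_mutation(A, N, delta):
    """Embedded Markov chain over the n homogeneous states: from the
    all-j state, a single k mutant appears (uniformly among the n - 1
    alternatives) and takes over w.p. equal to its fixation probability."""
    n = len(A)
    Q = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if k != j:
                Q[j, k] = fixation_prob(A[k][k], A[k][j],
                                        A[j][k], A[j][j], N, delta) / (n - 1)
        Q[j, j] = 1.0 - Q[j].sum()
    # stationary distribution: left eigenvector of Q for eigenvalue 1
    w, v = np.linalg.eig(Q.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()
```

In the neutral case the fixation probability reduces to 1/*N*, and for a payoff matrix symmetric under relabeling the stationary distribution is uniform, which provides simple sanity checks.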

Throughout this paper we have considered uniform mutations: each strategy mutates with the same probability *u* to a random strategy. In this section we generalize our method to a broader class of mutation rates. For uniform mutation rates all strategies have equal abundances in the absence of selection, and we have studied the effect of selection on this uniform distribution. In contrast, for non-uniform mutation rates strategies typically have different abundances already in the absence of selection. It can still be of interest to study whether selection increases or decreases these neutral abundances. In principle, the perturbation theory presented in this paper can be repeated for general mutation probabilities, but the discussion becomes unwieldy.

Here we present an easy generalization to a specific class of mutation rates. Imagine that each player mutates with probability *u*, but instead of uniformly adopting a new strategy, it adopts strategy *j* with probability *p _{j}* > 0. We can approximate these probabilities (up to arbitrary precision) by rational numbers *p _{j}* = *m _{j}*/*M* with a common denominator *M*. The model is then equivalent to one with uniform mutation among *M* strategies, of which *m _{j}* are identical copies of strategy *j*, so the results of this paper apply directly.
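A minimal sketch of this rational-number construction, with a hypothetical mutation kernel *p _{j}* of our own; `limit_denominator` controls the precision of the approximation.

```python
from fractions import Fraction
from math import lcm

# Hypothetical non-uniform mutation kernel for n = 3 strategies.
p = [0.5, 0.3, 0.2]

# Approximate each p_j by a rational number, then put all of them over
# a common denominator M: strategy j occupies m_j of M "slots".
fracs = [Fraction(q).limit_denominator(100) for q in p]
M = lcm(*(f.denominator for f in fracs))
slots = [int(f * M) for f in fracs]
assert sum(slots) == M

# Uniform mutation among the M slots reproduces the kernel: a mutant
# lands on each slot w.p. 1/M, hence on strategy j w.p. m_j / M = p_j.
for m_j, f in zip(slots, fracs):
    assert Fraction(m_j, M) == f
```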

Although we have focused on the Moran model in this paper, the results are almost identical for the Wright-Fisher (W-F) process and for the Pairwise Comparison process. In the W-F model, each player of a new (non-overlapping) generation chooses a parent from the previous generation with probability (abbreviated as w.p.) proportional to the parent’s payoff. The offspring inherits the parent’s strategy w.p. 1 − *u*, or adopts a random strategy w.p. *u*.
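The W-F update rule just described can be sketched as a short simulation. This is our own illustrative code, not the paper's; the linear payoff-to-fitness map 1 − *δ* + *δf* is an assumption consistent with the weak selection parametrization.

```python
import random

random.seed(0)

def wf_generation(strategies, payoff, delta, u, n):
    """One Wright-Fisher generation: every player of the new
    (non-overlapping) generation picks a parent w.p. proportional to the
    parent's fitness, then the offspring keeps the parent's strategy
    w.p. 1 - u or adopts a uniformly random strategy w.p. u."""
    N = len(strategies)
    x = [strategies.count(k) / N for k in range(n)]
    payoffs = [sum(payoff[k][j] * x[j] for j in range(n)) for k in range(n)]
    # assumed payoff-to-fitness map: 1 - delta + delta * payoff
    weights = [1 - delta + delta * payoffs[s] for s in strategies]
    parents = random.choices(strategies, weights=weights, k=N)
    return [random.randrange(n) if random.random() < u else s for s in parents]

# Hypothetical 2-strategy example: 100 players, weak selection.
pop = [0] * 50 + [1] * 50
pop = wf_generation(pop, [[1.0, 2.0], [0.0, 1.0]], delta=0.1, u=0.01, n=2)
```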

The expected number of offspring of a *k*-player in the next generation due to selection is

$${\omega}_{k}=1+\delta [{(\mathbf{Ax})}_{k}-{\mathbf{x}}^{T}\mathbf{Ax}].$$

(41)

This is the same as the analogous expression (5) for the Moran process, apart from an extra factor of *N*. That factor is due to the definition of time: time is measured in single player update steps in the Moran model, but in generations in the W-F model. For the neutral correlations, the only difference between the two models in the large *N* limit is that in the W-F model both lineages can mutate in each step. Hence all the neutral correlations *s*_{2} and *s*_{3} are the same as in the Moran model of Appendix A, provided we use *μ* = 2*N u*. Consequently,
${\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}$ becomes *N* times larger than for the Moran process (19).

Taking into account mutations as well, the expected total change of frequency in one generation is

$$\mathrm{\Delta}{x}_{k}^{\text{tot}}=\mathrm{\Delta}{x}_{k}^{\text{sel}}(1-u)+u\left(\frac{1}{n}-{x}_{k}\right),$$

(42)

similarly to (10). Hence the average frequency of *k*-players in the stationary state is

$${\langle {x}_{k}\rangle}_{\delta}=\frac{1}{n}+\frac{1-u}{u}{\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta},$$

(43)

which is identical to (11) apart from an extra *N* factor. Since we also have an extra *N* factor in
${\langle \mathrm{\Delta}{x}_{k}^{\text{sel}}\rangle}_{\delta}$ for the W-F process, these factors cancel out, and we obtain the same stationary density (20) as for the Moran process, but with 2*N u* instead of *N u* (similarly to Antal et al. (2008b)). This also implies that the condition for greater abundance (3) becomes *L _{k}* + 2*N u H _{k}* > 0.

The results are likewise identical for the Moran and the Pairwise Comparison processes. In this latter model we pick a random pair of players, say a type *j* and a type *k* player. The *j* player then adopts strategy *k* w.p. $\mathcal{F}({f}_{j}-{f}_{k})$, and the *k* player adopts strategy *j* w.p. $\mathcal{F}({f}_{k}-{f}_{j})$, where $\mathcal{F}$ is a smooth, decreasing function of the payoff difference, for example the Fermi function $\mathcal{F}(z)={[1+{e}^{\delta z}]}^{-1}$ (Szabó and Tőke, 1998; Traulsen et al., 2006, 2007).

Let us calculate directly the change of the frequency of *k* players due to selection
$\mathrm{\Delta}{x}_{k}^{\text{sel}}$ in state **X**. The number of *k* players changes if we pick a *k* player and a *j* ≠ *k* player, which happens w.p. 2*x _{k}x_{j}*. The frequency *x _{k}* then changes by ±1/*N*, hence

$$\mathrm{\Delta}{x}_{k}^{\text{sel}}=\frac{2{x}_{k}}{N}\sum _{j\ne k}{x}_{j}[\mathcal{F}({f}_{j}-{f}_{k})-\mathcal{F}({f}_{k}-{f}_{j})]$$

(44)

which, in the leading order of small *δ*, becomes

$$\mathrm{\Delta}{x}_{k}^{\text{sel}}=\frac{\delta {x}_{k}}{N}\sum _{j\ne k}{x}_{j}({f}_{k}-{f}_{j})=\frac{\delta {x}_{k}}{N}({f}_{k}-\sum _{j}{x}_{j}{f}_{j}).$$

(45)

With the above definition of fitness we arrive at the same expression we obtained for the Moran process (6) and (7). Since without selection this model is equivalent to the Moran model, all neutral correlations *s*_{2} and *s*_{3} are also the same. Mutations in this model have the same effect as in the Moran model (10). Consequently all results we obtained for the Moran model are valid for the Pairwise Comparison process as well.

We have studied evolutionary game dynamics in well-mixed populations with *n* strategies. We derive simple linear conditions which hold in the limit of weak selection but for any mutation rate. These conditions specify whether a strategy is more or less abundant than 1/*n* in the mutation-selection equilibrium. In the absence of selection, the equilibrium abundance of each strategy is 1/*n*. An abundance greater than 1/*n* means that selection favors the strategy; an abundance less than 1/*n* means that selection opposes it. We find that selection favors strategy *k* if *L _{k}* + *N u H _{k}* > 0, where *L _{k}* and *H _{k}* are the linear combinations of payoff values given in (1) and (2), *N* is the population size, and *u* is the mutation probability.
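As a practical illustration, the condition can be evaluated mechanically for any payoff matrix. The sketch below assumes the definitions of *L _{k}* and *H _{k}* stated earlier in the paper (*L _{k}* averages *a _{kk}* + *a _{kj}* − *a _{jk}* − *a _{jj}* over *j*; *H _{k}* is the row average minus the grand average); the Prisoner's Dilemma matrix is our own example.

```python
import numpy as np

def selection_condition(A, N, u):
    """Evaluate L_k + N*u*H_k for every strategy k, assuming
    L_k = (1/n) sum_j (a_kk + a_kj - a_jk - a_jj) and
    H_k = (row k average) - (average over all entries)."""
    A = np.asarray(A, dtype=float)
    d = np.diag(A)
    L = d + A.mean(axis=1) - A.mean(axis=0) - d.mean()
    H = A.mean(axis=1) - A.mean()
    return L + N * u * H

# Simplified Prisoner's Dilemma with benefit b = 3 and cost c = 1:
# defection (strategy 1) is favored, cooperation (strategy 0) opposed.
b, c = 3.0, 1.0
res = selection_condition([[b - c, -c], [b, 0.0]], N=100, u=0.01)
assert res[1] > 0 > res[0]
```

Note that the values sum to zero over strategies, reflecting the fact that not every strategy can be more abundant than 1/*n*.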

The traditional approach to study deterministic game dynamics in large populations is based on the replicator equation (Hofbauer and Sigmund, 1998), which describes selection dynamics of the average frequencies of strategies. (Note the formal similarity between (7) and the replicator equation). This method, however, neglects fluctuations around the averages. In this paper we have taken into account stochastic fluctuations, and derived exact results in the limit of weak selection. We find the average frequencies of strategies in the stationary state, and conditions for a strategy to be more abundant than another strategy. Our conditions are valid for arbitrary values of the mutation rates. For small mutation rates these conditions describe which strategy has higher fixation probability (Nowak et al., 2004).

Throughout the paper we have considered large population size, *N,* in order to simplify the presentation. But in principle all calculations can be performed for any given population size *N* and mutation probability *u* (see for example Antal et al. (2008b)). This finite *N* calculation, however, is much easier for the Wright-Fisher than for the Moran process for technical reasons. The mutation probability is a parameter between 0 and 1. In a social context, mutation can also mean ‘exploration’: people explore the strategy space by experimenting with new strategies (Traulsen et al., 2009). A high mutation probability seems to be appropriate for social evolutionary dynamics. Our conditions can be applied for the initial analysis of any evolutionary game that is specified by an *n* × *n* payoff matrix.

We are grateful for support from the John Templeton Foundation, the NSF/NIH (R01GM078986) joint program in mathematical biology, the Bill and Melinda Gates Foundation (Grand Challenges grant 37874), the Emmy-Noether program of the DFG, the Japan Society for the Promotion of Science, and J. Epstein.

This section is valid for any number *n* ≥ 1 of strategies. We calculate the probabilities *s*_{2} and *s*_{3} in the neutral (*δ* = 0) stationary state. First consider the simpler quantity *s*_{2}, that is, the probability that two randomly chosen players have the same strategy. We shall use the Moran model and apply coalescent ideas (Kingman, 1982a, b, 2000; Haubold and Wiehe, 2006; Antal et al., 2008b). Coalescence means that different family lines collide in the past; the key fact behind this idea is that in a finite population any set of individuals always has a common ancestor. In the absence of mutations, any two players have the same strategy in the stationary state, because both inherit their strategy from their common ancestor. In the presence of mutations, two players may have different strategies due to mutations occurring after their ancestral lineage branched. Therefore, tracing the lineages of two players backward in time to their most recent common ancestor enables us to compute the probability that the two players have the same strategy.

Consider two different individuals and trace their lineages backward in time. In the neutral Moran process, two lineages coalesce in an elementary update step (i.e. the two players share the same parent) with probability 2/*N*^{2}. Here and in what follows we assume that the population size is large, hence we can use a continuous time description with the rescaled time *τ* = *t*/(*N*^{2}/2). In the rescaled time, the trajectories of two players coalesce at rate 1. Following the trajectory of an individual back in time, mutations happen at rate *μ*/2 = *N u*/2 along each trajectory.

The coalescence time *τ*_{2} is described by the density function

$${f}_{2}({\tau}_{2})={e}^{-{\tau}_{2}}.$$

(A.1)

Immediately after the coalescence of two players, both have the same strategy. What is the probability *s*_{2}(*τ*) that after a fixed time *τ* they again have the same strategy? W.p. *e*^{−*μτ*} neither of them has mutated, so they still share the same strategy. Otherwise at least one of them has mutated, in which case they have the same strategy w.p. 1/*n*. Hence

$${s}_{2}(\tau )={e}^{-\mu \tau}+\frac{1-{e}^{-\mu \tau}}{n}.$$

(A.2)

Now we obtain the stationary probability *s*_{2} by integrating this expression with the coalescent time density of (A.1) as

$${s}_{2}={\int}_{0}^{\infty}{s}_{2}(\tau ){f}_{2}(\tau )d\tau =\frac{n+\mu}{n(1+\mu )}.$$

(A.3)
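The closed form (A.3) can be checked by straightforward numerical integration; the sketch below is our own, using a plain trapezoid rule.

```python
import math

def s2_numeric(n, mu, T=60.0, steps=200_000):
    """Trapezoid rule for the integral in (A.3): integrate
    s2(tau) = e^{-mu tau} + (1 - e^{-mu tau})/n against the coalescence
    time density f2(tau) = e^{-tau}. The tail beyond T is negligible
    because f2 decays exponentially."""
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        s2_t = math.exp(-mu * t) + (1.0 - math.exp(-mu * t)) / n
        w = 0.5 if i in (0, steps) else 1.0
        total += w * s2_t * math.exp(-t) * h
    return total

# Agrees with the closed form (n + mu) / (n (1 + mu)) of (A.3).
for n in (2, 3, 5):
    for mu in (0.1, 1.0, 10.0):
        assert abs(s2_numeric(n, mu) - (n + mu) / (n * (1 + mu))) < 1e-6
```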

Next we calculate the probability *s*_{3} that three randomly chosen players have the same strategy. Any two trajectories of three players coalesce at rate 1, hence there is a coalescence at rate 3. The coalescence of two out of the three trajectories then happens at time *τ*_{3}, described by the density function

$${f}_{3}({\tau}_{3})=3{e}^{-3{\tau}_{3}}.$$

(A.4)

The remaining two trajectories then coalesce at time *τ*_{2} earlier, with density function (A.1). Before the first coalescence at time *τ*_{3} backward, the two players have the same strategy w.p. *s*_{2}, and of course they are different w.p. 1 − *s*_{2}, where *s*_{2} is given by (A.3). Hence just after this coalescence event we have either three identical players w.p. *s*_{2}, or two identical and one different player otherwise. Now we shall see what happens in these two scenarios.

If we have three identical players then they are also identical after time *τ* w.p.

$${s}_{3}^{\ast}(\tau )=\frac{1}{{n}^{2}}\left[1+3(n-1){e}^{-\mu \tau}+(n-1)(n-2){e}^{-{\scriptstyle \frac{3}{2}}\mu \tau}\right].$$

(A.5)

To derive this expression, note that w.p.
${e}^{-{\scriptstyle \frac{3}{2}}\mu \tau}$ none of the players has mutated, hence all three have the same strategy. Then w.p.
$3(1-{e}^{-{\scriptstyle \frac{\mu}{2}}\tau}){e}^{-\mu \tau}$ exactly one of them has mutated, in which case all three are the same w.p. 1/*n*. Otherwise at least two of them have mutated, in which case all three are the same w.p. 1/*n*^{2}. Collecting these terms yields (A.5).

Similarly, if after the first coalescence only two players share the same strategy and one has a different strategy, the probability of all three having the same strategy after time *τ* is

$${s}_{3}^{\ast \ast}(\tau )=\frac{1}{{n}^{2}}\left[1+(n-3){e}^{-\mu \tau}-(n-2){e}^{-{\scriptstyle \frac{3}{2}}\mu \tau}\right].$$

(A.6)

Now we can simply obtain *s*_{3} by first integrating over the coalescent time distribution (A.4) for the two different initial conditions, and then weighting them with the probabilities of the initial conditions, namely

$${s}_{3}={s}_{2}{\int}_{0}^{\infty}{s}_{3}^{\ast}(\tau ){f}_{3}(\tau )d\tau +(1-{s}_{2}){\int}_{0}^{\infty}{s}_{3}^{\ast \ast}(\tau ){f}_{3}(\tau )d\tau =\frac{(n+\mu )\phantom{\rule{0.16667em}{0ex}}(2n+\mu )}{{n}^{2}(1+\mu )\phantom{\rule{0.16667em}{0ex}}(2+\mu )}.$$

(A.7)
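Equation (A.7) can be verified the same way, integrating (A.5) and (A.6) against the density (A.4) numerically (again our own trapezoid-rule sketch):

```python
import math

def s3_numeric(n, mu, T=40.0, steps=200_000):
    """Check (A.7): integrate s3* of (A.5) and s3** of (A.6) against
    f3(tau) = 3 e^{-3 tau}, then weight by s2 and 1 - s2 from (A.3)."""
    s2 = (n + mu) / (n * (1 + mu))
    h = T / steps
    int1 = int2 = 0.0
    for i in range(steps + 1):
        t = i * h
        w = (0.5 if i in (0, steps) else 1.0) * 3.0 * math.exp(-3.0 * t) * h
        e1 = math.exp(-mu * t)
        e2 = math.exp(-1.5 * mu * t)
        int1 += w * (1 + 3 * (n - 1) * e1 + (n - 1) * (n - 2) * e2) / n**2
        int2 += w * (1 + (n - 3) * e1 - (n - 2) * e2) / n**2
    return s2 * int1 + (1 - s2) * int2

# Agrees with the closed form of (A.7).
for n in (2, 3, 5):
    for mu in (0.5, 2.0):
        exact = (n + mu) * (2 * n + mu) / (n**2 * (1 + mu) * (2 + mu))
        assert abs(s3_numeric(n, mu) - exact) < 1e-6
```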


- Antal T, Nowak MA, Traulsen A. Strategy abundance in 2×2 games for arbitrary mutation rates. 2008a e-print arXiv:0809.2804. [PMC free article] [PubMed]
- Antal T, Ohtsuki H, Wakeley J, Taylor PD, Nowak MA. Evolutionary game dynamics in phenotype space. 2008b e-print arXiv:0806.2636.
- Antal T, Scheuring I. Fixation of strategies for an evolutionary game in finite populations. Bull Math Biol. 2006;68:1923–1944. [PubMed]
- Aumann RJ, Maschler M. Repeated Games with Incomplete Information. Cambridge: MIT press; 1995.
- Binmore K. Game theory and social contract. Cambridge: MIT press; 1994.
- Bonhoeffer S, Nowak MA. Mutation and the evolution of parasite virulence. Proc Royal Soc Lond B. 1995;258:133–140.
- Cressman R. The stability concept of evolutionary game theory. Lecture Notes in Biomathematics. 1992:94.
- Ficici S, Pollack J. Effects of finite populations on evolutionary stable strategies. In: Whitley D, Goldberg D, Cantu-Paz E, Spector L, Parmee I, Beyer H-G, editors. Proceedings GECCO. Morgan-Kaufmann; San Francisco: 2000. pp. 927–934.
- Fogel G, Andrews P, Fogel D. On the instability of evolutionary stable strategies in small populations. Ecol Model. 1998;109:283–294.
- Fudenberg D, Imhof LA. Imitation processes with small mutations. J Econ Theor. 2006;131:251–262.
- Fudenberg D, Tirole J. Game theory. Cambridge: MIT press; 1991.
- Haubold B, Wiehe T. Introduction to Computational Biology: An evolutionary approach. Birkhäuser; 2006.
- Hauert Ch, De Monte S, Hofbauer J, Sigmund K. Volunteering as Red Queen Mechanism for Cooperation in Public Goods Game. Science. 2002;296:1129–1132. [PubMed]
- Hofbauer J, Sigmund K. Evolutionary Games and Population Dynamics. Cambridge University Press; Cambridge: 1998.
- Hofbauer J, Sigmund K. Evolutionary game dynamics. Bull Am Math Soc. 2003;40:479–519.
- Imhof LA, Nowak MA. Evolutionary game dynamics in a Wright-Fisher process. J Math Biol. 2006;52:667–681. [PMC free article] [PubMed]
- Imhof LA, Fudenberg D, Nowak MA. Evolutionary cycles of cooperation and defection. PNAS. 2005;102:10797–10800. [PubMed]
- Kandori M, Rob R. Evolution of equilibria in the long run: A general theory and applications. J Econ Theor. 1995;65:383–414.
- Kandori M, Mailath GJ, Rob R. Learning, mutation, and long run equilibria in games. Econometrica. 1993;61:29–56.
- Kerr B, Riley MA, Feldman MW, Bohannan BJM. Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors. Nature. 2002;418:171–174. [PubMed]
- Kingman JFC. The coalescent. Stochastic Processes and Their Applications. 1982a;13:235–248.
- Kingman JFC. On the genealogy of large populations. J Appl Probability. 1982b;19A:27–43.
- Kingman JFC. Origins of the coalescent. 1974–1982. Genetics. 2000;156(4):1461–1463. [PubMed]
- Lessard S, Ladret V. The probability of fixation of a single mutant in an exchangeable selection model. J Math Biol. 2007;54:721–744. [PubMed]
- May RM. Stability and Complexity in Model Ecosystems. Princeton Univ. Press; 1973.
- May RM, Nowak MA. Coinfection and the evolution of parasite virulence. Proc Royal Soc Lond B. 1995;261:209–215. [PubMed]
- Maynard Smith J. The theory of games and the evolution of animal conflicts. J Theor Biol. 1974;47:209–221. [PubMed]
- Maynard Smith J. Evolution and the Theory of Games. Cambridge University Press; Cambridge: 1982.
- Maynard Smith J, Price GR. The logic of animal conflict. Nature. 1973;246:15–18.
- Nash JF. Equilibrium points in *n*-person games. PNAS. 1950;36:48–49. [PubMed]
- Nowak MA. Evolutionary Dynamics. Harvard University Press; Cambridge, MA: 2006.
- Nowak MA, Anderson RM, McLean AR, Wolfs T, Goudsmit J, May RM. Antigenic diversity thresholds and the development of aids. Science. 1991;254:963–969. [PubMed]
- Nowak MA, May RM. Superinfection and the evolution of parasite virulence. Proc Royal Soc Lond B. 1994;255:81–89. [PubMed]
- Nowak MA, Sasaki A, Taylor C, Fudenberg D. Emergence of cooperation and evolutionary stability in finite populations. Nature. 2004;428:646–650. [PubMed]
- Nowak MA, Sigmund K. Game-dynamical aspects of the prisoner’s dilemma. Appl Math Comp. 1989;30:191–213.
- Nowak MA, Sigmund K. Evolutionary dynamics of biological games. Science. 2004;303:793–799. [PubMed]
- Riley JG. Evolutionary equilibrium strategies. J Theor Biol. 1979;76:109–123. [PubMed]
- Rousset F. Genetic structure and selection in subdivided populations. Princeton University Press; 2004.
- Samuelson L. Evolutionary games and equilibrium selection. Cambridge: MIT press; 1997.
- Schaffer M. Evolutionary stable strategies for a finite population and variable contest size. J Theor Biol. 1988;132:469–478. [PubMed]
- Schelling TC. The Strategy of Conflict. Harvard University Press; 1980.
- Schreiber S. Urn models, replicator processes, and random genetic drift. Siam J Appl Math. 2001;61:2148–2167.
- Szabó G, Tőke C. Evolutionary Prisoner’s Dilemma game on a square lattice. Phys Rev E. 1998;58:69.
- Taylor C, Fudenberg D, Sasaki A, Nowak MA. Evolutionary game dynamics in finite populations. Bull Math Biol. 2004;66:1621–1644. [PubMed]
- Taylor PD, Jonker L. Evolutionary stable strategies and game dynamics. Math Biosci. 1978;40:145–156.
- Traulsen A, Claussen JC, Hauert C. Coevolutionary dynamics: From finite to infinite populations. Phys Rev Lett. 2005;95:238701. [PubMed]
- Traulsen A, Nowak MA, Pacheco JM. Stochastic dynamics of invasion and fixation. Phys Rev E. 2006;74:011909. [PMC free article] [PubMed]
- Traulsen A, Pacheco JM, Nowak MA. Pairwise comparison and selection temperature in evolutionary game dynamics. J Theor Biol. 2007;246:522–529. [PMC free article] [PubMed]
- Traulsen A, Hauert C, De Silva H, Nowak MA, Sigmund K. Exploration dynamics in evolutionary games. PNAS. 2009;106:709–712. [PubMed]
- Turner PE, Chao L. Prisoner’s Dilemma in an RNA virus. Nature. 1999;398:441–443. [PubMed]
- Turner PE, Chao L. Escape from prisoner’s dilemma in RNA phage *ϕ*6. Am Nat. 2003;161:497–505. [PubMed]
- van Kampen NG. Stochastic Processes in Physics and Chemistry. 2nd ed. North-Holland; Amsterdam: 1997.
- Weibull JW. Evolutionary game theory. Cambridge: MIT press; 1995.
- Wild G, Taylor PD. Fitness and evolutionary stability in game theoretic models of finite populations. Proc Roy Soc Lond B. 2004;271:2345–2349. [PMC free article] [PubMed]
