The present column is devoted to Game Theory.

I Six new problems – solutions solicited

Solutions will appear in a subsequent issue.

245

We consider a setting where there is a set of $m$ candidates

$$
C=\{c_{1},\dots,c_{m}\},\quad m\geq 2,
$$

and a set of $n$ voters $[n]=\{1,\dots,n\}$. Each voter ranks all candidates from the most preferred one to the least preferred one; we write $a\succ_{i}b$ if voter $i$ prefers candidate $a$ to candidate $b$. A collection of all voters’ rankings is called a preference profile. We say that a preference profile is single-peaked if there is a total order $\vartriangleleft$ on the candidates (called the axis) such that for each voter $i$ the following holds: if $i$’s most preferred candidate is $c$ and $a\vartriangleleft b\vartriangleleft c$ or $c\vartriangleleft b\vartriangleleft a$, then $b\succ_{i}a$. That is, each ranking has a single ‘peak’, and then ‘declines’ in either direction from that peak.
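
For readers who want to experiment, here is a small sketch (in Python; the helper name and the representation of rankings as lists are my own choices) that checks, for a given axis, whether one voter’s ranking satisfies the definition above; a profile is single-peaked if some axis makes every ranking pass.

```python
def is_single_peaked_ranking(ranking, axis):
    """Check whether `ranking` (candidates from most to least preferred)
    is single-peaked with respect to `axis` (candidates in axis order)."""
    pos = {c: k for k, c in enumerate(axis)}       # position of each candidate on the axis
    rank = {c: k for k, c in enumerate(ranking)}   # smaller value = more preferred
    peak = pos[ranking[0]]                         # the voter's most preferred candidate
    for a in ranking:
        for b in ranking:
            pa, pb = pos[a], pos[b]
            # if b lies strictly between a and the peak, the voter must prefer b to a
            if (pa < pb < peak) or (peak < pb < pa):
                if rank[b] > rank[a]:
                    return False
    return True

axis = ["c1", "c2", "c3"]
print(is_single_peaked_ranking(["c2", "c3", "c1"], axis))   # True
print(is_single_peaked_ranking(["c1", "c3", "c2"], axis))   # False: c2 is skipped on the way down
```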

(i) In general, if we aggregate voters’ preferences over candidates, the resulting majority relation may have cycles: e.g., if $a\succ_{1}b\succ_{1}c$, $b\succ_{2}c\succ_{2}a$ and $c\succ_{3}a\succ_{3}b$, then a strict majority (2 out of 3) of voters prefer $a$ to $b$, a strict majority prefer $b$ to $c$, yet a strict majority prefer $c$ to $a$. Argue that this cannot happen if the preference profile is single-peaked. That is, prove that if a profile is single-peaked, a strict majority of voters prefer $a$ to $b$, and a strict majority of voters prefer $b$ to $c$, then a strict majority of voters prefer $a$ to $c$.

(ii) Suppose that $n$ is odd and voters’ preferences are known to be single-peaked with respect to an axis $\vartriangleleft$. Consider the following voting rule: we ask each voter $i$ to report their top candidate $t(i)$, find a median voter $i^{*}$, i.e.

$$
\lvert\{i:t(i)\vartriangleleft t(i^{*})\}\rvert<\frac{n}{2}\quad\text{and}\quad\lvert\{i:t(i^{*})\vartriangleleft t(i)\}\rvert<\frac{n}{2},
$$

and output $t(i^{*})$. Argue that under this voting rule no voter can benefit from voting dishonestly: if a voter $i$ reports some candidate $a\neq t(i)$ instead of $t(i)$, this either does not change the outcome or results in an outcome that $i$ likes less than the outcome of the truthful voting.
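
A minimal sketch of the rule itself (Python; the helper name is mine): sort the reported tops along the axis and return the middle one, which satisfies the two displayed inequalities when $n$ is odd.

```python
def median_voter_winner(tops, axis):
    """tops: the reported top candidates t(1), ..., t(n), with n odd;
    axis: the candidates listed in axis order. Returns t(i*) for a median voter i*."""
    n = len(tops)
    assert n % 2 == 1, "the rule is stated for an odd number of voters"
    pos = {c: k for k, c in enumerate(axis)}
    return sorted(tops, key=lambda c: pos[c])[n // 2]

# Example: axis c1 < c2 < c3 < c4, reported tops of five voters.
print(median_voter_winner(["c1", "c4", "c2", "c2", "c3"], ["c1", "c2", "c3", "c4"]))  # "c2"
```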

(iii) We say that a preference profile is 1D-Euclidean if each candidate $c_{j}$ and each voter $i$ can be associated with a point in $\mathbb{R}$ so that the preferences are determined by distances, i.e. there is an embedding $x\colon C\cup[n]\to\mathbb{R}$ such that for all $a,b\in C$ and $i\in[n]$ we have $a\succ_{i}b$ if and only if $\lvert x(i)-x(a)\rvert<\lvert x(i)-x(b)\rvert$. Argue that a 1D-Euclidean profile is necessarily single-peaked. Show that the converse is not true, i.e. there exists a single-peaked profile that is not 1D-Euclidean.

(iv) Let $P$ be a single-peaked profile, and let $L$ be the set of candidates ranked last by at least one voter. Prove that $\lvert L\rvert\leq 2$.

(v) Consider an axis $c_{1}\vartriangleleft\dots\vartriangleleft c_{m}$. Prove that there are exactly $2^{m-1}$ distinct votes that are single-peaked with respect to this axis. Explain how to sample from the uniform distribution over these votes.
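
For part (v), here is a sketch of one natural sampling procedure the reader may want to compare against their own counting argument (Python; candidates are represented by the indices $0,\dots,m-1$ in axis order): build the ranking from the least preferred candidate upwards, at each step removing the left or the right end of the remaining interval of the axis with a fair coin flip.

```python
import random

def sample_single_peaked_vote(m, rng=random):
    """Return a ranking of the candidates 0, 1, ..., m-1 (most preferred first)
    drawn uniformly from the rankings that are single-peaked with respect to
    the axis 0 < 1 < ... < m-1."""
    left, right = 0, m - 1       # the not-yet-ranked candidates form the interval [left, right]
    worst_to_best = []           # fill the ranking from the bottom (least preferred) up
    while left < right:
        if rng.random() < 0.5:   # fair coin: discard the left or the right end of the interval
            worst_to_best.append(left)
            left += 1
        else:
            worst_to_best.append(right)
            right -= 1
    worst_to_best.append(left)   # the last remaining candidate is the peak
    return worst_to_best[::-1]   # most preferred first
```

Each of the $m-1$ coin flips is a genuinely binary choice here, which is a useful sanity check against the counting claim in part (v).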

These problems are based on references [4 H. Moulin, Axioms of Cooperative Decision Making. Cambridge University Press, Cambridge (1991)] (parts (i) and (ii)), [2 C. H. Coombs, Psychological scaling without a unit of measurement. Psychological Review 57, 145 (1950)] (part (iii)) and [1 V. Conitzer, Eliciting single-peaked preferences using comparison queries. J. Artificial Intelligence Res. 35, 161–191 (2009), 5 T. Walsh, Generating single peaked votes. arXiv:1503.02766 (2015)] (part (v)); part (iv) is folklore. See also the survey [3 E. Elkind, M. Lackner and D. Peters, Structured preferences. In Trends in Computational Social Choice, edited by U. Endriss, Chapter 10, AI Access, 187–207 (2017)].

Edith Elkind (University of Oxford, UK)

246

Consider a standard prisoners’ dilemma game described by the following strategic form, with $\delta>\beta>0>\gamma$: mutual cooperation pays $\beta$ to each player, mutual defection pays $0$ to each, and when a cooperator meets a defector the cooperator receives $\gamma$ while the defector receives $\delta$.

Assume that any given agent either plays $C$ or $D$ and that agents reproduce at a rate determined by their payoff from the strategic form of the game plus a constant $f$. Suppose that members of an infinite population are assorted into finite groups of size $n$. Let $q$ denote the proportion of agents playing strategy $C$ (“altruists”) in the population as a whole and $q_{i}$ denote the proportion of agents playing $C$ in group $i$. We assume that currently $q\in(0,1)$.

The process of assortment is abstract, but we assume that it has finite expectation $E[q_{i}]=q$ and variance $\operatorname{Var}[q_{i}]=\sigma^{2}$. Members within each group are then randomly paired off to play one iteration of the prisoners’ dilemma against another member of their group. All agents then return to the overall population.

  1. Find a condition relating $q$, $\sigma^{2}$, $\beta$, $\gamma$, $\delta$ and $n$ under which the proportion of altruists in the overall population rises after a round of play.

  2. Now interpret this game as one where each player can confer a benefit $b$ upon the other player by individually incurring a cost $c$, with $b>c>0$, so that $\beta=b-c$, $\delta=b$ and $\gamma=-c$. Prove that, as long as (i) there is some positive assortment in group formation and (ii) the ratio $\frac{c}{b}$ is low enough, then the proportion of altruists in the overall population will rise after a round of play.
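
The following is a rough simulation sketch of one round (Python with NumPy). It assumes the strategic form described above, and, purely for illustration, a particular assortment process (a mixture of single-type groups and well-mixed groups); it also replaces the random pairing by each agent’s expected payoff against a random other group member, which has the same mean effect. All function and parameter names are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def next_share_of_altruists(q, n, beta, gamma, delta, f, a, groups=200_000):
    """Estimate the altruist share after one round of group play and
    payoff-proportional reproduction (fitness = payoff + f)."""
    # assumed assortment: with probability a the group consists of a single type
    # (all C with probability q); otherwise members are sampled i.i.d. from the population
    assorted = rng.random(groups) < a
    k = np.where(assorted,
                 n * (rng.random(groups) < q),        # all-C or all-D groups
                 rng.binomial(n, q, size=groups))     # well-mixed groups
    # expected payoff of a C (resp. D) against a uniformly random other member
    # of a group containing k cooperators out of n
    pay_C = beta * (k - 1) / (n - 1) + gamma * (n - k) / (n - 1)
    pay_D = delta * k / (n - 1)
    fitness_C = np.sum(k * (f + pay_C))
    fitness_all = fitness_C + np.sum((n - k) * (f + pay_D))
    return fitness_C / fitness_all

print(next_share_of_altruists(q=0.5, n=4, beta=2.0, gamma=-1.0, delta=3.0, f=5.0, a=0.6))
```

Comparing the output with $q$ for different values of the assortment parameter `a` (and hence of $\sigma^{2}$) is a useful check on the condition derived in part 1.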

Richard Povey (Hertford College and St Hilda’s College, University of Oxford, UK)

247

Consider a village consisting of $n$ farmers who live along a circle of length $n$. The farmers live at positions $1,2,\ldots,n$. Each of them is friends with the person to the left and right of them, and each friendship has capacity $m$, where $m$ is a non-negative integer. At the end of the year, each farmer does either well (her wealth is $+1$ dollars) or not well (her wealth is $-1$ dollars) with equal probability. Farmers’ wealth realizations are independent of each other. Hence, for a large circle, on average half of the farmers are in each state.

The farmers share risk by transferring money to their direct neighbors. The goal of risk-sharing is to create as many farmers with OK wealth (0 dollars) as possible. Transfers have to be in integer dollars and cannot exceed the capacity of each link (which is $m$).

A few examples with a village of size $n=4$ serve to illustrate risk-sharing.

  • Consider the case where farmers 1 to 4 have wealth

    $$(+1,-1,+1,-1).$$

    In that case, we can share risk completely with farmer 1 sending a dollar to farmer 2 and farmer 3 sending a dollar to farmer 4. This works for any $m\geq 1$.

  • Consider the case where farmers 1 to 4 have wealth

    $$(+1,+1,-1,-1).$$

    In that case, we can share risk completely with farmer 1 sending a dollar to farmer 2, farmer 2 sending two dollars to farmer 3 and farmer 3 sending one dollar to farmer 4. In this case, we need $m\geq 2$. If $m=1$, we can only share risk among half the people in the village.

Show that for any wealth realization an optimal risk-sharing arrangement can be found as the solution to a maximum flow problem.
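
A sketch of the computation in Python using networkx, assuming the max-flow formulation that the problem asks you to justify (the function name is mine): each rich farmer can supply one dollar, each poor farmer can absorb one, and a friendship link can carry up to $m$ dollars.

```python
import networkx as nx

def max_balanced_farmers(wealth, m):
    """wealth: list of +1/-1 realisations for farmers 0..n-1 arranged on a circle;
    m: capacity of each friendship link.
    Returns the largest possible number of farmers ending with 0 dollars."""
    n = len(wealth)
    G = nx.DiGraph()
    G.add_node("s")
    G.add_node("t")
    for i in range(n):
        j = (i + 1) % n
        # model the undirected link of capacity m by two opposite arcs of capacity m
        G.add_edge(i, j, capacity=m)
        G.add_edge(j, i, capacity=m)
    for i, w in enumerate(wealth):
        if w == +1:
            G.add_edge("s", i, capacity=1)   # a lucky farmer can give away at most one dollar
        else:
            G.add_edge(i, "t", capacity=1)   # an unlucky farmer needs exactly one dollar
    flow_value, _ = nx.maximum_flow(G, "s", "t")
    return 2 * flow_value                    # each unit of flow balances one +1/-1 pair

print(max_balanced_farmers([+1, -1, +1, -1], m=1))   # 4: complete risk-sharing, first example above
print(max_balanced_farmers([+1, +1, -1, -1], m=2))   # 4: the second example above
```

Running the same function over many random wealth draws also gives a Monte Carlo estimate of the expected share of unmatched farmers, $U(n,m)$, considered in Problem 248 below.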

Tanya Rosenblat (School of Information and Department of Economics, University of Michigan, USA)

248

This exercise is a continuation of Problem 247, where we studied risk-sharing among farmers who live in a circular village and are friends with their direct neighbors to the left and right, with friendships of a certain capacity. Assume that for any realization of wealth levels the best possible risk-sharing arrangement is implemented, and denote the expected share of unmatched farmers by $U(n,m)$. Show that $U(n,m)\to\frac{1}{2m+1}$ as $n\to\infty$.

Tanya Rosenblat (School of Information and Department of Economics, University of Michigan, USA)

249

In a combinatorial auction there are $m$ items for sale to $n$ buyers. Each buyer $i$ has some valuation function $v_{i}(\cdot)$ which takes as input a set $S$ of items and outputs that bidder’s value for that set. These functions will always be monotone ($v_{i}(S\cup T)\geq v_{i}(S)$ for all $S,T$), and satisfy $v_{i}(\emptyset)=0$.

Definition 1 (Walrasian equilibrium). A price vector $\vec{p}\in\mathbb{R}_{\geq 0}^{m}$ and a list $B_{1},\ldots,B_{n}$ of subsets of $[m]$ form a Walrasian equilibrium for $v_{1},\ldots,v_{n}$ if the following two properties hold:

  • Each $B_{i}\in\arg\max_{S}\{v_{i}(S)-\sum_{j\in S}p_{j}\}$.

  • The sets $B_{i}$ are disjoint, and $\bigcup_{i}B_{i}=[m]$.

Prove that a Walrasian equilibrium exists for $v_{1},\ldots,v_{n}$ if and only if there exists an integral optimum (that is, a point such that each $x_{i,S}\in\{0,1\}$) to the following linear program:

$$
\begin{aligned}
\text{maximize}\quad & \sum_{i}\sum_{S}v_{i}(S)\cdot x_{i,S}\\
\text{subject to}\quad & \sum_{S}x_{i,S}=1 && \text{for all } i,\\
& \sum_{S\ni j}\sum_{i}x_{i,S}\leq 1 && \text{for all } j,\\
& x_{i,S}\geq 0 && \text{for all } i,S.
\end{aligned}
$$

Hint. Take the dual, and start from there.
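
As a companion to the hint, here is a sketch (Python with SciPy; all names are mine) that sets up and solves the LP relaxation by brute-force enumeration of bundles, which is feasible for small $m$. Whether the particular optimum returned happens to be integral is only suggestive; the problem asks for an exact characterization.

```python
from itertools import chain, combinations

import numpy as np
from scipy.optimize import linprog

def all_bundles(m):
    items = range(m)
    return [frozenset(s) for s in chain.from_iterable(combinations(items, r) for r in range(m + 1))]

def solve_lp_relaxation(valuations, m):
    """valuations: list of functions v_i(S), each defined on frozensets of {0, ..., m-1}."""
    n = len(valuations)
    bundles = all_bundles(m)
    var = [(i, S) for i in range(n) for S in bundles]        # one variable x_{i,S} per pair
    c = np.array([-valuations[i](S) for i, S in var])         # linprog minimises, so negate values
    A_eq = np.zeros((n, len(var)))                             # sum_S x_{i,S} = 1 for every bidder i
    A_ub = np.zeros((m, len(var)))                             # sum_i sum_{S containing j} x_{i,S} <= 1
    for col, (i, S) in enumerate(var):
        A_eq[i, col] = 1.0
        for j in S:
            A_ub[j, col] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.ones(m), A_eq=A_eq, b_eq=np.ones(n),
                  bounds=(0, None), method="highs")
    return -res.fun, dict(zip(var, res.x))

# Example: two items, two unit-demand bidders.
v = [lambda S: max(3.0 * (0 in S), 2.0 * (1 in S)),
     lambda S: max(2.0 * (0 in S), 3.0 * (1 in S))]
value, x = solve_lp_relaxation(v, 2)
print(value)                                                  # 6.0 for this example
```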

Matt Weinberg (Computer Science, Princeton University, USA)

250

Consider a game played on a network with a finite set of players $\mathcal{N}=\{1,2,\ldots,n\}$. Each node in the network represents a player and edges capture their relationships. We use $\mathbf{G}=(g_{ij})_{1\leq i,j\leq n}$ to represent the adjacency matrix of an undirected graph/network, i.e. $g_{ij}=g_{ji}\in\{0,1\}$. We assume $g_{ii}=0$. Thus, $\mathbf{G}$ is a zero-diagonal, square and symmetric matrix. Each player, indexed by $i$, chooses an action $x_{i}\in\mathbb{R}$ and obtains the following payoff:

$$
\pi_{i}(x_{1},x_{2},\ldots,x_{n})=x_{i}-\frac{1}{2}x_{i}^{2}+\delta\sum_{j\in\mathcal{N}}g_{ij}x_{i}x_{j}.
$$

The parameter $\delta>0$ captures the strength of the direct links between different players. For simplicity, we assume $0<\delta<\frac{1}{n-1}$.

A Nash equilibrium is a profile $\mathbf{x}^{*}=(x_{1}^{*},\ldots,x_{n}^{*})$ such that, for any $i=1,\ldots,n$,

$$
\pi_{i}(x_{1}^{*},\ldots,x_{n}^{*})\geq\pi_{i}(x_{1}^{*},\ldots,x_{i-1}^{*},x_{i},x_{i+1}^{*},\ldots,x_{n}^{*})\quad\text{for any}\ x_{i}\in\mathbb{R}.
$$

In other words, at a Nash equilibrium, no player $i$ has a profitable deviation from $x_{i}^{*}$.

Let $\mathbf{w}=(w_{1},w_{2},\ldots,w_{n})^{\prime}$, $w_{i}>0$ for all $i$ (the transpose of a vector $\mathbf{w}$ is denoted by $\mathbf{w}^{\prime}$), and $\mathbf{I}_{n}$ the $n\times n$ identity matrix. Define the weighted Katz–Bonacich centrality vector as

$$
\mathbf{b}(\mathbf{G},\mathbf{w})=[\mathbf{I}_{n}-\delta\mathbf{G}]^{-1}\mathbf{w}.
$$

Here $\mathbf{M}\coloneqq[\mathbf{I}-\delta\mathbf{G}]^{-1}$ denotes the inverse Leontief matrix associated with network $\mathbf{G}$, while $m_{ij}$ denotes its $ij$ entry, which is equal to the discounted number of walks from $i$ to $j$ with decay factor $\delta$. Let $\mathbf{1}_{n}=(1,1,\ldots,1)^{\prime}$ be a vector of $1$s. Then the unweighted Katz–Bonacich centrality vector can be defined as

$$
\mathbf{b}(\mathbf{G},\mathbf{1})=[\mathbf{I}-\delta\mathbf{G}]^{-1}\mathbf{1}_{n}.
$$
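
A small numerical sketch of the definition (Python with NumPy; names are mine): $\mathbf{b}(\mathbf{G},\mathbf{w})$ is just the solution of the linear system $[\mathbf{I}_{n}-\delta\mathbf{G}]\,\mathbf{b}=\mathbf{w}$.

```python
import numpy as np

def katz_bonacich(G, delta, w=None):
    """Weighted Katz-Bonacich centrality b(G, w) = (I - delta*G)^{-1} w.
    G: symmetric 0/1 adjacency matrix; w: positive weights (defaults to the all-ones vector)."""
    n = G.shape[0]
    if w is None:
        w = np.ones(n)                       # unweighted case b(G, 1)
    return np.linalg.solve(np.eye(n) - delta * G, w)

# Example: a path on three nodes with delta = 0.2 (< 1/(n-1) = 0.5).
G = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
print(katz_bonacich(G, 0.2))                 # the middle node has the largest centrality
```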

  1. Show that this network game has a unique Nash equilibrium $\mathbf{x}^{*}(\mathbf{G})$. Can you link this equilibrium to the Katz–Bonacich centrality vector defined above?

  2. Let $x^{*}(\mathbf{G})=\sum_{i=1}^{n}x_{i}^{*}(\mathbf{G})$ denote the sum of actions (total activity) at the unique Nash equilibrium in part 1. Now suppose that you can remove a single node, say $i$, from the network. Which node do you want to remove such that the sum of effort at the new Nash equilibrium is reduced the most? (Note that, after the deletion of node $i$, we remove all the links of node $i$, and the remaining network, denoted by $\mathbf{G}_{-i}$, can be obtained by deleting the $i$-th row and $i$-th column of $\mathbf{G}$.)

    Mathematically, you need to solve the key player problem

    $$\max_{i\in\mathcal{N}}\bigl(x^{*}(\mathbf{G})-x^{*}(\mathbf{G}_{-i})\bigr).$$

    In other words, you want to find a player who, once removed, leads to the highest reduction in total action in the remaining network.

    Hint. You may come up with an index $c_{i}$ for each $i$ such that the key player is the one with the highest $c_{i}$. This $c_{i}$ should be expressed using the Katz–Bonacich centrality vector defined above.

  3. Now instead of deleting a single node, we can delete any pair of nodes from the network. Can you identify the key pair, that is, the pair of nodes that, once removed, reduces total activity the most?

Yves Zenou (Monash University, Australia) and Junjie Zhou (National University of Singapore)

II Open problem

Equilibrium in Quitting Games

by Eilon Solan (School of Mathematical Sciences, Tel Aviv University, Israel). The author thanks János Flesch, Ehud Lehrer, and Abraham Neyman for commenting on earlier versions of the text, and acknowledges the support of the Israel Science Foundation, Grant #217/17.

Alaya, Black, and Catherine are involved in an endurance match, where each player has to decide if and when to quit, and the outcome depends on the set of players whose choice is larger than the minimum of the three choices. Formally, each of the three has to select an element of $\mathbb{N}\cup\{\infty\}$: the choice $\infty$ corresponds to the decision to never quit, and the choice $n\in\mathbb{N}$ corresponds to the decision to quit the match in round $n$. Denote by $n_{A}$ (resp. $n_{B}$, $n_{C}$) Alaya’s (resp. Black’s, Catherine’s) choice, and set $n_{*}\coloneqq\min\{n_{A},n_{B},n_{C}\}$. As a result of their choices, the players receive payoffs, which are determined by the set $\{i\in\{A,B,C\}\colon n_{i}>n_{*}\}$ and by whether $n_{*}<\infty$. As a concrete example, suppose that if $n_{*}=\infty$, the payoff of each player is 0, and if $n_{*}<\infty$, the payoffs are given by the table in Figure 1.

Figure 1. The payoffs to the players in the game when $n_{*}<\infty$. In red, purple, and green: the choices and payoffs of Alaya, Black, and Catherine, respectively. Alaya chooses a row, Black a column, and Catherine a matrix.

Each entry in the figure represents one possible outcome. For example, when $n_{*}=n_{A}=n_{B}<n_{C}$, the payoffs of the three players are $(1,0,1)$: the left-most number in each entry is the payoff to Alaya, the middle number is the payoff to Black, and the right-most number is the payoff to Catherine. This game is an instance of a class of games that are known as quitting games.

How should the players act in this game? To provide an answer, we formalize the concepts of strategy and equilibrium. As the choice of each participant may be random, a strategy for a player is a probability distribution over $\mathbb{N}\cup\{\infty\}$. Denote a strategy of Alaya (resp. Black, Catherine) by $\sigma_{A}$ (resp. $\sigma_{B}$, $\sigma_{C}$), and by $\gamma_{i}(\sigma_{A},\sigma_{B},\sigma_{C})$ the expected payoff to player $i$ under the vector of strategies $(\sigma_{A},\sigma_{B},\sigma_{C})$. A vector of strategies $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$ is an equilibrium if no player can increase her or his expected payoff by adopting another strategy while the other two stick to their strategies:

$$
\gamma_{A}(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})\geq\gamma_{A}(\sigma_{A},\sigma^{*}_{B},\sigma^{*}_{C})
$$

for every strategy $\sigma_{A}$ of Alaya, and analogous inequalities hold for Black and Catherine.

The three-player quitting game with payoffs as described above was studied by Flesch, Thuijsman, and Vrieze [15 J. Flesch, F. Thuijsman and K. Vrieze, Cyclic Markov equilibria in stochastic games. Internat. J. Game Theory 26, 303–314 (1997)], who proved that the following vector of strategies $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$ is an equilibrium:

$$
\begin{array}{cccccccccccc}
&1&2&3&4&5&6&7&8&9&\ldots&\infty\\
\hline
\sigma^{*}_{A}:&\frac{1}{2}&0&0&\frac{1}{4}&0&0&\frac{1}{8}&0&0&\ldots&0\\
\sigma^{*}_{B}:&0&\frac{1}{2}&0&0&\frac{1}{4}&0&0&\frac{1}{8}&0&\ldots&0\\
\sigma^{*}_{C}:&0&0&\frac{1}{2}&0&0&\frac{1}{4}&0&0&\frac{1}{8}&\ldots&0
\end{array}
$$

Under $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$, with probability 1 the minimum $n_{*}$ is the choice of exactly one player: $n_{*}=n_{A}$ with probability $\frac{4}{7}$, $n_{*}=n_{B}$ with probability $\frac{2}{7}$, and $n_{*}=n_{C}$ with probability $\frac{1}{7}$. It follows that the vector of expected payoffs under $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$ is

$$
\gamma(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})=\frac{4}{7}\cdot(1,3,0)+\frac{2}{7}\cdot(0,1,3)+\frac{1}{7}\cdot(3,0,1)=(1,2,1).
$$
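
A quick numerical check of these probabilities and of the expected payoff vector (Python; the payoff vectors $(1,3,0)$, $(0,1,3)$, $(3,0,1)$ for the three “exactly one player quits first” events are taken from the display above, and the truncation level is arbitrary).

```python
# Under sigma*, player i quits in her k-th support round with probability 1/2^k.
# n_* = n_A requires Black and Catherine to quit in their k-th support round or later
# (probability 1/2^(k-1) each); the cases n_* = n_B and n_* = n_C are analogous.
K = 60   # truncating the geometric sums at K leaves a negligible tail
pA = sum(0.5 ** k * 0.5 ** (k - 1) * 0.5 ** (k - 1) for k in range(1, K))
pB = sum(0.5 ** k * 0.5 ** k * 0.5 ** (k - 1) for k in range(1, K))
pC = sum(0.5 ** k * 0.5 ** k * 0.5 ** k for k in range(1, K))
print(pA, pB, pC)        # approximately 4/7, 2/7, 1/7

payoffs = {"A": (1, 3, 0), "B": (0, 1, 3), "C": (3, 0, 1)}   # payoffs when only A / B / C quits first
expected = tuple(pA * a + pB * b + pC * c
                 for a, b, c in zip(payoffs["A"], payoffs["B"], payoffs["C"]))
print(expected)          # approximately (1, 2, 1)
```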

Can a player profit by adopting a strategy different from $\sigma^{*}_{A}$, $\sigma^{*}_{B}$, or $\sigma^{*}_{C}$, assuming the other two stick to their prescribed strategies? It is a bit tedious, but not too difficult, to verify that this is not the case; hence $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$ is indeed an equilibrium.

In fact, Flesch, Thuijsman, and Vrieze [15 J. Flesch, F. Thuijsman and K. Vrieze, Cyclic Markov equilibria in stochastic games. Internat. J. Game Theory 26, 303–314 (1997)] proved that under all equilibria of the game, with probability 1 the minimum $n_{*}$ coincides with the choice of exactly one player. Moreover, a vector of strategies is an equilibrium if and only if the set $\mathbb{N}$ can be partitioned into blocks of consecutive numbers such that, up to circular permutations of the players, the support of the strategy of Alaya (which is a probability distribution over $\mathbb{N}\cup\{\infty\}$) is contained in blocks number $1,4,7,\ldots$, and the total probability that $n_{A}$ is in block $3k-2$ is $\frac{1}{2^{k}}$ (for each $k\in\mathbb{N}$), while the support of the strategy of Black (resp. Catherine) is contained in blocks number $2,5,8,\ldots$ (resp. $3,6,9,\ldots$), and the total probability that $n_{B}$ (resp. $n_{C}$) is in block $3k-1$ (resp. $3k$) is $\frac{1}{2^{k}}$ (for each $k\in\mathbb{N}$).

Does an equilibrium exist if the payoffs are not given by the table in Figure 1, but rather by other numbers? Solan [19 E. Solan, The dynamics of the Nash correspondence and n-player stochastic games. Int. Game Theory Rev. 3, 291–299 (2001)] showed that this is not always the case. He studied a three-player quitting game that differs from the game of [15 J. Flesch, F. Thuijsman and K. Vrieze, Cyclic Markov equilibria in stochastic games. Internat. J. Game Theory 26, 303–314 (1997)] in three payoffs:

  • the payoffs in the entry $n_{*}=n_{A}=n_{B}<n_{C}$ are $(1+\eta,0,1)$,

  • the payoffs in the entry $n_{*}=n_{A}=n_{C}<n_{B}$ are $(0,1,1+\eta)$,

  • the payoffs in the entry $n_{*}=n_{B}=n_{C}<n_{A}$ are $(1,1+\eta,0)$;

and showed that, provided $\eta$ is sufficiently small, the game has no equilibrium. For example, the strategy vector $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$ described above is no longer an equilibrium, because Catherine is better off selecting $n_{C}=1$ with probability 1, thereby obtaining expected payoff $\frac{1}{2}\cdot 1+\frac{1}{2}\cdot(1+\eta)=1+\frac{\eta}{2}$, which is higher than her expected payoff under $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$ (that is still 1).

Yet in Solan’s variation [19 E. Solan, The dynamics of the Nash correspondence and n-player stochastic games. Int. Game Theory Rev. 3, 291–299 (2001)], for every $\varepsilon>0$ there is an $\varepsilon$-equilibrium: a vector of strategies such that no player can profit more than $\varepsilon$ by deviating to another strategy; in other words,

$$
\gamma_{A}(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})\geq\gamma_{A}(\sigma_{A},\sigma^{*}_{B},\sigma^{*}_{C})-\varepsilon,
$$

for every strategy $\sigma_{A}$ of Alaya, and analogous inequalities hold for Black and Catherine. Indeed, given a positive integer $m$, consider the following variation of $(\sigma^{*}_{A},\sigma^{*}_{B},\sigma^{*}_{C})$, denoted $(\hat{\sigma}_{A},\hat{\sigma}_{B},\hat{\sigma}_{C})$, where the set $\mathbb{N}$ is partitioned into blocks of size $m$: block $k$ contains the integers $\{(k-1)m+1,(k-1)m+2,\ldots,km\}$, for each $k\in\mathbb{N}$. $\hat{\sigma}_{A}$ is the probability distribution that assigns to each integer in block $3k-2$ the probability $\frac{1}{m\cdot 2^{k}}$, for every $k\in\mathbb{N}$. Similarly, $\hat{\sigma}_{B}$ (resp. $\hat{\sigma}_{C}$) is the probability distribution that assigns to each integer in block $3k-1$ (resp. $3k$) the probability $\frac{1}{m\cdot 2^{k}}$, for every $k\in\mathbb{N}$. As mentioned above, the strategy vector $(\hat{\sigma}_{A},\hat{\sigma}_{B},\hat{\sigma}_{C})$ is an equilibrium of the game whose payoff function is given in Figure 1, and one can verify that, provided $m\geq\frac{1}{\varepsilon}$, it is an $\varepsilon$-equilibrium of Solan’s variation [19 E. Solan, The dynamics of the Nash correspondence and n-player stochastic games. Int. Game Theory Rev. 3, 291–299 (2001)].
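
A small helper that writes down the distribution $\hat{\sigma}$ for each player, following the block description above (Python; names and the truncation are mine).

```python
def block_strategy(player, m, K=40):
    """Return {round: probability} for player 0 (Alaya), 1 (Black) or 2 (Catherine):
    uniform over block 3k-2+player (blocks of size m) with total mass 1/2^k,
    truncated at k = K."""
    probs = {}
    for k in range(1, K + 1):
        block = 3 * (k - 1) + player          # 0-indexed block number (block 3k-2+player in 1-indexing)
        start = block * m + 1                  # the block contains rounds start, ..., start + m - 1
        for r in range(start, start + m):
            probs[r] = 1.0 / (m * 2 ** k)
    return probs

sigma_hat_A = block_strategy(0, m=5)
print(sorted(sigma_hat_A)[:10])               # Alaya's support: block 1 (rounds 1..5), then block 4 (16..20), ...
print(sum(sigma_hat_A.values()))              # ~1, up to the truncated tail
```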

It follows from [18 E. Solan, Three-player absorbing games. Math. Oper. Res. 24, 669–698 (1999)] that an $\varepsilon$-equilibrium exists in every three-player quitting game, for every $\varepsilon>0$, regardless of the payoffs. One of the most challenging problems in game theory to date is the following.

251*

Does an $\varepsilon$-equilibrium exist in quitting games that include more than three players, for every $\varepsilon>0$?

For partial results, see [21 E. Solan and N. Vieille, Quitting games. Math. Oper. Res. 26, 265–285 (2001), 22 E. Solan and N. Vieille, Quitting games – an example. Internat. J. Game Theory 31, 365–381 (2002), 16 R. S. Simon, The structure of non-zero-sum stochastic games. Adv. in Appl. Math. 38, 1–26 (2007), 17 R. S. Simon, A topological approach to quitting games. Math. Oper. Res. 37, 180–195 (2012), 20 E. Solan and O. N. Solan, Quitting games and linear complementarity problems. Math. Oper. Res. 45, 434–454 (2020), 14 G. Ashkenazi-Golan, I. Krasikov, C. Rainer and E. Solan, Absorption paths and equilibria in quitting games. arXiv:2012.04369 (2021)], which use different tools to study the problem: dynamical systems, algebraic topology, and linear complementarity problems. The open problem is a step in solving several other well-known open problems in game theory: the existence of $\varepsilon$-equilibria in stopping games, the existence of uniform equilibria in stochastic games, and the existence of $\varepsilon$-equilibria in repeated games with Borel-measurable payoffs.

It is interesting to note that if we defined

$$
n_{*}\coloneqq\max\{\mathbf{1}_{\{n_{A}<\infty\}}\cdot n_{A},\,\mathbf{1}_{\{n_{B}<\infty\}}\cdot n_{B},\,\mathbf{1}_{\{n_{C}<\infty\}}\cdot n_{C}\},
$$

then an $\varepsilon$-equilibrium need not exist for small $\varepsilon>0$. Indeed, with this definition, the three-player game in which the payoff of player $i$ is 1 if $\infty>n_{i}=n_{*}>n_{j}$ for each $j\neq i$, and 0 otherwise, has no $\varepsilon$-equilibrium for $\varepsilon\in(0,\frac{2}{3})$.

III Solutions

237

We take for our probability space $(X,m)$ the unit interval $X=[0,1]$ equipped with Lebesgue measure $m$ defined on $\mathcal{B}(X)$, the Borel subsets of $X$, and let $(X,m,T)$ be an invertible measure preserving transformation, that is, $T\colon X_{0}\to X_{0}$ is a bimeasurable bijection of some Borel set $X_{0}\in\mathcal{B}(X)$ of full measure such that $m(TA)=m(T^{-1}A)=m(A)$ for every $A\in\mathcal{B}(X)$.

Suppose also that $T$ is ergodic in the sense that the only $T$-invariant Borel sets have either zero or full measure ($A\in\mathcal{B}(X)$, $TA=A\Rightarrow m(A)\in\{0,1\}$).

Birkhoff’s ergodic theorem says that for every integrable function $f\colon X\to\mathbb{R}$,

$$
\frac{1}{n}\sum_{k=0}^{n-1}f\circ T^{k}\xrightarrow[n\to\infty]{}\mathbb{E}(f)\coloneqq\int_{X}f\,dm\quad\text{a.s.}
$$

The present exercise is concerned with the possibility of generalizing this. Throughout, $(X,m,T)$ is an arbitrary ergodic, measure preserving transformation as above.

Warm-up 1. Show that if $f\colon X\to\mathbb{R}$ is measurable, and

$$
m\Bigl(\Bigl[\varlimsup_{n\to\infty}\Bigl|\sum_{k=0}^{n-1}f\circ T^{k}\Bigr|<\infty\Bigr]\Bigr)>0,
$$

then $\frac{1}{n}\sum_{k=0}^{n-1}f\circ T^{k}$ converges in $\mathbb{R}$ a.s.

Warm-up 1 is [23 P. Hagelstein, D. Herden and A. Stokolos, A theorem of Besicovitch and a generalization of the Birkhoff ergodic theorem. Proc. Amer. Math. Soc. Ser. B 8, 52–59 (2021), Lemma 1]. For a multidimensional version, see [23, Conjecture 3].

Warm-up 2. Show that if $f\colon X\to\mathbb{R}$ is as in Warm-up 1, there exist measurable $g,h\colon X\to\mathbb{R}$ with $h$ bounded so that $f=h+g-g\circ T^{n}$.

Warm-up 2 is established by adapting the proof of [25 D. Volný and B. Weiss, Coboundaries in $L^{\infty}_{0}$. Ann. Inst. H. Poincaré Probab. Statist. 40, 771–778 (2004), Theorem A].

Problem. Show that there is a measurable function $f\colon X\to\mathbb{R}$ satisfying $\mathbb{E}(\lvert f\rvert)=\infty$ so that

$$
\frac{1}{n}\sum_{k=0}^{n-1}f\circ T^{k}
$$

converges in $\mathbb{R}$ a.s.

The existence of such $f$ for a specially constructed ergodic measure preserving transformation is shown in [24 D. Tanny, A zero-one law for stationary sequences. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 30, 139–148 (1974), Example b]. The point here is to prove it for an arbitrary ergodic measure preserving transformation of $(X,m)$.

Jon Aaronson (Tel Aviv University, Israel)

Solution by the proposer

We’ll fix sequences $\varepsilon_{k},M_{k}>0$, $N_{k}\in\mathbb{N}$ ($k\geq 1$). For each $\varepsilon,M>0$ and $N\geq 1$, we’ll construct a small coboundary $f^{(\varepsilon,M,N)}$. The desired function will be of the form $F\coloneqq\sum_{k\geq 1}f^{(\varepsilon_{k},M_{k},N_{k})}$ for a suitable choice of $\varepsilon_{k},M_{k}>0$, $N_{k}\in\mathbb{N}$ ($k\geq 1$).

To construct $f^{(\varepsilon,M,N)}$, choose, using Rokhlin’s lemma, a set $B\in\mathcal{B}$ such that the sets $\{T^{k}B:\lvert k\rvert\leq 2N\}$ are disjoint and $m(A)=\varepsilon$, where $A\coloneqq\biguplus_{\lvert k\rvert\leq 2N}T^{k}B$. Let

$$
f=f^{(\varepsilon,M,N)}\coloneqq M\sum_{k=1}^{2N}(-1)^{k}1_{T^{k}B}.
$$

It follows that (writing $S_{n}f\coloneqq\sum_{k=0}^{n-1}f\circ T^{k}$)

$$
\begin{aligned}
&S_{n}f(x)\in\{0,M,-M\} && \text{for all}\ n\geq 1,\ x\in X;\\
&S_{n}f(x)=0 && \text{for all}\ 1\leq n\leq N,\ x\notin A;\\
&E(\lvert f\rvert)=M\,m\Bigl(\biguplus_{j=1}^{2N}T^{j}B\Bigr)=\frac{2NM\varepsilon}{4N+1}>\frac{M\varepsilon}{3}.
\end{aligned}
$$

Set $\varepsilon_{k}\coloneqq\frac{1}{5^{k}}$, $M_{k}=6^{k}$, $N_{k}=7^{k}$, and define $F^{(k)}\coloneqq f^{(\varepsilon_{k},M_{k},N_{k})}$ as above.

Since

$$
\sum_{k\geq 1}m([F^{(k)}\neq 0])\leq\sum_{k\geq 1}\varepsilon_{k}<\infty,
$$

the Borel–Cantelli lemma shows that for a.e. $x\in X$ only finitely many of the $F^{(k)}(x)$ are nonzero, and so

$$
F\coloneqq\sum_{k\geq 1}F^{(k)}\colon X\to\mathbb{R}
$$

is well defined.

Proof that $E(\lvert F\rvert)=\infty$. For each $K\geq 1$,

$$
\begin{split}
\lvert F\rvert&\geq\Bigl|F^{(K)}+\sum_{1\leq j\leq K-1}F^{(j)}\Bigr|\,1_{[F^{(k)}=0\ \forall\,k>K]}\\
&\geq\Bigl(\lvert F^{(K)}\rvert-\sum_{1\leq j\leq K-1}\lvert F^{(j)}\rvert\Bigr)1_{[F^{(k)}=0\ \forall\,k>K]}\\
&\geq\Bigl(M_{K}-\sum_{1\leq j\leq K-1}M_{j}\Bigr)1_{[F^{(K)}\neq 0\ \&\ F^{(k)}=0\ \forall\,k>K]}\\
&\geq\frac{4}{5}M_{K}\,1_{[F^{(K)}\neq 0\ \&\ F^{(k)}=0\ \forall\,k>K]}
\end{split}
$$

and

$$
E(\lvert F\rvert)\geq\frac{4}{5}M_{K}\,m([F^{(K)}\neq 0\ \&\ F^{(k)}=0\ \forall\,k>K]).
$$

Next,

$$
\mathcal{E}_{K}\coloneqq[F^{(k)}=0\ \forall\,k>K]^{c}=\bigcup_{k\geq K+1}\bigcup_{1\leq j\leq 2N_{k}}T^{j}B_{k},
$$

whence

$$
m(\mathcal{E}_{K})\leq\sum_{k\geq K+1}\frac{\varepsilon_{k}}{2}=\frac{1}{2}\sum_{k\geq K+1}\frac{1}{5^{k}}=\frac{\varepsilon_{K}}{8}.
$$

It follows that

$$
m([F^{(K)}\neq 0]\setminus\mathcal{E}_{K})=m\Bigl(\biguplus_{j=1}^{2N_{K}}T^{j}B_{K}\setminus\mathcal{E}_{K}\Bigr)>\frac{\varepsilon_{K}}{3}-\frac{\varepsilon_{K}}{8}=\frac{5\varepsilon_{K}}{24},
$$

whence

$$
E(\lvert F\rvert)\geq\frac{4}{5}M_{K}\,m([F^{(K)}\neq 0]\setminus\mathcal{E}_{K})>\frac{\varepsilon_{K}M_{K}}{6}=\frac{1}{6}\Bigl(\frac{6}{5}\Bigr)^{K}\xrightarrow[K\to\infty]{}\infty.
$$

Proof that $S_{n}F=o(n)$ a.s. Since $\sum_{k\geq 1}m(A_{k})=\sum_{k\geq 1}\varepsilon_{k}<\infty$, the Borel–Cantelli lemma gives a function $\kappa\colon X\to\mathbb{N}$ so that for a.e. $x\in X$, $x\in A_{k}^{c}$ for all $k\geq\kappa(x)$. Suppose that $k\geq\kappa(x)$ and $2N_{k}\leq n<2N_{k+1}$; then

$$
\lvert S_{n}F(x)\rvert=\Bigl|\sum_{j=1}^{k}S_{n}F^{(j)}(x)\Bigr|\leq\sum_{j=1}^{k}M_{j}<\frac{6}{5}\cdot\Bigl(\frac{6}{7}\Bigr)^{k}\cdot N_{k}
$$

and

$$
\frac{\lvert S_{n}F(x)\rvert}{n}\xrightarrow[n\to\infty]{}0\quad\text{a.s.}
$$

238

Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $\{X_{n}:n\geq 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables on $\Omega$. Assume that there exists a sequence of positive numbers $\{b_{n}:n\geq 1\}$ such that $\frac{b_{n}}{n}\leq\frac{b_{n+1}}{n+1}$ for every $n\geq 1$, $\lim_{n\to\infty}\frac{b_{n}}{n}=\infty$, and $\sum_{n=1}^{\infty}\mathbb{P}(\lvert X_{n}\rvert\geq b_{n})<\infty$. Prove that, if $S_{n}\coloneqq\sum_{j=1}^{n}X_{j}$ for each $n\geq 1$, then

$$
\lim_{n\to\infty}\frac{S_{n}}{b_{n}}=0\quad\text{almost surely}.
$$

Comment. The desired statement says that, if such a sequence $\{b_{n}:n\geq 1\}$ exists, then $\{X_{n}:n\geq 1\}$ satisfies the (generalized) Strong Law of Large Numbers (SLLN) when averaged by $\{b_{n}:n\geq 1\}$. If $X_{n}\in L^{1}(\mathbb{P})$ for every $n\geq 1$, then the desired statement follows trivially from Kolmogorov’s SLLN, since in that case, with probability one,

$$
\lim_{n\to\infty}\frac{S_{n}}{n}=\mathbb{E}[X_{1}],
$$

and hence

$$
\frac{S_{n}}{b_{n}}=\frac{S_{n}}{n}\cdot\frac{n}{b_{n}}
$$

must converge to 0 under the assumptions on $\{b_{n}:n\geq 1\}$. Therefore, the desired statement can be viewed as an alternative to Kolmogorov’s SLLN for i.i.d. random variables that are not integrable.
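
A quick numerical illustration of the statement (Python with NumPy; the choice of Cauchy variables and of $b_{n}=n(1+\log n)^{2}$ is mine): Cauchy random variables are not integrable, $b_{n}/n=(1+\log n)^{2}$ is nondecreasing and tends to infinity, and $\mathbb{P}(\lvert X_{1}\rvert\geq b_{n})\leq\frac{2}{\pi b_{n}}$ makes the series in the assumption summable.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
X = rng.standard_cauchy(N)                  # i.i.d. Cauchy: E|X_1| = infinity
n = np.arange(1, N + 1)
b = n * (1.0 + np.log(n)) ** 2              # b_n / n is nondecreasing and -> infinity
S = np.cumsum(X)
checkpoints = np.array([10**3, 10**4, 10**5, 10**6]) - 1
print(S[checkpoints] / b[checkpoints])      # S_n / b_n drifts towards 0 (slowly, with occasional jumps)
```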

Linan Chen (McGill University, Montreal, Quebec, Canada)

Solution by the proposer

As explained above, we will need to prove the desired statement without assuming integrability of the $X_{n}$’s. For every $n\geq 1$, we truncate $X_{n}$ at the level $b_{n}$ by defining $Y_{n}=X_{n}$ if $\lvert X_{n}\rvert<b_{n}$, and $Y_{n}=0$ if $\lvert X_{n}\rvert\geq b_{n}$. Then, $\{Y_{n}:n\geq 1\}$ is again a sequence of independent random variables. It follows from the assumption on $\{b_{n}:n\geq 1\}$ that

$$
\sum_{n=1}^{\infty}\mathbb{P}(X_{n}\neq Y_{n})=\sum_{n=1}^{\infty}\mathbb{P}(\lvert X_{n}\rvert\geq b_{n})<\infty,
$$

which, by the Borel–Cantelli lemma, implies that the sequence of truncated random variables $\{Y_{n}:n\geq 1\}$ is equivalent to the original sequence $\{X_{n}:n\geq 1\}$ in the sense that

$$
\mathbb{P}(X_{n}\neq Y_{n}\ \text{infinitely often})=0,\quad\text{or equivalently},\quad\mathbb{P}(X_{n}=Y_{n}\ \text{eventually always})=1.
$$

Next, by setting $b_{0}=0$, we have that

$$
\begin{split}
\sum_{n=1}^{\infty}\mathbb{P}(\lvert X_{n}\rvert\geq b_{n})&=\sum_{n=1}^{\infty}\sum_{k=n+1}^{\infty}\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})\\
&=\sum_{k=2}^{\infty}\sum_{n=1}^{k-1}\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})\\
&=\sum_{k=2}^{\infty}(k-1)\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})\\
&=\sum_{k=1}^{\infty}k\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})-1,
\end{split}
$$

and hence the assumption on $\{b_{n}:n\geq 1\}$ implies that

$$
\sum_{k=1}^{\infty}k\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})<\infty.
$$

Our next goal is to establish the desired SLLN statement for $\{Y_{n}:n\geq 1\}$. To be specific, we want to show that if $T_{n}\coloneqq\sum_{j=1}^{n}Y_{j}$ for each $n\geq 1$, then $\lim_{n\to\infty}\frac{T_{n}}{b_{n}}=0$ almost surely. We will achieve this goal in two steps.

Step 1 is to treat the convergence of $\frac{\mathbb{E}[T_{n}]}{b_{n}}$. To this end, we derive an upper bound for $\frac{\mathbb{E}[\lvert T_{n}\rvert]}{b_{n}}$ as

$$
\begin{split}
\frac{\mathbb{E}[\lvert T_{n}\rvert]}{b_{n}}&\leq\frac{1}{b_{n}}\sum_{j=1}^{n}\mathbb{E}[\lvert Y_{j}\rvert]=\frac{1}{b_{n}}\sum_{j=1}^{n}\int_{\{\lvert X_{1}\rvert<b_{j}\}}\lvert X_{1}\rvert\,d\mathbb{P}\\
&=\frac{1}{b_{n}}\sum_{j=1}^{n}\sum_{k=1}^{j}\int_{\{b_{k-1}\leq\lvert X_{1}\rvert<b_{k}\}}\lvert X_{1}\rvert\,d\mathbb{P}\\
&\leq\frac{1}{b_{n}}\sum_{k=1}^{n}(n-k+1)\,b_{k}\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})\\
&\leq\frac{2n}{b_{n}}\sum_{k=1}^{n}b_{k}\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k}).
\end{split}
$$

Then the summability established above implies that

$$
\sum_{k=1}^{\infty}\frac{b_{k}\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})}{b_{k}/k}=\sum_{k=1}^{\infty}k\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})<\infty,
$$

which, by Kronecker’s lemma, leads to

$$
\lim_{n\to\infty}\frac{n}{b_{n}}\sum_{k=1}^{n}b_{k}\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})=0.
$$

Hence, we conclude that $\lim_{n\to\infty}\frac{\mathbb{E}[\lvert T_{n}\rvert]}{b_{n}}=0$.

Step 2 is to establish the convergence of $\frac{T_{n}-\mathbb{E}[T_{n}]}{b_{n}}$, for which we will use a martingale convergence argument. We note that if

$$
M_{n}\coloneqq\sum_{j=1}^{n}\frac{Y_{j}-\mathbb{E}[Y_{j}]}{b_{j}}
$$

for each $n\geq 1$, then $\{M_{n}:n\geq 1\}$ is a martingale (with respect to the natural filtration) and, for each $n\geq 1$,

$$
\begin{split}
\mathbb{E}[M_{n}^{2}]&\leq\sum_{j=1}^{n}\frac{\mathbb{E}[Y_{j}^{2}]}{b_{j}^{2}}=\sum_{j=1}^{n}\frac{1}{b_{j}^{2}}\int_{\{\lvert X_{1}\rvert<b_{j}\}}X_{1}^{2}\,d\mathbb{P}\\
&\leq\sum_{j=1}^{n}\sum_{k=1}^{j}\frac{b_{k}^{2}}{b_{j}^{2}}\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})\\
&\leq\sum_{k=1}^{n}\Bigl(\sum_{j=k}^{n}\frac{1}{j^{2}}\Bigr)k^{2}\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k})\\
&\leq C\sum_{k=1}^{n}k\,\mathbb{P}(b_{k-1}\leq\lvert X_{1}\rvert<b_{k}),
\end{split}
$$