The present column is devoted to Game Theory.

## I Six new problems – solutions solicited

Solutions will appear in a subsequent issue.

### 245

We consider a setting with a set $C$ of $m$ candidates and a set of $n$ voters $[n]=\{1,…,n\}$.
Each voter ranks all candidates from the most preferred one to the least preferred one; we write $a≻_{i}b$ if voter $i$ prefers candidate $a$ to candidate $b$.
A collection of all voters’ rankings is called a *preference profile*.
We say that a preference profile is *single-peaked* if there is a total order $⊲$ on the candidates (called the *axis*) such that for each voter $i$ the following holds: if $i$’s most preferred candidate is $c$ and $a⊲b⊲c$ or $c⊲b⊲a$, then $b≻_{i}a$.
That is, each ranking has a single ‘peak’, and then ‘declines’ in either direction from that peak.

(i) In general, if we aggregate voters’ preferences over candidates, the resulting majority relation may have cycles: e.g., if $a≻_{1}b≻_{1}c$, $b≻_{2}c≻_{2}a$ and $c≻_{3}a≻_{3}b$, then a strict majority (2 out of 3) voters prefer $a$ to $b$, a strict majority prefer $b$ to $c$, yet a strict majority prefer $c$ to $a$. Argue that this cannot happen if the preference profile is single-peaked. That is, prove that if a profile is single-peaked, a strict majority of voters prefer $a$ to $b$, and a strict majority of voters prefer $b$ to $c$, then a strict majority of voters prefer $a$ to $c$.
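The three-voter cycle in (i) is small enough to check mechanically; the sketch below (the helper `majority_prefers` is our own) verifies each pairwise majority in the example profile:

```python
# The cyclic profile from the example: each list ranks candidates best-to-worst.
profile = [["a", "b", "c"],   # voter 1: a > b > c
           ["b", "c", "a"],   # voter 2: b > c > a
           ["c", "a", "b"]]   # voter 3: c > a > b

def majority_prefers(profile, a, b):
    """True if a strict majority of voters rank a above b."""
    wins = sum(1 for ranking in profile if ranking.index(a) < ranking.index(b))
    return 2 * wins > len(profile)

# Each pairwise comparison is decided 2-to-1, and together they form a cycle.
assert majority_prefers(profile, "a", "b")
assert majority_prefers(profile, "b", "c")
assert majority_prefers(profile, "c", "a")
```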

(ii) Suppose that $n$ is odd and voters’ preferences are known to be single-peaked with respect to an axis $⊲$. Consider the following voting rule: we ask each voter $i$ to report their top candidate $t(i)$, find a median voter $i_{∗}$, i.e. a voter such that at least half of the reported peaks $t(1),…,t(n)$ are positioned (weakly) to the left of $t(i_{∗})$ on the axis and at least half are positioned (weakly) to its right, and output $t(i_{∗})$. Argue that under this voting rule no voter can benefit from voting dishonestly: if a voter $i$ reports some candidate $a\neq t(i)$ instead of $t(i)$, this either does not change the outcome or results in an outcome that $i$ likes less than the outcome of truthful voting.
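For intuition, the median-peak rule and the no-benefit claim in (ii) can be verified by brute force on a tiny instance; the helper names (`is_single_peaked`, `median_peak`) are our own, and a misreport is modelled as announcing an arbitrary candidate as one's top:

```python
from itertools import permutations, product

axis = ["a", "b", "c"]  # the axis a ⊲ b ⊲ c

def is_single_peaked(ranking):
    """Check the definition: if b lies strictly between a and the voter's peak
    on the axis, then the voter must prefer b to a."""
    pos = {c: axis.index(c) for c in axis}
    rank = {c: i for i, c in enumerate(ranking)}  # lower index = more preferred
    peak = pos[ranking[0]]
    for a in axis:
        for b in axis:
            if pos[a] < pos[b] < peak or peak < pos[b] < pos[a]:
                if rank[b] > rank[a]:
                    return False
    return True

def median_peak(tops):
    """The rule from (ii): output the median of the reported peaks (n odd)."""
    positions = sorted(axis.index(t) for t in tops)
    return axis[positions[len(positions) // 2]]

single_peaked = [r for r in permutations(axis) if is_single_peaked(r)]

# No voter ever strictly gains by misreporting her top candidate.
for profile in product(single_peaked, repeat=3):
    honest = median_peak([r[0] for r in profile])
    for i, ranking in enumerate(profile):
        for lie in axis:
            tops = [r[0] for r in profile]
            tops[i] = lie
            assert ranking.index(median_peak(tops)) >= ranking.index(honest)
```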

(iii) We say that a preference profile is *1D-Euclidean* if each candidate $c_{j}$ and each voter $i$ can be associated with a point in $R$ so that the preferences are determined by distances, i.e. there is an embedding $x:C∪[n]→R$ such that for all $a,b∈C$ and $i∈[n]$ we have $a≻_{i}b$ if and only if $∣x(i)−x(a)∣<∣x(i)−x(b)∣$.
Argue that a 1D-Euclidean profile is necessarily single-peaked.
Show that the converse is not true, i.e. there exists a single-peaked profile that is not 1D-Euclidean.

(iv) Let $P$ be a single-peaked profile, and let $L$ be the set of candidates ranked last by at least one voter. Prove that $∣L∣≤2$.

(v) Consider an axis $c_{1}⊲⋯⊲c_{m}$. Prove that there are exactly $2^{m-1}$ distinct votes that are single-peaked with respect to this axis. Explain how to sample from the uniform distribution over these votes.
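For small $m$ the count in (v) can be confirmed by enumeration. The construction below, which fills a ranking from last place upward by repeatedly taking one of the two ends of the interval of not-yet-ranked candidates, is one natural way to organize these votes (so drawing the $m-1$ binary choices with fair coins samples them uniformly); the function name is our own:

```python
from itertools import product

def vote_from_coins(m, coins):
    """Build a single-peaked ranking over the axis 0 ⊲ 1 ⊲ ... ⊲ m-1.
    Each of the m-1 coins picks the left or right end of the interval of
    not-yet-ranked candidates; the single remaining candidate is the peak."""
    lo, hi = 0, m - 1
    bottom_up = []
    for c in coins:
        if c == 0:
            bottom_up.append(lo)
            lo += 1
        else:
            bottom_up.append(hi)
            hi -= 1
    bottom_up.append(lo)  # lo == hi here: the peak, ranked first
    return tuple(reversed(bottom_up))

# Distinct coin sequences give distinct single-peaked votes, so for m = 4
# there are exactly 2**(4-1) = 8 of them.
m = 4
votes = {vote_from_coins(m, coins) for coins in product([0, 1], repeat=m - 1)}
assert len(votes) == 2 ** (m - 1)
```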

These problems are based on references [4 H. Moulin, Axioms of Cooperative Decision Making. Cambridge University Press, Cambridge (1991)] (parts (i) and (ii)), [2 C. H. Coombs, Psychological scaling without a unit of measurement. Psychological Review 57, 145 (1950)] (part (iii)) and [1 V. Conitzer, Eliciting single-peaked preferences using comparison queries. J. Artificial Intelligence Res. 35, 161–191 (2009); 5 T. Walsh, Generating single peaked votes. arXiv:1503.02766 (2015)] (part (v)); part (iv) is folklore. See also the survey [3 E. Elkind, M. Lackner and D. Peters, Structured preferences. In Trends in Computational Social Choice, edited by U. Endriss, Chapter 10, AI Access, 187–207 (2017)].

*Edith Elkind (University of Oxford, UK)*

### 246

Consider a standard prisoners’ dilemma game described by the following strategic form, with $δ>β>0>γ$ (the row player’s payoff is listed first):

|       | $C$        | $D$        |
|-------|------------|------------|
| $C$   | $(β,\,β)$  | $(γ,\,δ)$  |
| $D$   | $(δ,\,γ)$  | $(0,\,0)$  |
Assume that any given agent either plays $C$ or $D$ and that agents reproduce at a rate determined by their payoff from the strategic form of the game plus a constant $f$. Suppose that members of an infinite population are assorted into finite groups of size $n$. Let $q$ denote the proportion of agents playing strategy $C$ (“altruists”) in the population as a whole and $q_{i}$ denote the proportion of agents playing $C$ in group $i$. We assume that currently $q∈(0,1)$.

The process of assortment is abstract, but we assume that it has finite expectation $E[q_{i}]=q$ and variance $Var[q_{i}]=σ^{2}$. Members within each group are then randomly paired off to play one iteration of the prisoners’ dilemma against another member of their group. All agents then return to the overall population.

Find a condition relating $q$, $σ^{2}$, $β$, $γ$, $δ$ and $n$ under which the proportion of altruists in the overall population rises after a round of play.

Now interpret this game as one where each player can confer a benefit $b$ upon the other player by individually incurring a cost $c$, with $b>c>0$, so that $β=b−c$, $δ=b$ and $γ=−c$. Prove that, as long as (i) there is some positive assortment in group formation and (ii) the ratio $c/b$ is low enough, the proportion of altruists in the overall population will rise after a round of play.
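As a numerical illustration (not part of the problem), the sketch below computes the next-period share of altruists under the payoff convention implied by the interpretation above — mutual cooperation pays $β$ to each, a cooperator meeting a defector gets $γ$ while the defector gets $δ$, mutual defection pays 0 — with reproduction rate $f$ plus payoff; the function name and parameter values are our own assumptions:

```python
def next_share(groups, beta, gamma, delta, f):
    """Expected share of C-players after one round, given `groups` as a list
    of (k, n) pairs: k altruists in a group of size n. A random opponent is
    one of the other n-1 group members; reproduction rate is f + payoff."""
    c_mass = d_mass = 0.0
    for k, n in groups:
        pi_c = beta * (k - 1) / (n - 1) + gamma * (n - k) / (n - 1)
        pi_d = delta * k / (n - 1)
        c_mass += k * (f + pi_c)
        d_mass += (n - k) * (f + pi_d)
    return c_mass / (c_mass + d_mass)

beta, gamma, delta, f = 1.0, -1.0, 2.0, 10.0   # delta > beta > 0 > gamma
# Perfect assortment (maximal variance): altruists only meet altruists, q rises.
assert next_share([(4, 4), (0, 4)], beta, gamma, delta, f) > 0.5
# No assortment (zero variance): defectors out-earn altruists, q falls.
assert next_share([(2, 4)], beta, gamma, delta, f) < 0.5
```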

*Richard Povey (Hertford College and St Hilda’s College, University of Oxford, UK)*

### 247

Consider a village consisting of $n$ farmers who live along a circle of length $n$. The farmers live at positions $1,2,…,n$. Each of them is friends with the person to their left and right, and each friendship has capacity $m$, where $m$ is a non-negative integer. At the end of the year, each farmer either does well (her wealth is $+1$ dollars) or not well (her wealth is $−1$ dollars) with equal probability. Farmers’ wealth realizations are independent of each other. Hence, for a large circle the share of farmers in each state is on average $\frac{1}{2}$.

The farmers share risk by transferring money to their direct neighbors. The goal of risk-sharing is to create as many farmers with OK wealth (0 dollars) as possible. Transfers have to be in integer dollars and cannot exceed the capacity of each link (which is $m$).

A few examples with a village of size $n=4$ serve to illustrate risk-sharing.

Consider the case where farmers $1$ to $4$ have wealth $(+1,−1,+1,−1)$. In that case, we can share risk completely with farmer $1$ sending a dollar to farmer $2$ and farmer $3$ sending a dollar to farmer $4$. This works for any $m≥1$.

Consider the case where farmers $1$ to $4$ have wealth $(+1,+1,−1,−1)$. In that case, we can share risk completely with farmer $1$ sending a dollar to farmer $2$, farmer $2$ sending two dollars to farmer $3$ and farmer $3$ sending one dollar to farmer $4$. In this case, we need $m≥2$. If $m=1$, we can only share risk among half the people in the village.

Show that for any wealth realization an optimal risk-sharing arrangement can be found as the solution to a maximum flow problem.
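One way to encode the circle-village instances above as a flow network — a source feeding the $+1$ farmers, the $−1$ farmers feeding a sink, and capacity-$m$ links between circle neighbours — can be sketched as follows; the Edmonds–Karp routine and the function names are our own:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity graph."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:        # BFS for an augmenting path
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][w] for u, w in path)   # bottleneck capacity
        for u, w in path:
            cap[u][w] -= aug
            cap[w][u] = cap[w].get(u, 0) + aug  # residual edge
        flow += aug

def matched_farmers(wealth, m):
    """Farmers brought to 0 dollars under optimal transfers: max flow from a
    source (feeding the +1 farmers, capacity 1 each) to a sink (fed by the -1
    farmers, capacity 1 each) through capacity-m links between neighbours."""
    n = len(wealth)
    cap = {i: {} for i in range(n)}
    cap["s"], cap["t"] = {}, {}
    for i in range(n):
        j = (i + 1) % n
        cap[i][j] = m
        cap[j][i] = m
    for i, w in enumerate(wealth):
        if w == 1:
            cap["s"][i] = 1
        else:
            cap[i]["t"] = 1
    return 2 * max_flow(cap, "s", "t")

# Examples from the text (n = 4):
assert matched_farmers([+1, -1, +1, -1], m=1) == 4  # complete risk-sharing
assert matched_farmers([+1, +1, -1, -1], m=2) == 4  # complete risk-sharing
```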

*Tanya Rosenblat (School of Information and Department of Economics, University of Michigan, USA)*

### 248

This exercise is a continuation of Problem 247, where we studied risk-sharing among farmers who live in a circle village and are friends with their direct neighbors to the left and right, with friendships of a certain capacity. Assume that for any realization of wealth levels the best possible risk-sharing arrangement is implemented, and denote the expected share of unmatched farmers by $U(n,m)$. Show that $U(n,m)→\frac{1}{2m+1}$ as $n→∞$.

*Tanya Rosenblat (School of Information and Department of Economics, University of Michigan, USA)*

### 249

In a *combinatorial auction* there are $m$ items for sale to $n$ buyers.
Each buyer $i$ has some valuation function $v_{i}(⋅)$ which takes as input a set $S$ of items and outputs that bidder’s value for that set.
These functions will always be monotone ($v_{i}(S∪T)≥v_{i}(S)$ for all $S,T$), and satisfy $v_{i}(∅)=0$.

**Definition 1** (Walrasian equilibrium)**.** A price vector $p∈\mathbb{R}^{m}_{≥0}$ and a list $B_{1},…,B_{n}$ of subsets of $[m]$ form a *Walrasian equilibrium* for $v_{1},…,v_{n}$ if the following two properties hold:

Each $B_{i}∈\arg\max_{S⊆[m]}\{v_{i}(S)−∑_{j∈S}p_{j}\}$.

The sets $B_{i}$ are disjoint, and $⋃_{i}B_{i}=[m]$.

Prove that a Walrasian equilibrium exists for $v_{1},…,v_{n}$ if and only if there exists an integral optimum (that is, a point such that each $x_{i,S}∈\{0,1\}$) to the following linear program:

$$\begin{aligned}
\max\ &\sum_{i=1}^{n}\sum_{S⊆[m]}x_{i,S}\,v_{i}(S)\\
\text{s.t.}\ &\sum_{S⊆[m]}x_{i,S}\le 1 &&\text{for all }i∈[n],\\
&\sum_{i=1}^{n}\sum_{S\ni j}x_{i,S}\le 1 &&\text{for all }j∈[m],\\
&x_{i,S}\ge 0 &&\text{for all }i,S.
\end{aligned}$$
*Hint.*
Take the dual, and start from there.
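Independently of the hint, for very small markets one can look for a Walrasian equilibrium by brute force, checking both conditions of Definition 1 directly; the helper names and the integer price grid below are our own illustration devices, not part of the problem:

```python
from itertools import chain, combinations, product

def subsets(m):
    """All subsets of [m] = {0, ..., m-1}, as tuples."""
    return chain.from_iterable(combinations(range(m), r) for r in range(m + 1))

def demand_sets(v, prices):
    """All bundles maximising v(S) minus the sum of prices over S."""
    m = len(prices)
    best = max(v(S) - sum(prices[j] for j in S) for S in subsets(m))
    return [set(S) for S in subsets(m) if v(S) - sum(prices[j] for j in S) == best]

def find_walrasian(valuations, m, price_grid):
    """Brute-force search for prices and demanded bundles that partition [m]."""
    for p in product(price_grid, repeat=m):
        demands = [demand_sets(v, p) for v in valuations]
        for choice in product(*demands):
            union = set().union(*choice)
            if sum(len(B) for B in choice) == len(union) == m:
                return p, choice
    return None

# Two additive bidders over two items: an equilibrium exists and is found.
v1 = lambda S: 3 * (0 in S) + 1 * (1 in S)
v2 = lambda S: 1 * (0 in S) + 2 * (1 in S)
assert find_walrasian([v1, v2], m=2, price_grid=range(4)) is not None
```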

*Matt Weinberg (Computer Science, Princeton University, USA)*

### 250

Consider a game played on a network and a finite set of players $N=\{1,2,…,n\}$. Each node in the network represents a player and edges capture their relationships. We use $G=(g_{ij})_{1\le i,j\le n}$ to represent the adjacency matrix of an undirected graph/network, i.e. $g_{ij}=g_{ji}∈\{0,1\}$. We assume $g_{ii}=0$. Thus, $G$ is a zero-diagonal, square and symmetric matrix. Each player, indexed by $i$, chooses an action $x_{i}∈R$ and obtains the following payoff:

$$u_{i}(x_{1},…,x_{n})=x_{i}-\frac{1}{2}x_{i}^{2}+δ\sum_{j=1}^{n}g_{ij}x_{i}x_{j}.$$
The parameter $δ>0$ captures the strength of the direct links between different players. For simplicity, we assume $0<δ<\frac{1}{n-1}$.

A Nash equilibrium is a profile $x^{∗}=(x_{1}^{∗},…,x_{n}^{∗})$ such that, for any $i=1,…,n$,

$$u_{i}(x_{i}^{∗},x_{-i}^{∗})\ge u_{i}(x_{i},x_{-i}^{∗})\quad\text{for all }x_{i}∈R.$$
In other words, at a Nash equilibrium, there is no profitable deviation for any player $i$ choosing $x_{i}$.

Let $w=(w_{1},w_{2},…,w_{n})'$, $w_{i}>0$ for all $i$ (the transpose of a vector $w$ is denoted by $w'$), and let $I_{n}$ be the $n×n$ identity matrix.
Define the *weighted* Katz–Bonacich centrality vector as

$$b_{w}(G,δ):=[I_{n}-δG]^{-1}w.$$

Here $M:=[I_{n}-δG]^{-1}$ denotes the inverse Leontief matrix associated with network $G$, while $m_{ij}$ denotes its $ij$ entry, which is equal to the discounted number of walks from $i$ to $j$ with decay factor $δ$.
Let $1_{n}=(1,1,…,1)'$ be a vector of $1$s.
Then the *unweighted* Katz–Bonacich centrality vector can be defined as

$$b(G,δ):=b_{1_{n}}(G,δ)=[I_{n}-δG]^{-1}1_{n}.$$
Show that this network game has a unique Nash equilibrium $x^{∗}(G)$. Can you link this equilibrium to the Katz–Bonacich centrality vector defined above?

Let $x^{∗}(G)=∑_{i=1}^{n}x_{i}^{∗}(G)$ denote the sum of actions (total activity) at the unique Nash equilibrium in part 1. Now suppose that you can remove a single node, say $i$, from the network. Which node do you want to remove such that the sum of effort at the new Nash equilibrium is reduced the most? (Note that, after the deletion of node $i$, we remove all the links of node $i$, and the remaining network, denoted by $G_{−i}$, can be obtained by deleting the $i$-th row and $i$-th column of $G$.)

Mathematically, you need to solve the *key player problem*

$$\max_{i∈N}\bigl(x^{∗}(G)-x^{∗}(G_{-i})\bigr).$$

In other words, you want to find a player who, once removed, leads to the highest reduction in total action in the remaining network.

*Hint.* You may come up with an index $c_{i}$ for each $i$ such that the key player is the one with the highest $c_{i}$. This $c_{i}$ should be expressed using the Katz–Bonacich centrality vector defined above.

Now, instead of deleting a single node, we can delete any pair of nodes from the network. Can you identify the key pair, that is, the pair of nodes that, once removed, reduces total activity the most?
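Assuming the standard linear-quadratic payoff for this class of games (so that best responses take the form $x_i=1+δ∑_{j}g_{ij}x_j$), the equilibrium and the key player can be computed numerically; the direct-recomputation approach below deliberately sidesteps the closed-form index asked for in the hint:

```python
import numpy as np

def katz_bonacich(G, delta):
    """b(G, delta) = [I - delta*G]^{-1} 1_n, the unweighted Katz-Bonacich vector."""
    n = G.shape[0]
    return np.linalg.solve(np.eye(n) - delta * G, np.ones(n))

def key_player(G, delta):
    """Node whose removal reduces total equilibrium activity the most,
    found by recomputing the equilibrium on each G_{-i}."""
    base = katz_bonacich(G, delta).sum()
    drops = [base - katz_bonacich(np.delete(np.delete(G, i, 0), i, 1), delta).sum()
             for i in range(G.shape[0])]
    return int(np.argmax(drops))

# A star on 4 nodes with centre 0, and delta = 0.2 < 1/(n-1).
G = np.zeros((4, 4))
G[0, 1:] = G[1:, 0] = 1
x = katz_bonacich(G, 0.2)
assert np.allclose(x, 1 + 0.2 * G @ x)  # x solves x_i = 1 + delta * sum_j g_ij x_j
assert key_player(G, 0.2) == 0          # removing the centre hurts most
```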

*Yves Zenou (Monash University, Australia) and
Junjie Zhou (National University of Singapore)*

## II Open problem

Equilibrium in Quitting Games

*by Eilon Solan (School of Mathematical Sciences, Tel Aviv University, Israel)*

*The author thanks János Flesch, Ehud Lehrer, and Abraham Neyman for commenting on earlier versions of the text, and acknowledges the support of the Israel Science Foundation, Grant #217/17.*

Alaya, Black, and Catherine are involved in an endurance match, where each player has to decide if and when to quit, and the outcome depends on the set of players whose choice is larger than the minimum of the three choices. Formally, each of the three has to select an element of $N∪\{∞\}$: the choice $∞$ corresponds to the decision to never quit, and the choice $n∈N$ corresponds to the decision to quit the match in round $n$. Denote by $n_{A}$ (resp. $n_{B}$, $n_{C}$) Alaya’s (resp. Black’s, Catherine’s) choice, and set $n_{∗}:=\min\{n_{A},n_{B},n_{C}\}$. As a result of their choices, the players receive payoffs, which are determined by the set $\{i∈\{A,B,C\}:n_{i}>n_{∗}\}$ and by whether $n_{∗}<∞$. As a concrete example, suppose that if $n_{∗}=∞$, the payoff of each player is 0, and if $n_{∗}<∞$, the payoffs are given by the table in Figure 1.

Each entry in the figure represents one possible outcome.
For example, when $n_{∗}=n_{A}=n_{B}<n_{C}$, the payoffs of the three players are $(1,0,1)$: the left-most number in each entry is the payoff to Alaya, the middle number is the payoff to Black, and the right-most number is the payoff to Catherine.
This game is an instance of a class of games that are known as *quitting games*.

How should the players act in this game?
To provide an answer, we formalize the concepts of *strategy* and *equilibrium*.
As the choice of each participant may be random, a *strategy* for a player is a probability distribution over $N∪{∞}$.
Denote a strategy of Alaya (resp. Black, Catherine) by $σ_{A}$ (resp. $σ_{B}$, $σ_{C}$), and by $γ_{i}(σ_{A},σ_{B},σ_{C})$ the expected payoff to player $i$ under the vector of strategies $(σ_{A},σ_{B},σ_{C})$.
A vector of strategies $(σ_{A},σ_{B},σ_{C})$ is an *equilibrium* if no player can increase her or his expected payoff by adopting another strategy while the other two stick to their strategies:

$$γ_{A}(σ_{A},σ_{B},σ_{C})\ge γ_{A}(σ_{A}',σ_{B},σ_{C})$$

for every strategy $σ_{A}'$ of Alaya, and analogous inequalities hold for Black and Catherine.

The three-player quitting game with payoffs as described above was studied by Flesch, Thuijsman, and Vrieze [15 J. Flesch, F. Thuijsman and K. Vrieze, Cyclic Markov equilibria in stochastic games. Internat. J. Game Theory 26, 303–314 (1997)], who proved that the following vector of strategies $(σ_{A},σ_{B},σ_{C})$ is an equilibrium: for each $k∈N$, $σ_{A}$ assigns probability $\frac{1}{2^{k}}$ to the choice $n_{A}=3k-2$, $σ_{B}$ assigns probability $\frac{1}{2^{k}}$ to $n_{B}=3k-1$, and $σ_{C}$ assigns probability $\frac{1}{2^{k}}$ to $n_{C}=3k$.
Under $(σ_{A},σ_{B},σ_{C})$, with probability 1 the minimum $n_{∗}$ is the choice of exactly one player: $n_{∗}=n_{A}$ with probability $\frac{4}{7}$, $n_{∗}=n_{B}$ with probability $\frac{2}{7}$, and $n_{∗}=n_{C}$ with probability $\frac{1}{7}$. It follows that the vector of expected payoffs under $(σ_{A},σ_{B},σ_{C})$ is

Can a player profit by adopting a strategy different than $σ_{A}$, $σ_{B}$, or $σ_{C}$, assuming the other two stick to their prescribed strategies? It is a bit tedious, but not too difficult, to verify that this is not the case, hence $(σ_{A},σ_{B},σ_{C})$ is indeed an equilibrium.
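The probabilities quoted above can be recovered numerically. Assuming the profile in which Alaya quits in round $3k-2$, Black in round $3k-1$, and Catherine in round $3k$, each with probability $2^{-k}$ for $k=1,2,…$ (consistent with the block characterization of equilibria), a truncated computation returns $4/7$, $2/7$ and $1/7$:

```python
def quit_probability(offset, K=200):
    """P(a given player is the unique minimiser) when Alaya quits in round
    3k-2, Black in 3k-1, Catherine in 3k, each with probability 2**-k.
    offset is 0 for Alaya, 1 for Black, 2 for Catherine; K truncates the sums."""
    total = 0.0
    for k in range(1, K + 1):
        t = 3 * (k - 1) + offset + 1          # this player's k-th quitting round
        p_quit = 2.0 ** -k
        p_others = 1.0                        # P(neither other player quit before t)
        for other in range(3):
            if other == offset:
                continue
            earlier = sum(2.0 ** -j for j in range(1, K + 1)
                          if 3 * (j - 1) + other + 1 < t)
            p_others *= 1.0 - earlier
        total += p_quit * p_others
    return total

pA, pB, pC = (quit_probability(i) for i in range(3))
assert abs(pA - 4 / 7) < 1e-9
assert abs(pB - 2 / 7) < 1e-9
assert abs(pC - 1 / 7) < 1e-9
```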

In fact, Flesch, Thuijsman, and Vrieze [15 J. Flesch, F. Thuijsman and K. Vrieze, Cyclic Markov equilibria in stochastic games. Internat. J. Game Theory 26, 303–314 (1997)] proved that under *all* equilibria of the game, with probability 1 the minimum $n_{∗}$ coincides with the choice of exactly one player.
Moreover, a vector of strategies is an equilibrium if and only if the set $N$ can be partitioned into blocks of consecutive numbers such that, up to circular permutations of the players, the following holds: the support of the strategy of Alaya (which is a probability distribution over $N∪\{∞\}$) is contained in blocks number $1,4,7,…$, and the total probability that $n_{A}$ lies in block $3k-2$ is $\frac{1}{2^{k}}$ (for each $k∈N$); similarly, the support of the strategy of Black (resp. Catherine) is contained in blocks number $2,5,8,…$ (resp. $3,6,9,…$), and the total probability that $n_{B}$ (resp. $n_{C}$) lies in block $3k-1$ (resp. $3k$) is $\frac{1}{2^{k}}$ (for each $k∈N$).

Does an equilibrium exist if the payoffs are not given by the table in Figure 1, but rather by other numbers? Solan [19 E. Solan, The dynamics of the Nash correspondence and n-player stochastic games. Int. Game Theory Rev. 3, 291–299 (2001)] showed that this is not the case. He studied a three-player quitting game that differs from the game of [15 J. Flesch, F. Thuijsman and K. Vrieze, Cyclic Markov equilibria in stochastic games. Internat. J. Game Theory 26, 303–314 (1997)] in three payoffs:

- the payoffs in the entry $n_{∗}=n_{A}=n_{B}<n_{C}$ are $(1+η,0,1)$,
- the payoffs in the entry $n_{∗}=n_{A}=n_{C}<n_{B}$ are $(0,1,1+η)$,
- the payoffs in the entry $n_{∗}=n_{B}=n_{C}<n_{A}$ are $(1,1+η,0)$;

and showed that provided $η$ is sufficiently small, the game has no equilibrium. For example, the strategy vector $(σ_{A},σ_{B},σ_{C})$ described above is no longer an equilibrium, because Catherine is better off selecting $n_{C}=1$ with probability 1, thereby obtaining expected payoff $\frac{1}{2}\cdot 1+\frac{1}{2}\cdot(1+η)=1+\frac{η}{2}$, which is higher than her expected payoff under $(σ_{A},σ_{B},σ_{C})$ (which is still 1).

Yet in Solan’s variation [19 E. Solan, The dynamics of the Nash correspondence and n-player stochastic games. Int. Game Theory Rev. 3, 291–299 (2001)], for every $ε>0$ there is an *$ε$-equilibrium*: a vector of strategies such that no player can profit more than $ε$ by deviating to another strategy, in other words,

$$γ_{A}(σ_{A},σ_{B},σ_{C})\ge γ_{A}(σ_{A}',σ_{B},σ_{C})-ε$$

for every strategy $σ_{A}'$ of Alaya, and analogous inequalities hold for Black and Catherine. Indeed, given a positive integer $m$, consider the following variation of $(σ_{A},σ_{B},σ_{C})$, denoted $(\hat{σ}_{A},\hat{σ}_{B},\hat{σ}_{C})$, where the set $N$ is partitioned into blocks of size $m$: block $k$ contains the integers $\{(k-1)m+1,(k-1)m+2,…,km\}$, for each $k∈N$. $\hat{σ}_{A}$ is the probability distribution that assigns to each integer in block $3k-2$ the probability $\frac{1}{m\cdot 2^{k}}$, for every $k∈N$. Similarly, $\hat{σ}_{B}$ (resp. $\hat{σ}_{C}$) is the probability distribution that assigns to each integer in block $3k-1$ (resp. $3k$) the probability $\frac{1}{m\cdot 2^{k}}$, for every $k∈N$. As mentioned above, the strategy vector $(\hat{σ}_{A},\hat{σ}_{B},\hat{σ}_{C})$ is an equilibrium of the game whose payoff function is given in Figure 1, and one can verify that provided $m\ge\frac{1}{ε}$, it is an $ε$-equilibrium of Solan’s variation [19].

It follows from [18 E. Solan, Three-player absorbing games. Math. Oper. Res. 24, 669–698 (1999)] that an $ε$-equilibrium exists in *every* three-player quitting game, for every $ε>0$, regardless of the payoffs.
One of the most challenging problems in game theory to date is the following.

### 251*

Does an $ε$-equilibrium exist in quitting games that include more than three players, for every $ε>0$?

For partial results, see [21 E. Solan and N. Vieille, Quitting games. Math. Oper. Res. 26, 265–285 (2001); 22 E. Solan and N. Vieille, Quitting games – an example. Internat. J. Game Theory 31, 365–381 (2002); 16 R. S. Simon, The structure of non-zero-sum stochastic games. Adv. in Appl. Math. 38, 1–26 (2007); 17 R. S. Simon, A topological approach to quitting games. Math. Oper. Res. 37, 180–195 (2012); 20 E. Solan and O. N. Solan, Quitting games and linear complementarity problems. Math. Oper. Res. 45, 434–454 (2020); 14 G. Ashkenazi-Golan, I. Krasikov, C. Rainer and E. Solan, Absorption paths and equilibria in quitting games. arXiv:2012.04369 (2021)], which use different tools to study the problem: dynamical systems, algebraic topology, and linear complementarity problems. The open problem is a step in solving several other well-known open problems in game theory: the existence of $ε$-equilibria in stopping games, the existence of uniform equilibria in stochastic games, and the existence of $ε$-equilibria in repeated games with Borel-measurable payoffs.

It is interesting to note that if we defined

then an $ε$-equilibrium need not exist for small $ε>0$. Indeed, with this definition, the three-player game in which the payoff of player $i$ is 1 if $∞>n_{i}=n_{∗}>n_{j}$ for each $j\neq i$, and 0 otherwise, has no $ε$-equilibrium for $ε∈(0,\frac{2}{3})$.

## III Solutions

### 237

We take for our probability space $(X,m)$ the unit interval $X=[0,1]$ equipped with Lebesgue measure $m$ defined on $B(X)$, the Borel subsets of $X$, and let $(X,m,T)$ be an invertible measure preserving transformation, that is, $T:X_{0}→X_{0}$ is a bimeasurable bijection of some Borel set $X_{0}∈B(X)$ of full measure so that $TX_{0}=X_{0}$ and $m(TA)=m(T^{-1}A)=m(A)$ for every $A∈B(X)$.

Suppose also that $T$ is ergodic in the sense that the only $T$-invariant Borel sets have either zero or full measure ($A∈B(X)$, $TA=A⇒m(A)∈\{0,1\}$).

Birkhoff’s ergodic theorem says that for every integrable function $f:X→R$,

$$\frac{1}{n}\sum_{k=0}^{n-1}f∘T^{k}\xrightarrow[n→∞]{}E(f)\quad\text{a.s.}$$
The present exercise is concerned with the possibility of generalizing this. Throughout, $(X,m,T)$ is an arbitrary ergodic, measure preserving transformation as above.

**Warm-up 1****.** Show that if $f:X→R$ is measurable, and

then $\frac{1}{n}\sum_{k=0}^{n-1}f∘T^{k}$ converges in $R$ a.s.

Warm-up 1 is [23 P. Hagelstein, D. Herden and A. Stokolos, A theorem of Besicovitch and a generalization of the Birkhoff ergodic theorem. Proc. Amer. Math. Soc. Ser. B 8, 52–59 (2021), Lemma 1]. For a multidimensional version, see [23, Conjecture 3].

**Warm-up 2****.** Show that if $f:X→R$ is as in Warm-up 1, there exist $g,h:X→R$ measurable with $h$ bounded so that $f=h+g−g∘T$.

Warm-up 2 is established by adapting the proof of [25 D. Volný and B. Weiss, Coboundaries in $L^{∞}_{0}$. Ann. Inst. H. Poincaré Probab. Statist. 40, 771–778 (2004), Theorem A].

**Problem****.** Show that there is a measurable function $f:X→R$ satisfying $E(∣f∣)=∞$ so that

$$\frac{1}{n}\sum_{k=0}^{n-1}f∘T^{k}$$

converges in $R$ a.s.

The existence of such $f$ for a specially constructed ergodic measure preserving transformation is shown in [24 D. Tanny, A zero-one law for stationary sequences. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 30, 139–148 (1974), Example b]. The point here is to prove it for an arbitrary ergodic measure preserving transformation of $(X,m)$.

*Jon Aaronson (Tel Aviv University, Israel)*

#### Solution by the proposer

We’ll fix sequences $ε_{k},M_{k}>0$, $N_{k}∈N$ ($k≥1$). For each $ε,M>0$, $N≥1$, we’ll construct a small coboundary $f_{(ε,M,N)}$. The desired function will be of the form $F:=∑_{k≥1}f_{(ε_{k},M_{k},N_{k})}$ for a suitable choice of $ε_{k},M_{k}>0$, $N_{k}∈N$ ($k≥1$).

To construct $f_{(ε,M,N)}$, choose, using Rokhlin’s lemma, a set $B∈B(X)$ such that $\{T^{k}B:∣k∣\le 2N\}$ are disjoint and $m(A)=ε$ where $A:=\biguplus_{∣k∣\le 2N}T^{k}B$. Let

It follows that

Set $ε_{k}:=\frac{1}{5^{k}}$, $M_{k}=6^{k}$, $N_{k}=7^{k}$, and define $F_{(k)}:=f_{(ε_{k},M_{k},N_{k})}$ as above.

Since

this is a finite sum and so

Proof that $E(∣F∣)=∞$. For each $K≥1$,

and

Next,

whence

It follows that

whence

∎

Proof that $S_{n}F=o(n)$ a.s. There is a function $κ:X→N$ so that for a.e. $x∈X$, $x∈A_{k}$ for all $k\ge κ(x)$. Suppose that $k\ge κ(x)$ and $2N_{k}\le n<2N_{k+1}$; then

and

∎

### 238

Let $(Ω,\mathcal{F},P)$ be a probability space and $\{X_{n}:n\ge 1\}$ be a sequence of independent and identically distributed (i.i.d.) random variables on $Ω$. Assume that there exists a sequence of positive numbers $\{b_{n}:n\ge 1\}$ such that $\frac{b_{n}}{n}\le\frac{b_{n+1}}{n+1}$ for every $n\ge 1$, $\lim_{n→∞}\frac{b_{n}}{n}=∞$, and $\sum_{n=1}^{∞}P(∣X_{n}∣\ge b_{n})<∞$. Prove that, if $S_{n}:=\sum_{j=1}^{n}X_{j}$ for each $n\ge 1$, then

$$\lim_{n→∞}\frac{S_{n}}{b_{n}}=0\quad\text{almost surely.}$$
*Comment.*
The desired statement says that, if such a sequence ${b_{n}:n≥1}$ exists, then ${X_{n}:n≥1}$ satisfies the (generalized) Strong Law of Large Numbers (SLLN) when averaged by ${b_{n}:n≥1}$.
If $X_{n}∈L^{1}(P)$ for every $n\ge 1$, then the desired statement follows trivially from Kolmogorov’s SLLN, since in that case, with probability one,

$$\frac{S_{n}}{n}\xrightarrow[n→∞]{}E[X_{1}],$$

and hence

$$\frac{S_{n}}{b_{n}}=\frac{S_{n}}{n}\cdot\frac{n}{b_{n}}$$

must converge to 0 under the assumptions on $\{b_{n}:n\ge 1\}$. Therefore, the desired statement can be viewed as an alternative to Kolmogorov’s SLLN for i.i.d. random variables that are not integrable.

*Linan Chen (McGill University, Montreal, Quebec, Canada)*

#### Solution by the proposer

As explained above, we will need to prove the desired statement without assuming integrability of the $X_{n}$’s. For every $n\ge 1$, we truncate $X_{n}$ at the level $b_{n}$ by defining $Y_{n}=X_{n}$ if $∣X_{n}∣<b_{n}$, and $Y_{n}=0$ if $∣X_{n}∣\ge b_{n}$. Then, $\{Y_{n}:n\ge 1\}$ is again a sequence of independent random variables. It follows from the assumption on $\{b_{n}:n\ge 1\}$ that

$$\sum_{n=1}^{∞}P(Y_{n}\neq X_{n})=\sum_{n=1}^{∞}P(∣X_{n}∣\ge b_{n})<∞,$$

which, by the Borel–Cantelli lemma, implies that the sequence of the truncated random variables $\{Y_{n}:n\ge 1\}$ is *equivalent* to the original sequence $\{X_{n}:n\ge 1\}$ in the sense that

$$P(Y_{n}\neq X_{n}\ \text{for infinitely many}\ n)=0. \tag{1}$$
Next, by setting $b_{0}=0$, we have that

and hence the assumption on ${b_{n}:n≥1}$ implies that

Our next goal is to establish the desired SLLN statement for $\{Y_{n}:n\ge 1\}$. To be specific, we want to show that if $T_{n}:=\sum_{j=1}^{n}Y_{j}$ for each $n\ge 1$, then $\lim_{n→∞}\frac{T_{n}}{b_{n}}=0$ almost surely. We will achieve this goal in two steps.

Step 1 is to treat the convergence of $\frac{E[T_{n}]}{b_{n}}$. To this end, we derive an upper bound for this term as

Then (2) implies that

which, by Kronecker’s lemma, leads to

Hence, we conclude that $\lim_{n→∞}\frac{E[∣T_{n}∣]}{b_{n}}=0$.

Step 2 is to establish the convergence of $\frac{T_{n}-E[T_{n}]}{b_{n}}$, for which we will use a martingale convergence argument. We note that if

$$M_{n}:=\sum_{j=1}^{n}\frac{Y_{j}-E[Y_{j}]}{b_{j}}$$
for each $n≥1$, then ${M_{n}:n≥1}$ is a martingale (with respect to the natural filtration) and for each $n≥1$,

where the second last inequality follows from the assumption that $\frac{b_{n}}{n}$ is increasing in $n$, and the last inequality is due to the fact that there exists a constant $C>0$ such that $\sum_{j=k}^{∞}\frac{1}{j^{2}}\le\frac{C}{k}$ for every $k\ge 1$.
Hence, (2) implies that $\{M_{n}:n\ge 1\}$ is bounded in $L^{2}(P)$.
A standard martingale convergence result implies that $\lim_{n→∞}M_{n}$ exists in $R$ almost surely (one can also use Kolmogorov’s maximal inequality to prove the almost sure existence of the limit of $M_{n}$), which, by Kronecker’s lemma again, leads to

$$\lim_{n→∞}\frac{T_{n}-E[T_{n}]}{b_{n}}=0\quad\text{almost surely.}$$
Finally, we write $\frac{S_{n}}{b_{n}}$ as

$$\frac{S_{n}}{b_{n}}=\frac{S_{n}-T_{n}}{b_{n}}+\frac{T_{n}-E[T_{n}]}{b_{n}}+\frac{E[T_{n}]}{b_{n}},$$
where the last two terms have been proven to converge to 0 almost surely, and (1) implies that, with probability one, the limit of the first term is also 0. We have completed the proof.

### 239

In Beetown, the bees have a strict rule: all clubs must have exactly $k$ members. Clubs are not necessarily disjoint. Let $b(k)$ be the smallest number of clubs that the $n\ge k^{2}$ bees can form, such that no matter how they divide themselves into two teams to play beeball, there will always be a club all of whose membees are on the same team. Prove that

$$2^{k-1}\le b(k)\le Ck^{2}2^{k}$$

for some constant $C>0$.
for some constant $C>0$.

*Rob Morris (IMPA, Rio de Janeiro, Brasil)*

#### Solution by the proposer

This is an old result of Erdős, and a classic application of the probabilistic method. Let us think of the two teams as being red and blue, so that a club is ‘monochromatic’ if all of its membees are on the same team.

First, for the lower bound, we need to show that if $m<2^{k-1}$, then for any collection of $m$ clubs there exists a colouring with no monochromatic club. To do so, we choose the teams randomly, and observe that the expected number of monochromatic clubs is less than $1$. To be precise, let $\Pr(b\ \text{is red})=\frac{1}{2}$, independently for each bee $b$, and let $S$ count the number of monochromatic clubs. Then, by linearity of expectation, $E[S]=m\cdot 2^{-k+1}<1$, since each club is monochromatic with probability exactly $2^{-k+1}$. But this implies that $\Pr(S=0)>0$, so there exists a colouring with no monochromatic club, as required.

For the upper bound, we choose the clubs randomly. To be precise, choose $N=k^{2}$ bees, and choose each club uniformly and independently from the $k$-subsets of these $N$ bees. The idea is that, for any colouring of the bees, the expected number of monochromatic clubs is at least $k^{2}$, so the probability of having no monochromatic club should be at most $e^{-k^{2}}$. Since there are $2^{k^{2}}$ colourings of these bees, the expected number of colourings with no monochromatic clubs is less than $1$, so there must exist a choice for which it is zero.

To spell out the details, fix a colouring, and suppose that $x$ of the $N$ chosen bees are red. The probability that a random club is monochromatic is

$$\frac{\binom{x}{k}+\binom{N-x}{k}}{\binom{N}{k}}\ge\frac{2\binom{N/2}{k}}{\binom{N}{k}}\ge 2^{-k-c}$$

for some constant $c>0$, where in the final inequality we used the fact that $N\ge k^{2}$.

Now, let $T$ count the number of colourings of the $N$ bees with no monochromatic club, and observe that if there are $m=k^{2}2^{k+c}$ clubs, then

$$E[T]\le 2^{k^{2}}\bigl(1-2^{-k-c}\bigr)^{m}\le 2^{k^{2}}e^{-k^{2}}<1.$$

It follows that there exists a choice of $m$ clubs such that $T=0$, as required.

### 240

$N$ agents are in a room with a server, and each agent is looking to get served, at which point the agent leaves the room. At any discrete time step, each agent may choose to either shout or stay quiet, and an agent gets served in that round if (and only if) that agent is the only one to have shouted. The agents are indistinguishable to each other at the start, but at each subsequent step, every agent gets to see who has shouted and who has not. If all the agents are required to use the same randomised strategy, show that the minimum time to clear the room in expectation is $N+(2+o(1))\log_{2}N$.

*Bhargav Narayanan (Rutgers University, Piscataway, USA)*

#### Solution by the proposer

Here is a simple strategy that works in expected time $N+(2+o(1))\log_{2}N$. The agents all toss independent fair coins to decide whether to shout or not in each of the first $k=(2+o(1))\log_{2}N$ rounds. It is easy to see that with high probability, after these $k$ rounds, every agent (still in the room) has a unique ‘history’, i.e. no two agents have the exact same sequence of turns (shouting/staying quiet). Now the agents are all distinguishable, and we are done in $N$ more steps; for example, the agents can interpret each other’s histories as numbers in binary, and get served in increasing order. Below, we show that no strategy can do significantly better.
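The first phase of this strategy is easy to simulate; the sketch below (with a fixed seed for reproducibility, an assumption of ours) checks that with somewhat more than $2\log_{2}N$ coin-flip rounds, all $N$ histories come out distinct:

```python
import random

def all_histories_distinct(N, rounds, seed=0):
    """Toss `rounds` fair coins for each of N agents and report whether all
    resulting shout/stay-quiet histories are distinct."""
    rng = random.Random(seed)
    histories = {tuple(rng.randrange(2) for _ in range(rounds)) for _ in range(N)}
    return len(histories) == N

# For N = 100 agents, 2*log2(N) is about 13.3; with 60 rounds a collision
# has probability roughly N**2 / 2**61, so distinct histories are expected.
assert all_histories_distinct(N=100, rounds=60, seed=1)
```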

At any time, we can partition all the agents into clusters based on their history so far: two agents go into the same cluster if they have chosen to do the same thing in all previous rounds. By the requirement that the agents all have the same randomised strategy, we know that at any time, all the agents in the same cluster must have the same strategy. Let $X$ be the number of times an agent from a cluster of size at least 2 gets served and leaves the room, and let $Y$ be the number of times either

- exactly two agents from the same cluster, and nobody else, ask to be served, or
- nobody asks to be served at all.

An easy computation shows that

for all $m>1$ and any $0≤p≤1$; consequently, it is easy to see that $Y$ stochastically dominates $X$. So, if for some strategy,

then the expected time to clear the room, which is at least $N+Y$, is at least $N+2\log_{2}N$ in expectation. So we may assume that $X<2(\log_{2}N)^{2}$ with high probability for any strategy under consideration.

Let $S$ be the set of agents who leave the room only when they belong to their own singleton cluster. As we just observed, the number of such agents $∣S∣=M=N−X$ may be assumed to be at least $N-2(\log_{2}N)^{2}$. The key observation is this: if someone leaves the room in a particular step, the cluster structure of $S$ does not change in that step. To see this, note that when an agent not from $S$ leaves the room, that agent shouts and everyone in $S$ does not, so there is no change to the cluster structure of $S$. On the other hand, when an agent from $S$ leaves the room, that agent is, by definition, already in their own singleton cluster, and every other agent in $S$ does not shout in this step; again, there is no change in the cluster structure of $S$.

But we know that at the end of the process, which let us say takes $N+Δ$ rounds, $S$ has been split from a single cluster into $M$ singleton clusters. Nothing changes in the cluster structure of $S$ in the $N$ rounds when someone leaves the room, so $S$ gets broken down into singleton clusters in the remaining $Δ$ steps.

Consider these