matter where player I hides, at least one line from the cover will find him, so he is found with probability at least $1/k$. Thus König's lemma shows that this is, in fact, a joint optimal strategy, and that the value of the game is $k^{-1}$, where $k$ is the size of the maximal set of independent 1's.
2.7 General hide-and-seek games
We now analyze a more general version of the game of hide-and-seek.
Example 2.7.1 (Generalized Hide-and-seek). A matrix of values $(b_{i,j})_{n \times n}$ is given. Player II chooses a location $(i,j)$ at which to hide. Player I chooses a row or a column of the matrix. He wins a payment of $b_{i,j}$ if the line he has chosen contains the hiding place of his opponent.
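To make the setup concrete, here is a minimal sketch (in Python, with an illustrative $3 \times 3$ matrix of values that is not taken from the text) of the payoff matrix of the induced zero-sum game, in which player I's pure strategies are the $2n$ lines and player II's are the $n^2$ hiding places.

import numpy as np

# Illustrative matrix of values b_{i,j} (an assumption, not from the text).
b = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 5.0, 1.0]])
n = b.shape[0]

# Payoff to player I: b_{i,j} if his chosen line contains the hiding place (i, j),
# and 0 otherwise.  Rows 0..n-1 encode "search row i", rows n..2n-1 encode
# "search column j"; column index i*n + j encodes the hiding place (i, j).
A = np.zeros((2 * n, n * n))
for i in range(n):
    for j in range(n):
        A[i, i * n + j] = b[i, j]        # searching row i finds a player hidden at (i, j)
        A[n + j, i * n + j] = b[i, j]    # searching column j also finds (i, j)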
First, we propose a strategy for player II, later checking that it is optimal.
Player II first chooses a fixed permutation $\pi$ of the set $\{1,\ldots,n\}$ and then hides at location $(i,\pi_i)$ with a probability $p_i$ that he chooses. For example, if $n = 5$ and the fixed permutation $\pi$ is $3,1,4,2,5$, then the following matrix gives the probability of player II hiding in different places:
\[
\begin{pmatrix}
0 & 0 & p_1 & 0 & 0 \\
p_2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & p_3 & 0 \\
0 & p_4 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p_5
\end{pmatrix}
\]
Given a permutation $\pi$, the optimal choice for $p_i$ is $p_i = d_{i,\pi_i}/D_\pi$, where $d_{i,j} = b_{i,j}^{-1}$ and $D_\pi = \sum_{i=1}^{n} d_{i,\pi_i}$, because it is this choice that equalizes the expected payments.
Against this fixed strategy, player I may choose to select
row $i$ (for an expected payoff of $p_i b_{i,\pi(i)}$) or column $j$ (for an expected payoff of $p_{\pi^{-1}(j)} b_{\pi^{-1}(j),j}$), so the expected payoff of the game is then
\[
\max\Bigl( \max_i \, p_i b_{i,\pi(i)}, \; \max_j \, p_{\pi^{-1}(j)} b_{\pi^{-1}(j),j} \Bigr)
= \max\Bigl( \max_i \frac{1}{D_\pi}, \; \max_j \frac{1}{D_\pi} \Bigr)
= \frac{1}{D_\pi}.
\]
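(This equalization is easy to check numerically. The following sketch, with an illustrative matrix and permutation rather than ones from the text, computes the $p_i$ and confirms that every row and every column gives player I the same expected payment $1/D_\pi$.)

import numpy as np

b = np.array([[1.0, 2.0, 3.0],      # illustrative b_{i,j}, an assumption
              [2.0, 1.0, 4.0],
              [3.0, 5.0, 1.0]])
n = b.shape[0]
perm = [2, 0, 1]                     # an illustrative permutation pi (0-indexed)

d = 1.0 / b                                          # d_{i,j} = b_{i,j}^{-1}
D_pi = sum(d[i, perm[i]] for i in range(n))          # D_pi = sum_i d_{i, pi(i)}
p = [d[i, perm[i]] / D_pi for i in range(n)]         # optimal hiding probabilities

inv = {perm[i]: i for i in range(n)}                 # pi^{-1}
row_payoffs = [p[i] * b[i, perm[i]] for i in range(n)]
col_payoffs = [p[inv[j]] * b[inv[j], j] for j in range(n)]
print(row_payoffs, col_payoffs, 1.0 / D_pi)          # all entries equal 1/D_pi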
Thus, if player II is going to use a strategy that consists of picking a permutation $\pi^*$ and then doing as described, the right permutation to pick is one that maximizes $D_\pi$. We will in fact show that doing this is an optimal strategy, not just in the restricted class of those involving permutations in this way, but over all possible strategies.
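For small $n$, a maximizing permutation can be found by brute force; a minimal sketch, again with the illustrative matrix used above:

import numpy as np
from itertools import permutations

b = np.array([[1.0, 2.0, 3.0],      # illustrative b_{i,j}, an assumption
              [2.0, 1.0, 4.0],
              [3.0, 5.0, 1.0]])
d = 1.0 / b
n = d.shape[0]

# Player II picks the permutation maximizing D_pi, since the resulting
# expected payment to player I is 1 / D_pi.
best_perm = max(permutations(range(n)),
                key=lambda pi: sum(d[i, pi[i]] for i in range(n)))
best_D = sum(d[i, best_perm[i]] for i in range(n))
print(best_perm, 1.0 / best_D)       # proposed value of the game

For larger $n$, maximizing $D_\pi$ is an assignment problem, which can be solved in polynomial time, for instance with scipy.optimize.linear_sum_assignment applied to $d$ with maximize=True.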
To find an optimal strategy for player I, we need an analogue of König's lemma. In this context, a covering of the matrix $D = (d_{i,j})_{n \times n}$ will be a pair of vectors $\mathbf{u} = (u_1,\ldots,u_n)$ and $\mathbf{w} = (w_1,\ldots,w_n)$, with non-negative components, such that $u_i + w_j \geq d_{i,j}$ for each pair $(i,j)$. The analogue of König's lemma is
Lemma 2.7.1. Consider a minimal covering $(\mathbf{u}^*, \mathbf{w}^*)$ (i.e., one for which $\sum_{i=1}^{n}(u_i + w_i)$ is minimal). Then
\[
\sum_{i=1}^{n} \bigl( u_i^* + w_i^* \bigr) = \max_{\pi} D_\pi. \tag{2.5}
\]
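As an aside, a minimal covering can be computed directly as a linear program. The sketch below (Python with scipy, reusing the illustrative matrix from above) is only an illustration; by the lemma, the optimal objective value equals $\max_\pi D_\pi$.

import numpy as np
from scipy.optimize import linprog

d = 1.0 / np.array([[1.0, 2.0, 3.0],    # illustrative d_{i,j} = b_{i,j}^{-1}
                    [2.0, 1.0, 4.0],
                    [3.0, 5.0, 1.0]])
n = d.shape[0]

# Variables x = (u_1..u_n, w_1..w_n); minimize sum(u) + sum(w)
# subject to u_i + w_j >= d_{i,j} (written as -u_i - w_j <= -d_{i,j}) and u, w >= 0.
c = np.ones(2 * n)
A_ub = np.zeros((n * n, 2 * n))
b_ub = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        row = i * n + j
        A_ub[row, i] = -1.0          # coefficient of u_i
        A_ub[row, n + j] = -1.0      # coefficient of w_j
        b_ub[row] = -d[i, j]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * n))
u_star, w_star = res.x[:n], res.x[n:]
print(res.fun)                       # equals max_pi D_pi by Lemma 2.7.1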
Proof. Note that a minimal covering exists, because the map
\[
(\mathbf{u}, \mathbf{w}) \mapsto \sum_{i=1}^{n} (u_i + w_i),
\]
defined on the closed and bounded set
\[
\bigl\{ (\mathbf{u}, \mathbf{w}) : 0 \leq u_i, w_i \leq M, \text{ and } u_i + w_j \geq d_{i,j} \bigr\},
\]
where $M = \max_{i,j} d_{i,j}$, does indeed attain its infimum.
Note also that we may assume that $\min_i u_i^* > 0$.