
A well-known (non-)paradox in probability involves a two-envelope game played between two players, $A$ and $B$:

  1. $A$ selects two distinct (real) numbers, $x$ and $y$, writing each one down on a card and sealing each card in an envelope, then presenting the two envelopes to $B$.
  2. $B$ chooses one of the envelopes and looks at the card inside. They then guess whether the number on the card they've chosen is the larger or smaller of the two numbers.

The 'paradox' here is that regardless of $A$'s scheme for choosing numbers — and even if $A$ knows $B$'s strategy in advance — there's a strategy for $B$ that will achieve a better-than-even success rate in the long run: choose an envelope at random, then map the number in it onto the interval $(0,1)$ using some (arbitrary) monotonic function. Choose a random deviate $U\in(0,1)$, and then guess 'higher' or 'lower' according to whether (the mapping of) the number looked at is higher or lower than the generated random deviate.
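
For illustration, here is a small simulation sketch of that strategy in Python; the logistic map and the Gaussian scheme for $A$'s numbers are arbitrary choices of mine, not part of the setup:

```python
import math
import random

def play_round(x, y, rng):
    """One round: B opens a random envelope and applies the randomized threshold rule."""
    z, other = (x, y) if rng.random() < 0.5 else (y, x)
    m = 1 / (1 + math.exp(-z))        # an arbitrary monotonic map of the reals onto (0,1)
    guess_higher = m >= rng.random()  # guess 'higher' iff the mapped value beats the deviate
    return guess_higher == (z > other)

rng = random.Random(0)
# A's scheme can be anything; two independent Gaussians are used here purely for illustration.
wins = sum(play_round(rng.gauss(0, 1), rng.gauss(0, 1), rng) for _ in range(100_000))
print(wins / 100_000)  # strictly above 0.5
```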

I'll skip the analysis of this strategy here (see Do better than chance or Who discovered this number-guessing paradox? for more details), but note that it explicitly relies on having a source of random deviates. My question is whether this is necessary for $B$ to have the advantage. More specifically, consider the following variant of the game:

  1. $B$ chooses computable functions $f():\mathbb{N}\to\{0,1\}$ and $g():\mathbb{N}\to\mathbb{Q}\cap(0,1)$. Note that $A$ knows nothing about these functions, other than that they are computable.
  2. For each integer $n$, in turn:
    • $A$ selects two distinct real numbers $x,y\in(0,1)$, writing each down on a card and presenting them in sealed envelopes. (I'm restricting the numbers here to eliminate the mathematically-moot mapping step.)
    • $B$ computes $f(n)$; if $f(n)=0$ then $B$ chooses $x$, and if $f(n)=1$ then $B$ chooses $y$. Call $B$'s chosen number $z$.
    • $B$ computes $g(n)$; if $g(n)\leq z$ then $B$ guesses 'higher', otherwise $B$ guesses 'lower'.

Note that this is essentially $B$ following the strategy in the usual version of the game, except that $B$ is following a computable strategy rather than a purely randomized one.
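
For concreteness, here is one (entirely arbitrary) computable choice of $f()$ and $g()$ and the resulting play for round $n$; nothing about these particular functions comes from the problem statement:

```python
from fractions import Fraction

def f(n: int) -> int:
    """An arbitrary computable envelope choice: alternate between the two envelopes."""
    return n % 2

def g(n: int) -> Fraction:
    """An arbitrary computable threshold in Q ∩ (0,1): enumerate 1/2, 1/3, 2/3, 1/4, 2/4, 3/4, ...
    (repeats are harmless; every value lies strictly between 0 and 1)."""
    d = 2
    while n >= d - 1:
        n -= d - 1
        d += 1
    return Fraction(n + 1, d)

def b_plays_round(n: int, envelopes) -> str:
    """B's move in round n: open the envelope selected by f(n), then guess via the threshold g(n)."""
    z = envelopes[f(n)]                       # the number B looks at
    return 'higher' if g(n) <= z else 'lower'
```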

Can $A$ win this game in the long run?

Since $A$ doesn't know what computable strategy $B$ is following, they'll clearly have to take some dovetailing approach; instinctively it feels like randomness is inherent in $B$'s ability to win with the usual strategy and that $A$, with the knowledge that $B$'s strategy is actually computable, should be able to 'game the system' and win. Unfortunately, I can't see a clear proof here (and I wouldn't be entirely shocked to learn that I'm wrong). Is anything known about this problem?

EDIT: to clarify, I should point out that unlike $B$, the strategy that $A$ follows does not have to be computable; $A$ can, for instance, take advantage of an oracle that enumerates the total computable functions. For example, this ensures that $A$ can guarantee they'll win at least once: enumerate all possible pairs $\langle f_n(), g_n()\rangle$ of recursive functions and in round $i$ behave as though $B$'s selections for the round will be $f_i(i)$ and $g_i(i)$ (by choosing values that will win given that these are $B$'s selections). Also, I suspect $f()$ is actually superfluous and that we can ask the analogous question for the strategy where $B$ always guesses $x$, but if the presence or absence of $f()$ does matter then it'd be interesting to know that too.
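
To make the 'win at least once' construction concrete, here is a rough sketch; `total_recursive_pairs` stands in for the oracle enumeration (the finite list below is a purely hypothetical placeholder, since the real enumeration is not computable), and the particular numbers chosen are just one way to beat the assumed selections:

```python
from fractions import Fraction

# Hypothetical stand-in for an oracle enumerating all pairs <f_i(), g_i()> of
# total recursive functions; only a finite illustrative prefix is shown.
total_recursive_pairs = [
    (lambda n: 0,     lambda n: Fraction(1, 2)),
    (lambda n: n % 2, lambda n: Fraction(1, n + 2)),
    # ... (the real enumeration is infinite and only oracle-accessible)
]

def a_plays_round(i: int):
    """In round i, A assumes B's selections will be f_i(i) and g_i(i) and picks
    x, y in (0,1) so that B loses whenever that assumption is correct."""
    f_i, g_i = total_recursive_pairs[i]
    envelope, threshold = f_i(i), g_i(i)
    z = threshold / 2      # strictly below B's assumed threshold, so B will guess 'lower'
    other = threshold / 4  # strictly below z, so z is in fact the larger number
    # Put z in whichever envelope B will open according to f_i(i).
    return (z, other) if envelope == 0 else (other, z)
```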

  • Why don't you restrict to rationals instead of reals? I don't think it affects the intrinsic problem, and would make it even more interesting since we can handle rationals as finite strings. My intuition is that B can still win by choosing a random computable strategy. If you don't allow that, how do you expect B to choose? If B chooses deterministically, A would know B's strategy and can win every round. – user21820 Aug 05 '15 at 04:21
  • Unless you are asking whether there is a strategy for A that guarantees winning no matter what computable strategy B uses, in which case I think it depends on your axiomatic framework. You may need oracles, as you said. – user21820 Aug 05 '15 at 04:23

2 Answers


Your question is too ambiguous. I assume that you are asking for a strategy for A that guarantees winning in the long run no matter what strategy B chooses, this strategy being unknown to A; but even in that case it is not clear what counts as winning. In this answer I assume that you want the limit inferior of the ratio of wins to games to be greater than one half.
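
In symbols, writing $W_N$ for the number of wins among the first $N$ games (a notation introduced here purely for clarity), the requirement is
$$\liminf_{N\to\infty}\frac{W_N}{N}>\frac12.$$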

(1) A plays the same against any B: A cannot guarantee a win!

If A's strategy does not depend on the game history, then A certainly has no winning strategy because for any strategy for A that wins against a strategy for B, the opposite strategy for B (pick the same way but guess the opposite) wins against that strategy for A. This applies even if A's strategy is not computable. However, B may have a better strategy if A is restricted, as we shall see below.

(2) A and B play computably: Neither can guarantee a win

If A has a computable strategy, we can trivially construct a strategy for B that always beats A, simply by simulating A's strategy at each move and playing against it. Similarly, if A knows B's strategy, then A can always win. Thus there is no single strategy that either side can use to guarantee a win, just as in Scissors-Paper-Stone. Note that this applies even if the opponent does not remember the game history, since the counter-example strategy does not need to.
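
For concreteness, a minimal sketch of the simulation argument, where `a_strategy` is a hypothetical callable representing A's computable strategy (not anything defined in the question):

```python
def b_counter_strategy(a_strategy, history, n):
    """If A's strategy is a known computable function, B can simulate it to learn
    this round's pair (x, y) and then answer correctly every time."""
    x, y = a_strategy(history, n)            # simulate A's move for round n
    z = x                                    # open (say) the first envelope
    return 'higher' if z > y else 'lower'    # always correct, since B knows both numbers
```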

(3) A can use the halting oracle and B plays computably: A wins!

For the $k$-th move, A runs every computable function that halts on the entire game history as input. If one of them outputs a sequence of length $k$ whose first $(k-1)$ entries match B's previous moves, then A plays against the $k$-th entry of that sequence for the current move. If more than one such function qualifies, A takes the one with the smallest description. If B is playing by some computable function, then A will eventually settle on an equivalent function with the smallest description (guaranteed to exist by induction), because eventually no function with a smaller description satisfies the criterion. From that point onwards A wins every game!
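
A rough sketch of that oracle-assisted move; `programs`, `halts`, `run`, and `beat` are hypothetical stand-ins for the enumeration of descriptions, the halting oracle, the simulator, and the 'choose a pair that defeats this move' step respectively:

```python
def a_move_with_halting_oracle(b_moves, programs, halts, run, beat):
    """b_moves: the list of B's previous k-1 moves.  programs: descriptions in
    order of increasing size.  halts(p, h): the halting oracle.  run(p, h): simulate
    p on history h, returning a sequence of moves.  beat(move): a pair (x, y) that
    defeats the predicted move."""
    k = len(b_moves) + 1
    for p in programs:                          # smallest description first
        if not halts(p, b_moves):               # oracle call: skip non-halting programs
            continue
        prediction = run(p, b_moves)            # p's proposed length-k move sequence
        if len(prediction) == k and list(prediction[:-1]) == list(b_moves):
            return beat(prediction[-1])         # consistent with B's history: play against it
    return beat(None)                           # only reachable for a finite stub enumeration
```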

(3') B can use the halting oracle and A plays computably: B wins!

Same argument as in (3).

user21820
  • @Steven Stadnicki: Sorry, I made a mistake in my previous answer. It is impossible for A to computably defeat B no matter what B's strategy is, even if B uses a computable strategy that does not depend on the game history. This was obvious from another of my previous proofs, but I only just realized it. So the only way A can guarantee a win is to use an oracle, and with the halting oracle A can guarantee always winning from some (unknown) point onwards, as I proved earlier, now (3). – user21820 Aug 09 '15 at 11:10

Functions $f$ and $g$ are total recursive functions. For player $B$ to be able to game the system, the main problem is that the set of total recursive functions is not recursively enumerable. Hence he cannot simply enumerate them (which would be the only way) until he finds functions that match the values already given, knowing that this process would ultimately give him the real functions $f$ and $g$ used by player $A$.

But in practice, player $B$ can observe the time player $A$ takes to give his answers, and from that estimate the time (or, more precisely, the number of computation steps) needed to compute $f$ and $g$. Knowing that, he can find the smallest program that matches the values of $f$ and $g$ already known within the (estimated) computation time. This process ultimately converges to the real $f$ and $g$ and leads $B$ to ultimate victory.
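
A minimal sketch of that bounded-step search; `programs` and `run_for_steps` are hypothetical helpers (the only idea taken from the answer is searching by description size under an estimated step bound):

```python
def smallest_consistent_program(observed, programs, run_for_steps, step_bound):
    """Return the first program, in order of description size, that reproduces every
    observed value within `step_bound` simulation steps; `run_for_steps(p, n, steps)`
    yields the program's output on input n, or None if it hasn't halted in time."""
    for p in programs:
        if all(run_for_steps(p, n, step_bound) == value
               for n, value in enumerate(observed)):
            return p        # current best guess for the opponent's function
    return None             # nothing of the sizes tried fits within the step bound
```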

Xoff
  • I think you have the roles of $A$ and $B$ reversed. :-) That said, there are a couple of issues with this; most specifically, that the strategy for $A$ (the number-giver) doesn't (unlike $B$'s) have to be computable; we can assume that $A$ has an oracle that enumerates the total recursive functions. (I'll make this clearer in the problem statement itself.) In particular, this means that $A$ can ensure that they win at least once. – Steven Stadnicki Apr 22 '15 at 19:06