When Expectations Fail

I’ve seen three examples of the same strange thing happening over the past few months, and I wanted to compile them here. The basic set-up is this: in an optimal stopping problem, we usually have some kind of stochastic process, say on a state space {S}, and for each state {x\in S}, we take {f(x)} to be the payoff from stopping at {x} and {u(x)} to be the expected payoff from continuing from {x}. Then, {v(x)}, the “value” of state {x}, should be {v(x)=\max\{u(x), f(x)\}}. Between stopping and continuing, we pick whichever is better. If they’re the same, it doesn’t matter which one we choose.
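Since everything below revolves around this recursion, here is a minimal sketch of it in Python, on a toy problem that isn't from this post: roll a fair die up to {n} times, stop whenever you like, and your payoff is the face showing when you stop. Backward induction computes {v} by taking {u} with {k} rolls remaining to be the value with {k-1} rolls remaining. (The function name and setup are mine, just for illustration.)

```python
def die_values(n_rolls: int) -> list[float]:
    """values[k] = value of the dice game with k rolls remaining."""
    values = [0.0] * (n_rolls + 1)  # values[0] = 0: no rolls left, no payoff
    for k in range(1, n_rolls + 1):
        u = values[k - 1]  # expected payoff from continuing (rerolling)
        # v = max(f, u), averaged over the six equally likely faces:
        values[k] = sum(max(face, u) for face in range(1, 7)) / 6
    return values

print(die_values(3))  # [0.0, 3.5, 4.25, 4.666...]
```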

This formalism is really just a way of writing down how I think most people approach the optimal stopping problems they meet in the real world: am I better off stopping or not? But the weird thing I’ve seen is that this strategy can give completely counterintuitive and incorrect results. The first time I saw this, I brushed it off as a weird edge case, but then I saw two more such examples.

This is weirdly unsettling, because expected value is (maybe implicitly) held up as this great tool for thinking about how to play little games like this and make decisions, but it clearly has some serious drawbacks that I hadn’t been fully aware of until I saw these examples.

1. Three Examples

Example 1. Consider the secretary problem (specifically, what’s called the cardinal payoff variant), where we see a sequence of {n} applicants for a position, each applicant has an associated value measuring their quality, and our payoff is the quality of the applicant we accept. We must decide whether to accept or reject each applicant immediately after seeing them, and cannot call them back.

If the qualities of the applicants are drawn from a uniform distribution, then it can be shown that the best strategy is to let the first {\sqrt{n}} candidates pass and then accept the next candidate who is better than everyone seen so far. But if the qualities are drawn from the negative half of the Cauchy distribution, that is, with PDF

\displaystyle P(\text{Quality}=x)=\frac{2}{\pi}\cdot \frac{1}{1+x^2},\quad x\leq 0,

then strange things happen. In particular, we can check that

\displaystyle E(\text{Quality})=\frac{2}{\pi}\int_{-\infty}^0 \frac{x}{1+x^2}\, dx=-\infty,

so the expected quality of any applicant is negative infinity! This means that when we see the first applicant, the expected payoff from continuing is negative infinity, so we should always accept the first candidate we see. But the expected payoff from using this strategy is the expected quality of the first candidate, which is still negative infinity!
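To see this heavy tail concretely, here is a quick simulation sketch (the inverse-CDF sampler below is a standard way to draw Cauchy variates; everything else is illustrative): the running sample mean of the qualities never settles, and keeps drifting downward.

```python
import math
import random

def neg_half_cauchy() -> float:
    # If U ~ Uniform(0, 1), then tan(pi * (U - 1/2)) is standard Cauchy,
    # so -|tan(pi * (U - 1/2))| has density (2/pi) / (1 + x^2) on x <= 0.
    return -abs(math.tan(math.pi * (random.random() - 0.5)))

for n in (10**3, 10**4, 10**5, 10**6):
    sample_mean = sum(neg_half_cauchy() for _ in range(n)) / n
    print(f"n = {n:>7}: sample mean = {sample_mean:8.1f}")  # keeps drifting down
```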

Example 2. Consider a simple random walk on the nonnegative integers starting at some {N>0}, with an absorbing boundary at zero, i.e., once you reach zero, you stop walking. If the payoff for stopping at {n} is {f(n)=n^2}, then

\displaystyle E(f(X_{n+1})\mid X_n=k)=\frac{1}{2}(k+1)^2+\frac{1}{2}(k-1)^2=k^2+1>f(k),

so the expected payoff from continuing is always greater than the payoff from stopping. But the walk is recurrent, so if we continue walking for long enough, we are guaranteed to reach 0 and win nothing.
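Here is a small simulation of what blindly following the one-step rule does in practice. (A sketch: the step cap is mine, only to guarantee the program terminates, since the absorption time is finite almost surely but has infinite expectation.)

```python
import random

def play_greedy(start: int, max_steps: int = 10**7) -> int | None:
    """Follow the one-step rule to the letter: continuing always looks
    better, so never stop voluntarily. Returns the payoff, or None if
    we hit the step cap before being absorbed."""
    position = start
    for _ in range(max_steps):
        if position == 0:
            return 0  # absorbed at the boundary: payoff f(0) = 0
        position += random.choice((-1, 1))
    return None  # almost never happens, but keeps the program finite

results = [play_greedy(10) for _ in range(50)]
print(results.count(0), "of", len(results), "runs ended with payoff 0")
```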

Example 3. Consider a series of double-or-nothing bets on flips of a biased coin that comes up heads with probability {p>\frac{1}{2}}. The expected value of taking the bet when you have {N} dollars is

\displaystyle p(2N)+(1-p)0=2pN>N,

so we should always take the bet. But the coin will eventually come up tails with probability 1, and we will lose everything.
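A simulation sketch of the “always take the bet” strategy (with {p=0.6} chosen arbitrarily) makes the point bluntly:

```python
import random

def play_until_ruin(stake: float, p: float = 0.6) -> float:
    """Take the double-or-nothing bet forever, as the one-step
    expectation 2pN > N recommends. Tails (probability 1 - p) ends
    the game with nothing, and tails eventually comes up."""
    while random.random() < p:  # heads: double the stake, keep going
        stake *= 2
    return 0.0  # the first tails wipes out everything

print([play_until_ruin(100.0) for _ in range(10)])  # all zeros
```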

(Credits: I saw the first example in a Facebook post by Kevin Dong, the second on a problem set for my stochastic processes class, and the third in a conversation with a friend.)

What’s going on? How did this seemingly foolproof plan of just doing the thing that maximized our expectation go so awry?

2. The Secretary Problem

For the secretary problem, the apparent issue is that infinite expectations just give meaningless results. It’s not too bad when you’re trying to compare an infinite expectation with a finite one, because then you can at least say something like “we should do anything to avoid an option with infinitely negative expected value,” but the example here is trying to compare two infinite expectations (the expectation from stopping after one candidate and the expectation from stopping after two candidates), and it all falls apart. The only consolation is that this expected value will never be attained: the first candidate can only be finitely bad, after all.

3. The Random Walk

For the random walk, I think a small computation is helpful. Let’s say we adopt a strategy where we set two markers, say at {a<N<b}, and stop as soon as we reach either one. To compute the expected payoff of such a strategy, let {X_1, X_2,\cdots} be i.i.d. random variables taking the values {\pm 1} with probability {\frac{1}{2}} each. Then our simple random walk is given by the recurrence {S_n=S_{n-1}+X_n} with {S_0=N}.

We can check that if {\mathcal{F}_n} is the {\sigma}-algebra generated by {X_1,\cdots, X_n}, then {S_n} is measurable with respect to {\mathcal{F}_n}, and furthermore,

\displaystyle E(S_{n+1}\mid \mathcal{F}_n)=E(S_{n}+X_{n+1}\mid \mathcal{F}_n)=E(S_n\mid \mathcal{F}_n)+E(X_{n+1}\mid \mathcal{F}_n)=S_n+0=S_n,

so {S_n} is a martingale with respect to {\mathcal{F}_n}. Now, if we let

\displaystyle T=\min\{n\mid S_n=a\text{ or }S_n=b\},

the optional stopping theorem (which applies here, since the stopped walk is bounded between {a} and {b}) gives us {E(S_T)=E(S_0)=N}. Then,

\displaystyle P(S_T=a)a+(1-P(S_T=a))b=N\implies P(S_T=a)=\frac{N-b}{a-b},\quad P(S_T=b)=\frac{a-N}{a-b}.

With this in mind, our expected payoff is

\displaystyle E(a,b)=a^2\frac{N-b}{a-b}+b^2\frac{a-N}{a-b}=\frac{(a^2-b^2)N+ab(b-a)}{a-b}=N(a+b)-ab.
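As a sanity check, a quick Monte Carlo sketch (the particular {a}, {b}, {N} below are arbitrary) agrees with this formula:

```python
import random

def expected_payoff_mc(a: int, b: int, start: int, trials: int = 20000) -> float:
    """Monte Carlo estimate of E(a, b): walk from `start` until hitting
    a or b, collect the square of the stopping point, and average."""
    total = 0
    for _ in range(trials):
        position = start
        while position not in (a, b):
            position += random.choice((-1, 1))
        total += position ** 2
    return total / trials

a, b, N = 2, 15, 10
print(expected_payoff_mc(a, b, N))  # fluctuates around 140
print(N * (a + b) - a * b)          # exact formula: 140
```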

The issue is a little clearer now: {E(a,b)} is linear in both {a} and {b}, so its extrema are at the endpoints; that is, we can always do better by making {a} smaller or {b} larger. This corresponds exactly to the “we’re always better off taking another step” mentality from the one-step expectation calculation, except that now we see that the problem isn’t just an artifact of only looking one step ahead.

I think what’s going wrong here is that the expectation isn’t accounting for the opportunity cost involved. When we compute {E(f(X_{n+1})\mid X_n=k)}, the fact that state 0 has zero payoff doesn’t sufficiently capture the fact that reaching it also costs us the chance to keep playing. This can also be thought of as a local/global issue: the thing that’s ruining everything is the recurrence of the walk, which is a global property, but our expected value computations are all local.

4. The Coin Game

It makes sense to do a computation similar to the previous one, but it’s much easier here: if we start with one dollar and decide to stop after {k} flips, the expected payoff is

\displaystyle p^k\cdot 2^k+(1-p^k)\cdot 0=(2p)^k.
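Plugging in numbers (say {p=0.6}) makes the tension vivid: the expected payoff blows up even as the probability of ever collecting anything vanishes.

```python
p = 0.6
for k in (1, 5, 10, 20, 50):
    expected_payoff = (2 * p) ** k  # grows without bound as k grows...
    chance_of_payoff = p ** k       # ...while P(surviving k flips) -> 0
    print(f"k = {k:>2}: E = {expected_payoff:10.2f}, P(payoff > 0) = {chance_of_payoff:.2e}")
```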

This has the same qualitative behavior as the expected payoff in the random walk: it increases the longer we play, and we can explain it in the same way, by saying that the expected value doesn’t account for the fact that the game ends the moment we flip tails. But this behavior arises from a different kind of payoff function, so there’s an underlying structural difference between the two. I’d be curious to know whether there are other examples of games with payoff functions like these, or whether these are in some sense the only two ways this can happen.
