**1. Introduction**

I spend a lot of time playing pool during the school year. Unfortunately, I haven’t had access to pool tables for a while, so I haven’t been able to play. Since I can’t actually play pool, I decided instead to work on a problem involving it that occurred to me while playing a game earlier this year.

Some “team” sports are not really team sports, in the sense that what matters is not the total strength of your team but the strength of your best player. A good example of this is chess: when two people play together, the stronger player can just overrule the weaker one, and so a pair of chess players together is roughly as strong as the stronger of the two. (This isn't a perfect model, admittedly. For example, two grandmasters playing against a single grandmaster probably favors the two, but the claim is definitely true when the abilities of the three players are well separated.)

Pool is not like this at all, though. When you play with another person, you alternate taking shots, and so playing with someone weaker than you can be a substantial handicap (and similarly for playing with someone much stronger than you). Playing with another person, then, amounts to taking some mix of your two abilities. In this post, I try to quantify this mix, and ultimately answer the following question:

**Question.** Consider three players, with their skills quantified as probabilities $p$, $q$, and $r$. What kind of condition can be given on $r$ (in terms of $p$ and $q$) to guarantee that the third player has an advantage over the first two combined?

**2. Set-Up**

Before we can do anything else, we need to develop a reasonable statistical model of how pool works. The key assumption underlying the entire discussion that follows is that there is a fixed probability $p$ that a player makes any given shot. This fixed probability will correspond to the quantified skill from before. This probability should somehow be weighted by the frequency of shots: never making a kind of shot that rarely comes up should count less towards this probability than never making a kind of shot that comes up all the time. Put another way, this is meant to be an empirical probability: you could estimate it by playing some large number of games, and measuring the proportion of shots you attempted that you made.

Some comments on this assumption: it's pretty unrealistic. In particular, the game becomes much harder as it progresses, because there are fewer balls available to pocket. Furthermore, it's not always in your best interest to pocket a ball. These problems could perhaps be addressed by introducing some kind of decay factor, but the analysis is involved enough even with this assumption that I'm wary of the complications that would arise from relaxing it.

With this set-up in mind, let $X_i$ be the random variable that is the number of balls a player pockets on turn $i$ (we assume that each of the $X_i$ is independent of the others). The number of balls a player pockets in the first $n$ turns, then, is

$$S_n = \sum_{i=1}^{n} X_i,$$

and the player will win the game after $T$ turns, where

$$T = \min\{n : S_n \geq 8\},$$

i.e. once you pocket the seven object balls, and then the eight ball. To determine the winner of the match, compare the values of $T$ for the player and her opponent: whoever has the smaller value wins (with equality favoring the player who goes first).

This can also be imagined in the following way: the two players are playing on separate tables, each with eight balls, and counting the number of turns it takes them to clear the table. Whoever does it in fewer turns wins. Underlying this set-up is the assumption of independence between the shots of the two players, which is another unrealistic but unavoidable assumption.
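The separate-tables picture is easy to simulate directly. Here's a minimal sketch in Python (the function names are mine, and everything leans on the fixed-probability assumption above):

```python
import random

def simulate_turns(p, balls=8, rng=random):
    """Simulate one player clearing `balls` balls with per-shot make
    probability p; return the number of turns taken."""
    turns = 0
    while balls > 0:
        turns += 1
        # A turn lasts until the first miss -- or until the table is clear,
        # in which case no closing miss is needed.
        while balls > 0 and rng.random() < p:
            balls -= 1
    return turns

def wins_match(p_you, p_opp, rng=random):
    """True if the player with probability p_you, shooting first, wins
    the separate-tables race (ties go to the player who goes first)."""
    return simulate_turns(p_you, rng=rng) <= simulate_turns(p_opp, rng=rng)
```

Running many matches like this is a cheap way to sanity-check the exact distributions derived below.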

With this set up in mind, here’s the executive summary of how we’ll attempt to answer our question:

- (i) Understand the distributions of $S_n$ and $T$ for a single player
- (ii) Understand the distributions of $S_n$ and $T$ for a two-player team
- (iii) Use the results of (i) and (ii) to give an answer

**3. One Player**

Suppose we have a single player with a fixed probability $p$ of making a shot. Then, each $X_i$ is distributed geometrically, so

$$\mathbb{P}(X_i = k) = p^k (1 - p).$$

The next thing to do is look at the distribution of $S_n$. Suppose we had $S_n = k$. This would mean that we had attempted $n + k$ shots, of which $n$ had failed and $k$ had succeeded. Since every turn ends with a miss, the last of these shots must be a failure, leaving $\binom{n+k-1}{k}$ ways to arrange the rest. This argument gives

$$\mathbb{P}(S_n = k) = \binom{n+k-1}{k} p^k (1-p)^n.$$

Finally, we look at $T$: to finish on turn $n$, we must have pocketed some $k \leq 7$ balls in the first $n - 1$ turns, and then run out the remaining $8 - k$ balls in a row, so

$$\mathbb{P}(T = n) = \sum_{k=0}^{7} \mathbb{P}(S_{n-1} = k)\, p^{8-k}.$$

This isn't pretty, but it's tractable. Shown below are the distributions for several values of $p$, to give some sense of what we're looking at.

As a sanity check, these have the correct qualitative behavior: higher values of $p$ correspond to fewer turns needed to finish.
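For concreteness, the two distributions above can be computed directly. A short sketch (names are mine; `balls=8` hard-codes the eight-ball set-up):

```python
from math import comb

def p_S(n, k, p):
    """P(S_n = k): k makes and n misses over n turns; each turn ends in a
    miss, so the last of the n + k shots is pinned as a failure."""
    return comb(n + k - 1, k) * p**k * (1 - p)**n

def p_T(n, p, balls=8):
    """P(T = n): pocket k <= balls - 1 in the first n - 1 turns, then run
    out the remaining balls - k in a row on turn n."""
    if n == 1:
        return p**balls
    return sum(p_S(n - 1, k, p) * p**(balls - k) for k in range(balls))
```

Both functions return probabilities that sum to $1$ over their support, which is an easy check that the counting went through.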

**4. Two Players**

Next, we look at a two-player team. Let their fixed probabilities be $p$ and $q$, where the player with probability $p$ is the first player, and the player with probability $q$ is the second player. Then, on the first player's turns and the second player's turns, the numbers of balls they pocket are given by random variables $Y_i$ and $Z_i$, where

$$\mathbb{P}(Y_i = k) = p^k (1 - p), \qquad \mathbb{P}(Z_i = k) = q^k (1 - q).$$

(We use two different letters for convenience.)

For the $S_n$, we need to distinguish based on parity: we have

$$S_{2m-1} = \sum_{i=1}^{m} Y_i + \sum_{i=1}^{m-1} Z_i, \qquad S_{2m} = \sum_{i=1}^{m} Y_i + \sum_{i=1}^{m} Z_i.$$

Computing the distributions of these is a bit trickier this time, but we use a similar method.

Suppose we had $S_{2m-1} = k$. For this to happen, we must have made $2m - 1 + k$ shots, of which $2m - 1$ are failures and $k$ are successes. Of the failures, $m$ came from the first player, and $m - 1$ came from the second player. Furthermore, suppose that $j$ of the successes came from the first player, and $k - j$ came from the second player. We construct a valid sequence of successes and failures by considering the two players separately, and then intercalating the outcomes of their turns.

From the first player, we have $m + j$ shots. The last shot must be a failure, and there are $\binom{m+j-1}{j}$ ways to arrange the remaining ones. Similarly for the second player's shots, we find that there are $\binom{m+k-j-2}{k-j}$ arrangements. Then, for each pair of valid arrangements, we go along the first player's shots till we have a failure; then go along the second player's shots till we have a failure, and so on, to construct a valid sequence of shots that would give $S_{2m-1} = k$. The probability of any such sequence is $p^j (1-p)^m q^{k-j} (1-q)^{m-1}$.

Using this argument, and allowing $j$ to vary, gives

$$\mathbb{P}(S_{2m-1} = k) = \sum_{j=0}^{k} \binom{m+j-1}{j} \binom{m+k-j-2}{k-j} p^j (1-p)^m q^{k-j} (1-q)^{m-1}.$$

Using an analogous argument for the even case, we find

$$\mathbb{P}(S_{2m} = k) = \sum_{j=0}^{k} \binom{m+j-1}{j} \binom{m+k-j-1}{k-j} p^j (1-p)^m q^{k-j} (1-q)^{m}.$$

As another sanity check, looking at $S_2$ does in fact give the convolution of $Y_1$ and $Z_2$, so everything's good.

**Remark.** When we plug $q = p$ into our formulas, we recover the results of the single-player section. This gives the identity

$$\sum_{j=0}^{k} \binom{m+j-1}{j} \binom{m+k-j-1}{k-j} = \binom{2m+k-1}{k},$$

an instance of the Vandermonde identity.

Final note before moving on: these formulas fail in the case $n = 1$, because our counting argument breaks down: the second player hasn't taken a turn yet, so there is no final failure of hers to pin down. In that case though, we just have $S_1 = Y_1$, whose distribution we already know.

Next, we turn to $T$. Distinguishing cases based on parity and repeating the arguments of the previous section gives

$$\mathbb{P}(T = 2m-1) = \sum_{k=0}^{7} \mathbb{P}(S_{2m-2} = k)\, p^{8-k}, \qquad \mathbb{P}(T = 2m) = \sum_{k=0}^{7} \mathbb{P}(S_{2m-1} = k)\, q^{8-k},$$

and we could plug in the distribution of $S_n$ to get a single expression, but that would take too much space.

At any rate, these expressions are no problem for a computer, and some sample distributions are shown below.

These graphs also show the correct qualitative behavior: a team with one player much stronger than the other should be much more likely to finish on turns where the stronger player is up, and taking $q = p$ recovers the distribution from the previous section.
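The parity-split formulas translate to code almost verbatim. A sketch (names are my own), where player 1, with probability `p`, shoots on the odd turns:

```python
from math import comb

def p_S_team(n, k, p, q):
    """P(S_n = k) for the team: player 1 (prob p) takes the odd turns,
    player 2 (prob q) the even turns."""
    m1, m2 = (n + 1) // 2, n // 2        # turns taken by each player so far
    total = 0.0
    for j in range(k + 1):               # j makes by player 1, k - j by player 2
        a = comb(m1 + j - 1, j) * p**j * (1 - p)**m1 if m1 else float(j == 0)
        b = (comb(m2 + k - j - 1, k - j) * q**(k - j) * (1 - q)**m2
             if m2 else float(k - j == 0))
        total += a * b
    return total

def p_T_team(n, p, q, balls=8):
    """P(T = n) for the team; the shooter on turn n runs out the rest."""
    r = p if n % 2 else q                # odd turns belong to player 1
    return sum(p_S_team(n - 1, k, p, q) * r**(balls - k) for k in range(balls))
```

The `m2 == 0` branch covers the edge case where the second player hasn't shot yet, and setting `q = p` collapses everything back to the single-player formulas.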

**5. Two on One Matches**

We can now study a two-on-one game of pool, and see when the single player has the advantage. Consider three players with skills $p$, $q$, and $r$, where the first two are playing together against the third. Let $T_{p \oplus q}$ be the number of turns taken for the first two to clear their balls (we use $\oplus$ to make it clear that we are not just adding the probabilities in the subscript) and let $T_r$ be the number of turns taken for the third to clear her balls. For the game to be in the single player's favor, we need $\mathbb{E}[T_r] \leq \mathbb{E}[T_{p \oplus q}]$.

Unfortunately, those expectations have no closed form, so this is where we start fudging and approximating.

To get a feel for things, we plot the value of $\mathbb{E}[T_p]$ for different values of $p$.

This looks vaguely exponential, so let's plot $\log \mathbb{E}[T_p]$ against $p$.

The logarithmic plot looks suitably linear, and running OLS gives a fit of the form

$$\log \mathbb{E}[T_p] \approx \alpha p + \beta,$$

with a negative slope $\alpha$ (better players finish faster) and an $R^2$ close to $1$. Of course, based on the graph, we know that this approximation is much worse for small $p$, so if you're terrible at pool, you should take this with a larger grain of salt (and maybe not play against two people at once).
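To make the fitting procedure concrete, here's a sketch of how one might recompute $\mathbb{E}[T_p]$ and run the one-variable fit in plain Python. The grid, the truncation point $N$, and all names are my own choices, so the fitted numbers won't match any particular run exactly:

```python
from math import comb, log

def p_T(n, p, balls=8):
    """P(T = n) for a single player with make probability p (Section 3)."""
    if n == 1:
        return p**balls
    return sum(comb(n + k - 2, k) * p**k * (1 - p)**(n - 1) * p**(balls - k)
               for k in range(balls))

def E_T(p, N=3000):
    """E[T], truncating the infinite sum at N turns."""
    return sum(n * p_T(n, p) for n in range(1, N))

# Fit log E[T_p] ~ alpha * p + beta by ordinary least squares over a grid.
ps = [i / 100 for i in range(20, 96, 5)]
ys = [log(E_T(p)) for p in ps]
mp, my = sum(ps) / len(ps), sum(ys) / len(ys)
alpha = (sum((x - mp) * (y - my) for x, y in zip(ps, ys))
         / sum((x - mp) ** 2 for x in ps))
beta = my - alpha * mp
```

The grid deliberately starts at $p = 0.2$, since the fit is known to degrade for small $p$.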

It makes sense to try the same thing for $\mathbb{E}[T_{p \oplus q}]$. Of course, things are three-dimensional now, so patterns are a bit harder to spot. For that reason, we go straight to plotting $\log \mathbb{E}[T_{p \oplus q}]$ for varying $(p, q)$:

This has the same linear (or planar, I suppose) look, and running OLS again gives a fit of the form

$$\log \mathbb{E}[T_{p \oplus q}] \approx a p + b q + c,$$

with a somewhat smaller $R^2$. Not quite as good, but still a reasonable approximation (with the same caveat of failure at the edges).

Now, if we replace the condition

$$\mathbb{E}[T_r] \leq \mathbb{E}[T_{p \oplus q}]$$

with the same condition on the fitted approximations, we have

$$\alpha r + \beta \leq a p + b q + c, \quad \text{i.e.} \quad r \geq \frac{a}{\alpha}\, p + \frac{b}{\alpha}\, q + \frac{c - \beta}{\alpha},$$

where the inequality flips in the last step because $\alpha < 0$.

So this computation gives the following answer to our question:

*Answer:* Take a weighted average of skills, with weight $a/\alpha$ on the first player, $b/\alpha$ on the second player, and a correction factor of $(c - \beta)/\alpha$ on a perfect player (i.e. one who never misses a shot). If you make a higher percentage of your shots than this average, you should expect to beat them.

To wrap up, let’s try an example.

**Example.** Suppose you are facing two players whose shooting percentages you know. Plugging those percentages (and the fitted coefficients) into the weighted average above gives a threshold, and you should expect to beat the pair whenever you make a higher percentage of your shots than that threshold.
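If you'd rather not trust the linear fits at the edges, the break-even skill can also be found directly, by bisecting on the exact (truncated) expectations. A sketch, with my own names and arbitrary illustrative inputs:

```python
from math import comb

BALLS = 8

def p_S_team(n, k, p, q):
    """P(S_n = k); player 1 (prob p) shoots on odd turns."""
    m1, m2 = (n + 1) // 2, n // 2
    total = 0.0
    for j in range(k + 1):
        a = comb(m1 + j - 1, j) * p**j * (1 - p)**m1 if m1 else float(j == 0)
        b = (comb(m2 + k - j - 1, k - j) * q**(k - j) * (1 - q)**m2
             if m2 else float(k - j == 0))
        total += a * b
    return total

def E_T(p, q, N=1500):
    """Truncated E[T] for the p-q team; set q = p for a single player."""
    total = 0.0
    for n in range(1, N):
        r = p if n % 2 else q
        total += n * sum(p_S_team(n - 1, k, p, q) * r**(BALLS - k)
                         for k in range(BALLS))
    return total

def break_even(p, q, lo=0.05, hi=0.99, iters=20):
    """Bisect for the single-player skill r whose E[T] matches the team's."""
    target = E_T(p, q)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if E_T(mid, mid) > target:   # single player still too slow
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, `break_even(0.6, 0.4)` would give the threshold against a hypothetical pair who make 60% and 40% of their shots. A useful sanity check is that `break_even(p, p)` comes out to `p`, since a team of two equal players is indistinguishable from one of them playing alone.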