This is the first in a series of posts about percolation theory, which I’m studying this summer. The main text I’m using is Bollobás’s book on the subject, and I plan to follow it pretty closely. There should be around six posts, culminating in the proof of the Harris-Kesten theorem, a major result about percolation on $\mathbb{Z}^2$. This material corresponds to the first three chapters of Bollobás, with some omissions and rearrangement.
1. The Big Picture
The idea is to consider some graph $\Lambda$ (prototypically $\mathbb{Z}^2$, but theoretically any graph), and choose a random subgraph of this graph by selecting each edge to be part of the subgraph independently with probability $p$. Alternatively, do the same thing with vertices instead of edges, and take the subgraph to be the one induced by the chosen vertices.
- The edges are bonds
- The vertices are sites
- The chosen edges/vertices are open
- The chosen graph is the open subgraph
- The connected components of the open subgraph are open clusters
In this language, the two models described above are bond percolation and site percolation.
The physical motivation for this model is a liquid percolating through a solid lattice, where the open bonds/sites represent the bonds/sites that the liquid can pass through. The main question suggested by this interpretation is whether the liquid will be able to pass through the lattice completely, which corresponds to the existence of an infinite open cluster.
The Harris-Kesten theorem mentioned above answers this question for bond percolation on $\mathbb{Z}^2$. It shows that for $p > 1/2$, there is an infinite open cluster with probability 1, and for $p \le 1/2$ there is an infinite open cluster with probability 0.
This post takes the first step toward proving this result by defining two critical probabilities that are closely related to this question, and then computing them for some simple graphs. Before we can even take that step though, we must construct a probability space in which to work. For a graph $\Lambda$, we take:
- $\Omega = \{0,1\}^{E(\Lambda)}$ to be the set of all arrangements of open edges, where $0$ represents a closed edge and $1$ represents an open edge;
- $\Sigma$ to be the $\sigma$-algebra on $\Omega$ generated by the cylinders $\{\omega \in \Omega : \omega(e_1) = \epsilon_1, \dots, \omega(e_n) = \epsilon_n\}$;
- $\mathbb{P}_p$ to be the probability measure on $\Sigma$ induced by declaring each edge open with probability $p$, independently of all the others.
In everything that follows, all probabilities are defined according to this measure.
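To make this concrete, here is a minimal sketch in Python (the grid size, seed, and function names are my own choices, not anything from the text) that samples a configuration for bond percolation on a finite piece of $\mathbb{Z}^2$ and extracts the open cluster of a given site:

```python
import random

def sample_open_edges(n, p, seed=0):
    """Bond percolation on an n-by-n grid: each nearest-neighbor edge
    is open independently with probability p."""
    rng = random.Random(seed)
    open_edges = set()
    for x in range(n):
        for y in range(n):
            if x + 1 < n and rng.random() < p:
                open_edges.add(((x, y), (x + 1, y)))
            if y + 1 < n and rng.random() < p:
                open_edges.add(((x, y), (x, y + 1)))
    return open_edges

def open_cluster(v, open_edges):
    """Find C_v, the open cluster containing v, by searching open edges."""
    seen, stack = {v}, [v]
    while stack:
        x, y = stack.pop()
        for w in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if ((((x, y), w) in open_edges or (w, (x, y)) in open_edges)
                    and w not in seen):
                seen.add(w)
                stack.append(w)
    return seen

config = sample_open_edges(10, 0.5)
print(len(open_cluster((0, 0), config)))  # size of the open cluster at the origin
```

Of course, a finite window can never contain an infinite cluster; this is only meant to make the definitions concrete.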
2. Critical Probabilities
For a site $v$, let $C_v$ be the open cluster containing $v$, and let $\theta_v(p)$ be the probability that $C_v$ is infinite.
Proposition 1. There is a critical probability $p_H \in [0,1]$ such that $\theta_v(p) = 0$ if $p < p_H$ and $\theta_v(p) > 0$ if $p > p_H$, for all sites $v$.
(The notation $p_H$ is in honor of Hammersley, who first proposed this model.)
Proof: Take any two sites $u$ and $v$ such that $d(u, v) = 1$ (this is the distance induced by $\Lambda$). Then $\theta_u(p) \ge p\,\theta_v(p)$, since if there is an infinite open cluster containing $v$ and an open path from $u$ to $v$, there will be an infinite open cluster containing $u$ as well. Thus, if $\theta_v(p) > 0$ for one site, then $\theta_v(p) > 0$ for all sites, and if $\theta_v(p) = 0$ for one site, then $\theta_v(p) = 0$ for all sites. If we show that $\theta_v(p)$ is an increasing function of $p$, the existence of $p_H$ follows.
To see the increasing behavior (which is intuitively clear), consider assigning each edge $e$ an independent random variable $X_e$, uniform on $[0,1]$, and then taking $e$ to be open if $X_e \le p$. If $p_1 \le p_2$, the subgraph obtained with probability $p_1$ will always be contained in the subgraph obtained with probability $p_2$, so $\theta_v$ must be increasing.
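This coupling is easy to realize computationally. The sketch below (function names are mine) draws one uniform variable per edge and reads off the open subgraphs at two densities from the same draw; containment then holds on every sample, not just on average:

```python
import random

def coupled_subgraphs(edges, ps, seed=0):
    """Couple percolation at several densities: each edge e gets a single
    uniform variable X_e, and e is open at density p exactly when X_e <= p."""
    rng = random.Random(seed)
    x = {e: rng.random() for e in edges}
    return {p: {e for e in edges if x[e] <= p} for p in ps}

edges = [(i, i + 1) for i in range(1000)]
subs = coupled_subgraphs(edges, [0.3, 0.6])
print(subs[0.3] <= subs[0.6])  # True: the p=0.3 subgraph sits inside the p=0.6 one
```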
We now connect this to our original question about the existence of an infinite open cluster.
Corollary 2. Let $I$ be the event that there is an infinite open cluster. Then $\mathbb{P}_p(I) = 0$ if $p < p_H$ and $\mathbb{P}_p(I) = 1$ if $p > p_H$.
Proof: The event $I$ is a tail event (changing the states of finitely many edges cannot create or destroy an infinite cluster), so by the Kolmogorov 0-1 law, its probability can only be 0 or 1. If $p < p_H$, then $\mathbb{P}_p(I) \le \sum_v \theta_v(p) = 0$, so $\mathbb{P}_p(I) = 0$. If $p > p_H$, then $\mathbb{P}_p(I) \ge \theta_v(p) > 0$ for any site $v$, so $\mathbb{P}_p(I) = 1$.
So to understand when there is an infinite open cluster, it suffices to understand $p_H$.
When $p < p_H$, any $C_v$ is almost surely finite, but its expected size might not be. Set $\chi_v(p) = \mathbb{E}_p|C_v|$. By the same arguments as before, $\chi_v(p)$ increases with $p$, and it is either finite for all sites or for no sites. Thus we define

$$p_T = \sup\{p : \chi_v(p) < \infty\},$$
with the notation $p_T$ in honor of Temperley this time.
We have been deliberately ambiguous until now about whether we are working with bond percolation or site percolation in this section, and in fact we mean both. That is, there are actually four critical probabilities: $p_H^b$, $p_T^b$, $p_H^s$, and $p_T^s$. And since each of these depends on the choice of graph, we actually need to write $p_H^b(\Lambda)$, and so on. This notation is a bit cumbersome though, so we refrain from using it except where absolutely necessary.
One obvious relationship between $p_T$ and $p_H$ is that $p_T \le p_H$: if $\chi_v(p)$ is finite, then $C_v$ is almost surely finite, so $\theta_v(p) = 0$. Some more difficult relations are known, but we’ll look at those in a later post. For now, let’s do an example.
3. Computing the Critical Probabilities
Determining $p_H$ and $p_T$ in general is quite hard (after all, the Harris-Kesten theorem basically boils down to doing this for $\mathbb{Z}^2$, and is already highly non-trivial), but there are some simple graphs for which it can be done. We do an example with bond percolation.
Suppose $\Lambda$ is the tree in which every vertex has degree $k+1$, except for the root, which has degree $k$ (so every vertex has exactly $k$ children). Call the root $r$ and let $\Lambda_n$ be the subgraph obtained by truncating this graph $n$ levels down from the root. We write $\theta_n(p)$ for the probability that, in bond percolation with probability $p$, there is an open path of length $n$ from $r$ to a leaf of $\Lambda_n$.
For there to be an open path of length $n$ from $r$ to a leaf of $\Lambda_n$, there must be an open edge between $r$ and some child, and then an open path of length $n-1$ from that child to a leaf below it. Since the $k$ subtrees are independent, this gives

$$\theta_n(p) = 1 - \bigl(1 - p\,\theta_{n-1}(p)\bigr)^k.$$
Consider the function $f(x) = 1 - (1 - px)^k$. The sequence $(\theta_n)$ results from iterating this function starting with $\theta_0 = 1$, so we need to study its fixed points. We have $f(0) = 0$, and $f$ is concave on $[0,1]$, so it has at most one fixed point in $(0,1]$. If $f'(0) > 1$, then $f(\epsilon) > \epsilon$ for some small $\epsilon > 0$, but $f(1) = 1 - (1-p)^k < 1$, so there is some fixed point $x^* \in (0,1)$. Otherwise, the unique fixed point in $[0,1]$ is at $0$. Concretely, $f'(0) = pk$, so this condition reduces to $p > 1/k$.
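We can watch this dichotomy numerically. The sketch below just iterates $f$ as described; for $k = 2$ and $p = 0.6$, the positive fixed point solves $x = 1 - (1 - 0.6x)^2$, which gives $x^* = 5/9$:

```python
def theta_n(p, k, n):
    """Iterate f(x) = 1 - (1 - p*x)**k starting from theta_0 = 1."""
    x = 1.0
    for _ in range(n):
        x = 1 - (1 - p * x) ** k
    return x

k = 2  # every site has k children, so the critical probability should be 1/k = 0.5
print(theta_n(0.4, k, 1000))  # p < 1/k: iterates tend to the fixed point 0
print(theta_n(0.6, k, 1000))  # p > 1/k: iterates tend to the positive fixed point 5/9
```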
In the case where $p > 1/k$, we see that $\theta_n(p) \ge x^*$ for all $n$ (since $\theta_0 = 1 \ge x^*$ and $f$ is increasing), so $\lim_n \theta_n(p) \ge x^* > 0$. But this limit is exactly $\theta_r(p)$: the events “there is an open path from $r$ to a leaf of $\Lambda_n$” decrease to the event that $C_r$ is infinite, since an infinite open cluster containing $r$ must reach the bottom of every $\Lambda_n$, and conversely a cluster reaching every depth is infinite. So $\theta_r(p) \ge x^* > 0$, and $p_H \le 1/k$.
On the other hand, if $p \le 1/k$, the $\theta_n(p)$ converge to the unique fixed point at $0$, meaning that $\theta_r(p) = 0$. It follows that $p_H = 1/k$ for this tree.
We can also do $p_T$: a site at depth $n$ is part of $C_r$ with probability $p^n$, and there are $k^n$ sites at depth $n$, so

$$\chi_r(p) = \mathbb{E}_p|C_r| = \sum_{n=0}^{\infty} k^n p^n,$$

which is finite if and only if $p < 1/k$. Thus $p_T = 1/k$ as well.
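A quick numerical look at these partial sums (the function name is mine) shows the two regimes:

```python
def chi_partial(p, k, n):
    """Partial sum of E|C_r|: there are k**m sites at depth m, each in C_r
    with probability p**m."""
    return sum((k * p) ** m for m in range(n + 1))

k = 2
print(chi_partial(0.4, k, 200))  # kp = 0.8 < 1: approaches 1/(1 - kp) = 5
print(chi_partial(0.6, k, 50))   # kp = 1.2 > 1: partial sums diverge
```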
A few remarks about this calculation: the fact that $p_H = p_T$ here is not unusual. This is true for a very general class of graphs.
Also, it turns out that this tree has, in some sense, the lowest critical probability possible. To see this, take $\Lambda$ to be a graph with maximum degree $\Delta$. If a site $u$ is in $C_v$, then there is an open path from $v$ to $u$, so we can bound $\chi_v(p)$ by the expected number of open paths in $\Lambda$ that start at $v$. Using our information about the maximum degree, there are at most $\Delta(\Delta-1)^{n-1}$ paths of length $n$ starting at $v$, and each is open with probability $p^n$, so this gives

$$\chi_v(p) \le 1 + \sum_{n=1}^{\infty} \Delta(\Delta - 1)^{n-1} p^n,$$
which converges exactly when $p < 1/(\Delta - 1)$. This means that $p_T \ge 1/(\Delta - 1)$, and since $p_T \le p_H$, this bound holds for $p_H$ as well. In our tree, $\Delta = k + 1$, so $1/(\Delta - 1) = 1/k$, and this bound is attained.
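As a sanity check on the path-counting bound, we can enumerate the self-avoiding paths of length $n$ starting at the origin of $\mathbb{Z}^2$ (where $\Delta = 4$) and compare with $\Delta(\Delta - 1)^{n-1}$; the enumeration code is my own:

```python
def count_saws(v, n, neighbors):
    """Count self-avoiding walks of n steps starting at v."""
    def extend(path):
        if len(path) == n + 1:
            return 1
        return sum(extend(path + [w]) for w in neighbors(path[-1])
                   if w not in path)
    return extend([v])

def grid_neighbors(u):
    x, y = u
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# True path counts on Z^2 (Delta = 4) versus the bound Delta*(Delta-1)^(n-1):
for n in range(1, 6):
    print(n, count_saws((0, 0), n, grid_neighbors), 4 * 3 ** (n - 1))
```

The bound is exact up to length 3 and then becomes strict, since on $\mathbb{Z}^2$ a walk of length 4 can return to a previously visited site.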
So the tree is special in more ways than one. It turns out that the more “tree-like” a graph is, the more easily $p_H$ and $p_T$ can be computed. Unfortunately, $\mathbb{Z}^2$ is nothing like a tree, so a different approach is required to tackle it.