The next chapter of Bollobás is a collection of probabilistic tools used throughout the book; this post presents two of those results, both of which will be key to our proof of the Harris-Kesten theorem. Both results are set in the framework of weighted hypercubes, which we now introduce.

Take a set $X$ and its power set $\mathcal{P}(X)$. This power set can be identified with the set of functions $\{0, 1\}^X$ by identifying a subset with its characteristic function. In the case where $X$ is finite (and this is the case we care about), we can think of $X$ as $[n] = \{1, \dots, n\}$ and of $\mathcal{P}(X)$ as the cube $Q^n = \{0, 1\}^n$. This cube is the set of binary sequences of length $n$, where a subset $A \subseteq [n]$ is associated to the sequence that has a 1 in position $i$ if $i \in A$ and a 0 otherwise.

More geometrically, we can think of $Q^n$ as a graph whose vertices are the points of $\{0, 1\}^n$. Two sets $A, B$ are connected in this graph if $|A \,\triangle\, B| = 1$ (i.e. $A$ and $B$ differ at exactly one entry in their sequences). Put yet another way, if $x, y \in Q^n$ are two vertices, they are connected if $x - y = \pm e_i$ for some $i$, where $e_i$ is a standard basis vector. We can make $Q^n$ a directed graph by saying the edge is oriented from $x$ to $y$ if $y - x = e_i$. We write this as $x \to y$.
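To make the oriented-edge picture concrete, here is a small Python sketch (the function name `up_neighbors` is my own) listing the vertices $y$ with $x \to y$, i.e. the points obtained by flipping a single 0 of $x$ to a 1:

```python
def up_neighbors(x):
    """All y with an oriented edge x -> y: flip one 0 entry of x to a 1."""
    return [x[:i] + (1,) + x[i + 1:] for i in range(len(x)) if x[i] == 0]

# In Q^3, the vertex (0, 1, 0) has two upward neighbors.
print(up_neighbors((0, 1, 0)))  # [(1, 1, 0), (0, 1, 1)]
```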

Now, shifting gears, the points of $Q^n$ can be associated with the outcomes of a sequence of $n$ coin tosses. If $\bar{p} = (p_1, \dots, p_n)$ is the sequence of probabilities of success for these coin tosses, then $\bar{p}$ induces a natural probability measure on $Q^n$:

$$\mathbb{P}_{\bar{p}}(\{x\}) = \prod_{i=1}^{n} p_i^{x_i} (1 - p_i)^{1 - x_i}.$$

For example, if $p_i = 1/2$ for all $i$ we obtain the uniform measure, and if $p_i = p$ for all $i$, we obtain the measure $\mathbb{P}_p$. We write $Q^n_{\bar{p}}$ for the cube with this measure, and it is equivalent to the standard percolation models.
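As a sanity check on the definition, here is a short Python sketch (names like `weight` are my own) that computes the weight of each point of the cube and verifies that the weights sum to 1 and that each coordinate is a success with its assigned probability:

```python
from itertools import product

def weight(x, p):
    """Weight of the point x under the product measure induced by p = (p_1, ..., p_n)."""
    w = 1.0
    for xi, pi in zip(x, p):
        w *= pi if xi == 1 else 1 - pi
    return w

p = (0.3, 0.5, 0.8)  # arbitrary example probabilities
cube = list(product((0, 1), repeat=len(p)))

total = sum(weight(x, p) for x in cube)                   # should be 1
marginal = sum(weight(x, p) for x in cube if x[0] == 1)   # should be p_1 = 0.3
print(total, marginal)
```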

We write $Q^n_{\bar{p}}$ for the probability space and distinguish two kinds of events. A subset $U \subseteq Q^n$ is an up-set if $x \in U$ and $x \leq y$ imply $y \in U$, where $\leq$ is the coordinatewise order. Similarly, $D \subseteq Q^n$ is a down-set if $x \in D$ and $y \leq x$ imply $y \in D$. (Keep in mind that $U$ is a subset of $Q^n$, which is identified with $\mathcal{P}(X)$, so the elements of $U$ are subsets of the original set $X$.) The corresponding events are referred to as increasing and decreasing respectively.

With that setup out of the way, we can state our first lemma.

**1. Harris’s Lemma**

This lemma is the intuitive fact that increasing events are positively correlated. Let’s do a bit more setup.

For a set $A \subseteq Q^n$ and $j \in \{0, 1\}$ we can define

$$A_j = \{x \in Q^{n-1} : (x, j) \in A\}.$$

If $A$ is an up-set, $A_0 \subseteq A_1$, and if $A$ is a down-set, $A_1 \subseteq A_0$ (because $(x, 0)$ and $(x, 1)$ differ at only one position, so their difference is always a basis vector, so it’s just a matter of sign).

If we write $\mathbb{P}$ for the probability measure on both $Q^n_{\bar{p}}$ and $Q^{n-1}_{(p_1, \dots, p_{n-1})}$, we have

$$\mathbb{P}(A) = (1 - p_n)\,\mathbb{P}(A_0) + p_n\,\mathbb{P}(A_1).$$

Finally, the result:

**Lemma 1.** *Let $A, B$ be subsets of $Q^n_{\bar{p}}$. If they are both up-sets or both down-sets, then $\mathbb{P}(A \cap B) \geq \mathbb{P}(A)\,\mathbb{P}(B)$. If $A$ is an up-set and $B$ is a down-set, then $\mathbb{P}(A \cap B) \leq \mathbb{P}(A)\,\mathbb{P}(B)$.*

(This implies the correlation statement, since the first condition rearranges to $\mathbb{P}(A \mid B) \geq \mathbb{P}(A)$, and similarly for the second.)
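The lemma can be checked by brute force on a small cube. The sketch below (all names mine) enumerates every up-set of $Q^3$ — there are 20, the third Dedekind number — and verifies the first inequality for every pair under an arbitrary weight vector:

```python
from itertools import combinations, product

def weight(x, p):
    w = 1.0
    for xi, pi in zip(x, p):
        w *= pi if xi == 1 else 1 - pi
    return w

def prob(A, p):
    return sum(weight(x, p) for x in A)

n = 3
cube = list(product((0, 1), repeat=n))

def is_up_set(A):
    # flipping any 0 of a member to a 1 must stay inside A
    return all(x[:i] + (1,) + x[i + 1:] in A
               for x in A for i in range(n) if x[i] == 0)

up_sets = [set(A) for r in range(len(cube) + 1)
           for A in combinations(cube, r) if is_up_set(set(A))]

p = (0.3, 0.6, 0.5)  # arbitrary weights
ok = all(prob(A & B, p) >= prob(A, p) * prob(B, p) - 1e-12
         for A in up_sets for B in up_sets)
print(len(up_sets), ok)  # 20 True
```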

*Proof:* We induct on $n$. The case $n = 1$ is trivial, so suppose the result holds for $n - 1$.

If $A$ and $B$ are both up-sets or both down-sets, then either $A_0 \subseteq A_1$ and $B_0 \subseteq B_1$, or $A_1 \subseteq A_0$ and $B_1 \subseteq B_0$. This means that

$$\big(\mathbb{P}(A_1) - \mathbb{P}(A_0)\big)\big(\mathbb{P}(B_1) - \mathbb{P}(B_0)\big) \geq 0.$$

Our argument will rely on the equivalent fact that

$$\mathbb{P}(A_0)\,\mathbb{P}(B_1) + \mathbb{P}(A_1)\,\mathbb{P}(B_0) \leq \mathbb{P}(A_0)\,\mathbb{P}(B_0) + \mathbb{P}(A_1)\,\mathbb{P}(B_1).$$

Now we write

$$\mathbb{P}(A \cap B) = (1 - p_n)\,\mathbb{P}(A_0 \cap B_0) + p_n\,\mathbb{P}(A_1 \cap B_1),$$

which, by the induction hypothesis (note that $A_j$ and $B_j$ are again both up-sets or both down-sets of $Q^{n-1}$), is

$$\geq (1 - p_n)\,\mathbb{P}(A_0)\,\mathbb{P}(B_0) + p_n\,\mathbb{P}(A_1)\,\mathbb{P}(B_1).$$

We cleverly rewrite this as

$$(1 - p_n)\,\mathbb{P}(A_0)\,\mathbb{P}(B_0) + p_n\,\mathbb{P}(A_1)\,\mathbb{P}(B_1) = \big((1 - p_n)\,\mathbb{P}(A_0) + p_n\,\mathbb{P}(A_1)\big)\big((1 - p_n)\,\mathbb{P}(B_0) + p_n\,\mathbb{P}(B_1)\big) \\ \qquad + p_n(1 - p_n)\big(\mathbb{P}(A_0)\,\mathbb{P}(B_0) + \mathbb{P}(A_1)\,\mathbb{P}(B_1) - \mathbb{P}(A_0)\,\mathbb{P}(B_1) - \mathbb{P}(A_1)\,\mathbb{P}(B_0)\big)$$

(yes, they’re really equal!) and use our earlier inequality to see that this is

$$\geq \big((1 - p_n)\,\mathbb{P}(A_0) + p_n\,\mathbb{P}(A_1)\big)\big((1 - p_n)\,\mathbb{P}(B_0) + p_n\,\mathbb{P}(B_1)\big).$$

But this last line is equal to

$$\mathbb{P}(A)\,\mathbb{P}(B),$$

so we conclude.
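The rewrite above is a pure algebraic identity in the four probabilities and $p_n$. A quick numerical check (writing $a_j$, $b_j$, $p$, $q = 1 - p$ for the quantities involved; the script is my own):

```python
import random

random.seed(0)
for _ in range(1000):
    # a0, a1, b0, b1 stand for P(A_0), P(A_1), P(B_0), P(B_1)
    a0, a1, b0, b1 = (random.random() for _ in range(4))
    p = random.random()   # stands for p_n
    q = 1 - p
    lhs = q * a0 * b0 + p * a1 * b1
    rhs = (q * a0 + p * a1) * (q * b0 + p * b1) \
        + p * q * (a0 * b0 + a1 * b1 - a0 * b1 - a1 * b0)
    assert abs(lhs - rhs) < 1e-12
print("identity holds")
```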

From this first piece, the other inequality follows easily: if $A$ is an up-set and $B$ is a down-set, then $B^c$ is an up-set, so

$$\mathbb{P}(A \cap B^c) \geq \mathbb{P}(A)\,\mathbb{P}(B^c)$$

from the previous result. This then is

$$\mathbb{P}(A) - \mathbb{P}(A \cap B) \geq \mathbb{P}(A)\big(1 - \mathbb{P}(B)\big),$$

which rearranges to $\mathbb{P}(A \cap B) \leq \mathbb{P}(A)\,\mathbb{P}(B)$, as desired.

**2. The Russo-Margulis Lemma**

Take an event $A$ in the weighted cube $Q^n_{\bar{p}}$. We want to understand how $\mathbb{P}_{\bar{p}}(A)$ varies with $\bar{p}$.

For a point $x \in Q^n$, we say that entry $i$, $1 \leq i \leq n$, is *pivotal* for $A$ if exactly one of $x$ and $x^{(i)}$ is in $A$, where $x^{(i)}$ is $x$ with the entry at $i$ flipped. Then, we define the *influence* of entry $i$ as

$$I_i(A) = \mathbb{P}_{\bar{p}}\big(\{x \in Q^n : i \text{ is pivotal for } A \text{ at } x\}\big).$$

**Lemma 2.** *Let $A$ be an increasing event in $Q^n_{\bar{p}}$, where $\bar{p} = (p_1, \dots, p_n)$. Then

$$\frac{\partial\,\mathbb{P}_{\bar{p}}(A)}{\partial p_i} = I_i(A).$$*

*Proof:* Assume without loss of generality that $i = n$. For $B \subseteq Q^{n-1}$, write $\mathbb{P}'(B)$ for the probability of $B$ in the measure induced by $(p_1, \dots, p_{n-1})$ on $Q^{n-1}$. In particular, if $B \subseteq Q^{n-1}$, then $\mathbb{P}(B \times \{0, 1\}) = \mathbb{P}'(B)$.

Also, for $x \in Q^{n-1}$, let $x^+$ and $x^-$ be the points formed by extending $x$ with a 1 or 0 in the last slot, respectively. We have

$$\mathbb{P}(\{x^+\}) = p_n\,\mathbb{P}'(\{x\}) \quad \text{and} \quad \mathbb{P}(\{x^-\}) = (1 - p_n)\,\mathbb{P}'(\{x\})$$

by construction.

Now take an up-set $A \subseteq Q^n$, and let $S$ be the set of $x \in Q^{n-1}$ such that $x^+$ and $x^-$ are both in $A$, and $T$ be the same set but such that only $x^+$ is. (Since $A$ is an up-set, $x^- \in A$ implies $x^+ \in A$, so these two cases are exhaustive.) Then,

$$\mathbb{P}(A) = \sum_{x \in S} \big(\mathbb{P}(\{x^+\}) + \mathbb{P}(\{x^-\})\big) + \sum_{x \in T} \mathbb{P}(\{x^+\}),$$

which, by our earlier observation, becomes

$$\mathbb{P}(A) = \mathbb{P}'(S) + p_n\,\mathbb{P}'(T).$$

It’s clear now that

$$\frac{\partial\,\mathbb{P}(A)}{\partial p_n} = \mathbb{P}'(T),$$

since $\mathbb{P}'(S)$ and $\mathbb{P}'(T)$ do not depend on $p_n$; but since $T$ is exactly the set of $x$ for which $n$ is pivotal, this probability is $I_n(A)$.

That proof is a notational mess, but the idea is simple: think of an element of $Q^n$ as an element of $Q^{n-1}$, plus a bit of information that tells you whether the last entry is pivotal, and then use this decomposition to directly take the derivative.
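To see the lemma in action, the sketch below (all names mine) compares the partial derivative, computed as a finite difference, with the influence, for a small increasing event in $Q^3$. Since $\mathbb{P}_{\bar{p}}(A)$ is affine in each $p_i$, the difference quotient is exact up to rounding:

```python
from itertools import product

def weight(x, p):
    w = 1.0
    for xi, pi in zip(x, p):
        w *= pi if xi == 1 else 1 - pi
    return w

n = 3
cube = list(product((0, 1), repeat=n))
A = {x for x in cube if sum(x) >= 2}  # "at least two successes": an up-set

def prob(p):
    return sum(weight(x, p) for x in A)

def influence(p, i):
    # P(entry i is pivotal): x and x with entry i flipped disagree about A
    return sum(weight(x, p) for x in cube
               if (x in A) != (x[:i] + (1 - x[i],) + x[i + 1:] in A))

p = [0.3, 0.6, 0.5]
h = 1e-6
for i in range(n):
    p_hi = p.copy(); p_hi[i] += h
    p_lo = p.copy(); p_lo[i] -= h
    derivative = (prob(p_hi) - prob(p_lo)) / (2 * h)
    assert abs(derivative - influence(p, i)) < 1e-6
print("Russo-Margulis verified on this example")
```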

Of course, this result is only useful if you actually understand things about the $I_i(A)$. I won’t prove it, but we have this bound: if $A$ is an increasing event in $Q^n_p$ and $I_i(A) \leq \delta$ for each $i$, then

$$\frac{d\,\mathbb{P}_p(A)}{dp} \geq c \log(1/\delta)\,\mathbb{P}_p(A)\big(1 - \mathbb{P}_p(A)\big)$$

for an absolute constant $c > 0$.

This bound will be the second ingredient necessary in our proof of Harris-Kesten, which we will finally begin in the next post.