
When did Majority become the stablest? (Part 2)

May 10, 2012

The first question we’d like to answer is this: which monotone, balanced, transitive Boolean function is least sensitive to bit-flips? We know that Majority is the worst possible, with NS(Maj) = \Theta(1/\sqrt{n}).

The new champion turns out to be the Tribes function, first studied by Ben-Or and Linial. Tribes_{s,w} is a read-once DNF: the OR of s disjoint terms, each of which is the AND of w variables:

Tribes_{s,w}(x^1_1,...,x^1_w,...,x^s_1,...,x^s_w) = \vee_{i=1}^s\left( \wedge_{j=1}^w x^i_j \right)

By choosing s = \Theta(2^w) carefully, we get a (near-)balanced, transitive, monotone function with NS(Tribes) = \Theta(2^{-w}) = \Theta(\log n/n). So it beats Majority quite handily. But is this the best possible? Or can we hope to achieve sensitivity O(1/n)? That would be optimal, since it is easy to argue an \Omega(1/n) lower bound.
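To get a feel for these quantities, here is a quick Monte Carlo sketch of the bit-flip sensitivity of Tribes. The parameters w = 5 and s = round(2^w \ln 2) \approx 22 (which makes the function near-balanced) are illustrative choices of mine, not from the post:

```python
import math
import random

def tribes(bits, s, w):
    # OR of s disjoint ANDs ("tribes") of w bits each
    return any(all(bits[i * w:(i + 1) * w]) for i in range(s))

def bitflip_sensitivity(f, n, trials=20000):
    # Estimate NS(f) = Pr[f(x) != f(x with one uniformly random bit flipped)]
    changed = 0
    for _ in range(trials):
        x = [random.random() < 0.5 for _ in range(n)]
        i = random.randrange(n)
        y = list(x)
        y[i] = not y[i]
        if f(x) != f(y):
            changed += 1
    return changed / trials

w = 5
s = round(2 ** w * math.log(2))  # ~22 tribes keeps Pr[f = 1] close to 1/2
n = s * w                        # n = 110 here
est = bitflip_sensitivity(lambda b: tribes(b, s, w), n)
```

A flipped bit matters only when its w - 1 tribe-mates are all 1 and no other tribe is all-ones, so the estimate should land near 2^{-(w-1)} \cdot (1 - 2^{-w})^{s-1} \approx 2^{-w}, matching the \Theta(\log n/n) claim.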

It turns out that O(\log n/n) is in fact the best possible. This follows from the celebrated result of Kahn, Kalai and Linial (KKL), which gives a matching lower bound on the sensitivity of any balanced Boolean function in which every variable has small influence. The theorem is usually stated in terms of average sensitivity, AS(f) = n\cdot NS(f). In our setting, the monotone and transitive conditions imply that no variable has influence more than O(1/\sqrt{n}), hence their result applies.
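For small parameters, the identity AS(f) = n\cdot NS(f) = sum of the influences can be checked by exact enumeration. A sketch over Tribes_{4,2} (n = 8; the parameter choice is mine, purely for illustration):

```python
from itertools import product

def tribes(bits, s, w):
    # OR of s disjoint ANDs of w bits each
    return any(all(bits[i * w:(i + 1) * w]) for i in range(s))

def influence(f, n, i):
    # Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)],
    # computed exactly by enumerating all 2^n inputs
    count = 0
    for x in product([0, 1], repeat=n):
        y = list(x)
        y[i] ^= 1
        if f(list(x)) != f(y):
            count += 1
    return count / 2 ** n

s, w = 4, 2
n = s * w
f = lambda b: tribes(b, s, w)
infs = [influence(f, n, i) for i in range(n)]
avg_sens = sum(infs)  # AS(f) = sum of influences = n * NS(f)
```

By transitivity all n influences agree: each is 27/128 here (the partner bit must be 1, and none of the other three tribes all-ones), giving AS = 8 \cdot 27/128 = 27/16.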

At first, Tribes might not seem as natural a function as Majority. So why does it turn out to be the least sensitive to bit-flips? For KKL to be tight, the Fourier expansion of NS(f) tells us to look for a function whose spectrum is concentrated on the first O(\log n) levels. This is equivalent to saying that f is well-approximated by a polynomial of degree O(\log n). We know DNFs have such concentration. But why not something even simpler, like a decision tree of (exact) degree O(\log n)? It turns out that such functions invariably have some “special” co-ordinates, and hence cannot be transitive. This is a relatively recent result of O’Donnell, Saks, Schramm and Servedio, which says that every degree-d Boolean function has a variable with influence \Omega(1/d). So we are back to CNFs/DNFs, and Tribes is about as simple as DNFs get.

Another natural example of a function with low sensitivity (which I learnt from Yuval Peres): consider n points on a circle, and assign a random bit to each. Define g(x) to be 1 if the longest run of 1s has length at least c\log n. For a suitable choice of c, this function is near-balanced. It is monotone, and furthermore NS(g) = \Theta(\log n/n).
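A sketch of this longest-run function (the default constant c = 1 and the base of the logarithm are my choices for illustration, not fixed by the post):

```python
import math

def longest_run_on_circle(bits):
    # Length of the longest circular run of 1s; traversing the list
    # twice catches runs that wrap around the seam
    if all(bits):
        return len(bits)
    best = cur = 0
    for b in bits + bits:
        cur = cur + 1 if b else 0
        best = max(best, cur)
    return best

def g(bits, c=1.0):
    # 1 iff the longest run of 1s has length at least c * log n
    return longest_run_on_circle(bits) >= c * math.log2(len(bits))
```

Monotonicity is immediate: flipping a 0 to a 1 can only merge or lengthen runs, so g can only move from 0 to 1.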

So given that Tribes beats Majority so handily for bit-flips, how does it fare under \epsilon-noise for constant \epsilon? Not very well: its sensitivity approaches 0.5. Indeed, if you are looking for a function which is maximally sensitive, Tribes is not a bad choice.
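The reversal is easy to see empirically. A quick simulation (parameters are mine; \epsilon = 0.3 as a representative constant noise rate, Tribes_{22,5} on 110 bits versus Majority on 111 bits):

```python
import random

def tribes(bits, s, w):
    return any(all(bits[i * w:(i + 1) * w]) for i in range(s))

def majority(bits):
    return 2 * sum(bits) > len(bits)

def eps_noise_sensitivity(f, n, eps, trials=20000):
    # Estimate Pr[f(x) != f(y)], where y flips each bit of x
    # independently with probability eps
    changed = 0
    for _ in range(trials):
        x = [random.random() < 0.5 for _ in range(n)]
        y = [b ^ (random.random() < eps) for b in x]
        if f(x) != f(y):
            changed += 1
    return changed / trials

ns_tribes = eps_noise_sensitivity(lambda b: tribes(b, 22, 5), 110, 0.3)
ns_maj = eps_noise_sensitivity(majority, 111, 0.3)  # odd n avoids ties
```

At constant \epsilon the roles flip: Tribes comes out well above Majority (whose \epsilon-noise sensitivity tends to \arccos(1-2\epsilon)/\pi \approx 0.37 here), and as n grows Tribes heads toward the trivial bound of 1/2.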

So life is yet again complicated: no single function seems optimal for all settings of \epsilon-noise. Indeed, this is why results on hardness amplification within NP which rely on composition with monotone functions need to switch from one function to another at various points. In contrast, if we don’t care about staying within NP, we can just use the XOR function.

But at the same time, perhaps there is a unified explanation for the entire spectrum?

  1. Is there an intuitive explanation for why the least sensitive function should look like a Hamming ball when \epsilon is large, and look “Tribes-like” (Tribish? Tribal?) when \epsilon is small? In some sense, we got into this mess by disallowing co-ordinate functions, which are essentially sub-cubes. Are these the functions that “look” most like sub-cubes (from some strange angle)?
  2. Is there an inverse result which interpolates smoothly between KKL and Majority Is Stablest? Given that these are two of the most powerful theorems in all of Boolean function analysis, it would have to be quite a theorem.


