In a previous blog post, we saw how ideas from differential privacy can be used to prove a lower bound on the hereditary discrepancy for the Erdős discrepancy problem. This lower bound happened to be nearly tight.

It turns out that this tightness is no accident. The connection between hereditary discrepancy and privacy is in fact much deeper, and can be used to get an approximation algorithm for the hereditary discrepancy of any set system. In this post, I will try to summarize this connection.

First, let us recall the definition of *hereditary discrepancy*. For an $m \times n$ matrix $A$, the *discrepancy* of $A$ is defined as
$$\mathrm{disc}(A) = \min_{x \in \{-1,+1\}^n} \|Ax\|_\infty.$$

It may be worth mentioning a combinatorial interpretation: when $A$ is the incidence matrix of a set system, $x$ is a $2$-coloring of the ground set, and $\mathrm{disc}(A)$ measures the largest imbalance between the two colors within any set of the system.

*Hereditary discrepancy* is a more robust variant of this definition: the hereditary discrepancy of $A$ is
$$\mathrm{herdisc}(A) = \max_{S \subseteq [n]} \mathrm{disc}(A_S),$$
where $A_S$ denotes the sub-matrix of $A$ consisting of the columns indexed by $S$.
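To make the two definitions concrete, here is a brute-force sketch in Python. The helper names `disc` and `herdisc` are mine, and both run in exponential time, so this is only viable for tiny matrices.

```python
# Brute-force discrepancy and hereditary discrepancy (tiny matrices only).
from itertools import combinations, product

def disc(rows):
    """disc(A): minimum over x in {-1,+1}^n of the max |row . x|."""
    n = len(rows[0])
    best = float("inf")
    for x in product((-1, 1), repeat=n):
        err = max(abs(sum(a * xi for a, xi in zip(row, x))) for row in rows)
        best = min(best, err)
    return best

def herdisc(rows):
    """herdisc(A): maximum of disc over all non-empty column subsets."""
    n = len(rows[0])
    best = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            sub = [[row[j] for j in S] for row in rows]
            best = max(best, disc(sub))
    return best

# The incidence matrix of the "prefix" set system {1}, {1,2}, {1,2,3};
# it is totally unimodular, so its hereditary discrepancy is 1.
A = [[1, 0, 0],
     [1, 1, 0],
     [1, 1, 1]]
print(disc(A), herdisc(A))  # -> 1 1
```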

To start with, note that one can prove a good lower bound on hereditary discrepancy with a relatively short certificate: it is enough to exhibit a single sub-matrix together with a proof that its discrepancy is large.
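One classical certificate of this kind (a standard fact, stated here for concreteness) is the determinant lower bound of Lovász, Spencer and Vesztergombi: $\mathrm{herdisc}(A) \ge \frac{1}{2} \max_k \max_B |\det B|^{1/k}$, where $B$ ranges over $k \times k$ sub-matrices of $A$. A brute-force sketch, with helper names of my choosing and exponential running time:

```python
# Determinant lower bound on hereditary discrepancy (tiny matrices only).
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def detlb(rows):
    """max over k x k sub-matrices B of |det B|^(1/k) / 2."""
    m, n = len(rows), len(rows[0])
    best = 0.0
    for k in range(1, min(m, n) + 1):
        for R in combinations(range(m), k):
            for C in combinations(range(n), k):
                B = [[rows[i][j] for j in C] for i in R]
                best = max(best, abs(det(B)) ** (1.0 / k) / 2.0)
    return best

# [[1,1],[1,-1]] has |det| = 2, so detlb = sqrt(2)/2, certifying that
# the hereditary discrepancy is at least that value.
print(detlb([[1, 1], [1, -1]]))  # -> 0.7071...
```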

Proving a good upper bound on hereditary discrepancy seems harder: how would I convince you that every sub-matrix of $A$, out of exponentially many, admits a low-discrepancy coloring?

**The Partial Coloring technique**

A very useful technique in discrepancy theory is that of *partial coloring*. Suppose that we can find a vector $x \in \{-1,0,+1\}^n$ with at least $n/2$ non-zero coordinates such that $\|Ax\|_\infty \le D$; call such an $x$ a partial coloring with discrepancy $D$.

The partial coloring trick basically says that this is enough: set aside the coordinates colored by $x$ and recurse on the sub-matrix of uncolored columns. Each round colors at least half of the remaining coordinates, so after $O(\log n)$ rounds every coordinate is colored, and if every sub-matrix admits a partial coloring with discrepancy $D$, the combined full coloring has discrepancy at most $O(D \log n)$.
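The bookkeeping in this recursion can be sketched as follows. Here `partial_color` stands in for a hypothetical partial-coloring oracle (the real one comes from a Beck/Spencer-style argument); the toy oracle below only illustrates the control flow, not a genuine low-discrepancy coloring.

```python
def full_coloring(A, columns, partial_color):
    """Combine partial colorings into a full +/-1 coloring.

    `partial_color(A, cols)` is an assumed oracle returning a dict
    {col: +/-1} that colors at least half of `cols` with discrepancy
    at most D.  Each round halves the uncolored set, so there are
    O(log n) rounds and the total discrepancy is O(D log n).
    """
    coloring = {}
    remaining = list(columns)
    while remaining:
        partial = partial_color(A, remaining)
        coloring.update(partial)
        remaining = [c for c in remaining if c not in partial]
    return coloring

# Toy oracle: color the first half of the remaining columns alternately.
def toy_oracle(A, cols):
    half = cols[:max(1, len(cols) // 2)]
    return {c: (-1) ** i for i, c in enumerate(half)}

print(full_coloring(None, list(range(8)), toy_oracle))
```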

**Discrepancy and Privacy**

Imagine we have a database $d \in \{0,1\}^n$, in which the bit $d_i$ encodes some private information about individual $i$, and an analyst asks a collection of counting queries described by a matrix $A$: the true answers are the coordinates of $Ad$. We would like to answer these queries without compromising the privacy of any individual.

As you might be thinking, this would be impossible to guarantee if we respond with the exact answer $Ad$: any meaningful notion of privacy forces us to add noise to the answers, and the question is how little noise we can get away with.

Let us say that a serious privacy violation occurs if an adversary who knows some $1 - o(1)$ fraction of the bits of $d$ can use the answers to reconstruct almost all of the remaining bits.

Let $\mathrm{opt}(A)$ denote the smallest expected error, in the worst case over databases, achievable by any mechanism for answering the queries $A$ that avoids a serious privacy violation.

**Approximating Hereditary Discrepancy**

I will next try to argue how to get from this a polylogarithmic approximation algorithm for $\mathrm{herdisc}(A)$. Recall the two certificates we were after: a bad sub-matrix certifying a lower bound, and something certifying an upper bound.

The second certificate seems less obvious: how would I convince you that for *every* sub-matrix a good coloring exists? This is where privacy helps. Suppose I exhibit a mechanism with the following two properties:

- The mechanism that outputs a noisy answer vector $Ad + z$ satisfies a reasonable privacy definition, one that rules out a serious privacy violation.
- The expectation of the $\ell_\infty$ norm of the noise $z$ is at most some value $E$.

Since a serious privacy violation is ruled out, the expected error must be at least as large, up to logarithmic factors, as the discrepancy of the best partial coloring of any sub-matrix: if some sub-matrix admitted no good partial coloring, an adversary could run a reconstruction attack on the mechanism's accurate answers, producing exactly such a violation. Turning this around, the mechanism certifies that every sub-matrix has a partial coloring with discrepancy $O(E)$, and hence that $\mathrm{herdisc}(A) = \tilde{O}(E)$.

In our case, the reasonable privacy definition will be $(\varepsilon,\delta)$-differential privacy, which is known to rule out serious privacy violations of this kind.
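For reference (a standard definition, spelled out here as an addition): a randomized mechanism $M$ is $(\varepsilon,\delta)$-differentially private if for every pair of databases $d, d'$ differing in a single entry, and every set $S$ of possible outputs,

```latex
\Pr[M(d) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(d') \in S] + \delta.
```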

How do we construct such a mechanism? That is a slightly longer story and I will only sketch the main ideas. First, we move from the $\ell_\infty$ error measure to the $\ell_2$ one, which is more convenient to work with and costs us at most a polylogarithmic factor.

How do we find an $(\varepsilon,\delta)$-differentially private mechanism with small error? The simplest candidate adds independent Gaussian noise to each coordinate of $Ad$: when the points $\pm a_i$ (the columns of $A$ and their negations) lie in a ball of radius $r$, Gaussian noise with standard deviation proportional to $r$ in each coordinate suffices for privacy.
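A minimal sketch of this Gaussian-noise candidate, using only the standard library. The function name and toy example are mine; the noise scale follows the standard Gaussian-mechanism calibration, with the $\ell_2$ sensitivity of $d \mapsto Ad$ being the largest column norm of $A$ (changing one bit of $d$ shifts $Ad$ by one column).

```python
import math
import random

def gaussian_mechanism(A, d, epsilon, delta):
    """Answer the counting queries Ad with i.i.d. Gaussian noise.

    Changing one bit of d changes Ad by one column of A, so the l2
    sensitivity is the largest column norm; sigma below is the
    standard calibration for (epsilon, delta)-differential privacy.
    """
    sensitivity = max(
        math.sqrt(sum(row[j] ** 2 for row in A)) for j in range(len(A[0]))
    )
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    true_answers = [sum(a * di for a, di in zip(row, d)) for row in A]
    return [y + random.gauss(0.0, sigma) for y in true_answers]

# One private release of three counting queries over a 4-bit database.
# (On toy counts this small, the noise dwarfs the true answers 1, 1, 3.)
A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [1, 1, 1, 1]]
print(gaussian_mechanism(A, [1, 0, 1, 1], epsilon=1.0, delta=1e-6))
```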

There is of course no reason for this mechanism to be anywhere near optimal, and in general it is not. However, it turns out that if this ball of radius $r$ is also the *minimum volume ellipsoid* (the John–Loewner ellipsoid) that contains all the points $\pm a_i$, then the mechanism is close to optimal: the *restricted invertibility theorem* of Bourgain and Tzafriri guarantees that in this case, one can find a large collection of columns of $A$ that are nearly orthogonal and of length close to $r$. The resulting sub-matrix has a large determinant, which lower-bounds both the hereditary discrepancy and the error of any private mechanism, nearly matching the error of our Gaussian mechanism.

Of course in general, we will not be lucky enough to get a ball as the John–Loewner ellipsoid. Suppose we get an ellipsoid with axes of very different lengths. We then split the coordinates into the directions of the long axes and the rest: on the long directions we recurse, while on the remaining directions the ellipsoid is nearly a ball and the argument above applies. The recursion terminates in $O(\log n)$ levels, and overall we lose only a polylogarithmic factor.

Thus we now know that hereditary discrepancy is approximable to within a polylogarithmic factor, and is not approximable better than a factor of $1.5$. Closing, or even narrowing, this gap is an interesting open problem.

Many thanks to Alex Nikolov and Li Zhang for their contributions to the content and the writing of this post.

Edit 2/13: Edited to fix the missing link to the paper.