Exact Algorithms from Approximation Algorithms? (part 1)

One great “soft” challenge in (T)CS, I find, is how to go about finding useful algorithms for problems that we believe (or have even proven!) to be hard in general. Let me explain with an all-too-common example:

Practitioner: I need to solve problem X.

Theoretician: Nice question, let me think… Hm, it seems hard. I can even prove that, under certain assumptions, we cannot do anything non-trivial.

Practitioner: Slick… But I still need to solve it. Can you do anything?

Theoretician: If I think about it some more, I can design an algorithm that gives a solution within a factor of 2 of the optimal one!

Practitioner: Interesting, but that’s too much error nonetheless… How about 1.05?

Theoretician: Nope, that’s essentially as hard as finding the exact solution.

Practitioner: Thanks. But I still need to deliver the code in a month. I guess I will go and hack something up myself; anyway, my instances are probably not degenerately hard.

What can we give to the practitioner in this case?

Over the years, the community has come up with a number of approaches to this challenge. In fact, last fall, Tim Roughgarden and Luca Trevisan organized a great workshop on “Beyond Worst-Case Analysis”. Besides designing approximation algorithms (mentioned above), another common approach is to design algorithms for certain subsets of instances of problem X (planar graphs, doubling metrics, or, abusing the notion a bit, average/semi-random instances).

Here, I would like to discuss a scenario where an approximation algorithm leads to an *exact* algorithm (with some guarantees). In particular, I will talk about the case of the Locality Sensitive Hashing (LSH) algorithm for the near(est) neighbor search (NNS) problem in high-dimensional spaces.

While I hope to discuss the NNS problem in detail in future posts, let me quickly recap some definitions. The nearest neighbor search problem is the following data structure question. Given a set S of n points in, say, d-dimensional Hamming space, construct a data structure that answers the following queries: given a query point q, output the nearest point p\in S to q. It will be more convenient, though, to talk about the “threshold version” of the problem, the near neighbor problem: here we are also given a threshold R at preprocessing time, and an R-near neighbor query asks to report any point p\in S within distance R from q.
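To make the setup concrete, here is a minimal brute-force sketch of the R-near neighbor problem in Python (the names are illustrative, not from any particular library). Its O(nd) per-query linear scan is exactly the “trivial” solution mentioned in the next paragraph.

```python
# A brute-force baseline for the R-near neighbor problem in Hamming space.
# All names here are illustrative.

def hamming(p, q):
    """Hamming distance between two equal-length 0/1 tuples."""
    return sum(a != b for a, b in zip(p, q))

def r_near_neighbor(S, q, R):
    """Report any point of S within distance R of q, or None if there is none."""
    for p in S:
        if hamming(p, q) <= R:
            return p
    return None

# Example with n = 3 points in dimension d = 4:
S = [(0, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 1)]
print(r_near_neighbor(S, (1, 0, 0, 0), R=1))  # -> (0, 0, 0, 0)
```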

This problem is a classical example of the curse of dimensionality: while it has nice, efficient solutions for small d (say, for d=2, one solution is via Voronoi diagrams + planar point location), these solutions degrade rapidly with increasing dimension. In fact, it is believed that for high dimensions, d\gg \log n, nothing “non-trivial” is possible: either the query time is linear in n (corresponding to a linear scan over the points), or the space is exponential in d (corresponding to the generalization of the above solution).

What is the approximate version of the R-near neighbor problem? Let c denote the approximation factor (think c=2). The relaxed desideratum is: if there exists a point p at distance at most R from q, the data structure has to report some point p' within distance cR. Otherwise, there is no guarantee. (Intuitively, the “approximately near” points, at distance between R and cR, may be counted as either “near” or “not near”, whichever is convenient for the data structure.)
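One way to internalize this guarantee is as a correctness check. The following hypothetical checker (reusing hamming from the sketch above) accepts an answer exactly when it satisfies the relaxed promise:

```python
# A hypothetical checker for the c-approximate R-near neighbor guarantee;
# hamming() is the distance function from the earlier sketch.

def answer_is_valid(S, q, R, c, answer):
    if not any(hamming(p, q) <= R for p in S):
        return True  # no point within R: the data structure promises nothing
    # Some point lies within R, so the answer must be within the relaxed radius cR.
    return answer is not None and hamming(answer, q) <= c * R
```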

The LSH framework, introduced by Piotr Indyk and Rajeev Motwani in 1998, yields an algorithm for the c-approximate R-near neighbor problem with O(n^{1/c}\cdot d\log n) query time and O(n^{1+1/c}+nd) space. For example, for approximation c=2, this gives ~\sqrt{n} query time with ~n^{1.5} space (we think of the dimension d as being much smaller than n). There are indications that this may be near-optimal, at least in some settings.
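As a heavily simplified illustration, here is a bit-sampling LSH sketch for Hamming space, in the spirit of the Indyk-Motwani construction. The class name is mine, and the parameters k and L are left as inputs; the analysis would set roughly k = \Theta(\log n) and L = n^{1/c} to obtain the bounds above.

```python
import random
from collections import defaultdict

# A simplified bit-sampling LSH sketch for Hamming space; names and
# parameter choices are illustrative, not a tuned implementation.

class HammingLSH:
    def __init__(self, points, d, k, L):
        # Each of the L tables hashes points by k randomly sampled coordinates;
        # points that are close in Hamming distance agree on the sampled
        # coordinates (and hence collide) with higher probability.
        self.samples = [random.sample(range(d), k) for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]
        for p in points:
            for coords, table in zip(self.samples, self.tables):
                table[tuple(p[i] for i in coords)].append(p)

    def query(self, q, R, c):
        # Inspect only q's buckets; return the first point within the relaxed
        # radius cR, which is all the c-approximate guarantee requires.
        for coords, table in zip(self.samples, self.tables):
            for p in table[tuple(q[i] for i in coords)]:
                if sum(a != b for a, b in zip(p, q)) <= c * R:
                    return p
        return None
```

With the right setting of k and L, a query inspects about n^{1/c} candidate points in expectation, which is where the query bound above comes from (up to the d\log n factor).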

Now we are in the situation from the end of the above dialogue. What’s next?

It turns out that the LSH algorithm may be used to solve the exact near neighbor search problem, with somewhat relaxed guarantees, as I will explain in the next post.
