Updates from ICM 2014

This week I’m at the International Congress of Mathematicians (ICM) 2014 in Seoul, and I thought I would post a quick update from a TCS perspective. See Tim Gowers’s blog for a much more comprehensive account. There are several other TCS folks here, and I hope some of them will post their impressions and recommendations as well.
For TCS the big news was of course that Subhash Khot won the Nevanlinna Prize for his work on the Unique Games Conjecture. As Omer mentioned, this is a research topic that I am extremely interested in, and so I am very happy about this well-deserved choice. Subhash also gave a fantastic talk, which I highly recommend. Like many others, I was also excited to witness the first time a female mathematician, Maryam Mirzakhani, was awarded the Fields Medal, and I hope we won’t have to wait too long for the first female Nevanlinna medalist.

All the plenary talks were videotaped, and I believe that sooner or later they will be available on this website, so I thought I would mention a few talks that TCS folks might want to look at. Every (plenary or section) talk also had an accompanying survey paper, which again I hope will be available online in the not-too-distant future. (Some people, including many of the TCS folks, have already posted their papers on the arXiv/ECCC, and I hope we will see some more blog posts about them.)

Two talks that I particularly recommend are Emmanuel Candes’s talk on the “Mathematics of sparsity” and Manjul Bhargava’s talk on “Rational points on elliptic and hyperelliptic curves”.

 

Candes’s talk was an amazing exposition of the power and importance of algorithms. He showed how efficient algorithms can actually make the difference in treating kids with cancer! Specifically, one of the challenges in taking MRI images is that traditionally they take about two minutes to acquire, during which the patient cannot take a single breath. You can imagine that this is somewhere between dangerous and nearly impossible for young children. The crucial difference is made by using a sublinear-sample algorithm (i.e., compressed sensing), which allows one to recover the images from far fewer samples, reducing the time to about 15 seconds. Another approach to dealing with this issue is to let the patient breathe but try to correct for the movement algorithmically. Here what they use [as far as I recall] is a decomposition into a low-rank plus a sparse matrix, which they achieve via a semidefinite program related to the famous Goemans-Williamson max-cut algorithm. Interestingly, the latter question is also related to the well-known lower bound question of matrix rigidity, and the parameters they achieve roughly correspond to the best known values for this question; somewhat (extremely) speculatively, one can wonder whether an improved rigidity lower bound would end up being useful for this application.
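To make the compressed-sensing idea a bit more concrete, here is a minimal sketch of my own (not the actual MRI pipeline Candes described): recovering a sparse vector from a small number of random linear measurements via L1 minimization, using the cvxpy convex-programming package. The dimensions, the Gaussian measurement matrix, and the sparsity level are arbitrary choices made purely for illustration.

```python
# Toy compressed-sensing sketch (illustration only, not the MRI pipeline):
# recover a k-sparse vector from m << n random linear measurements
# by minimizing the L1 norm subject to the measurement constraints.
import numpy as np
import cvxpy as cp

n, m, k = 200, 60, 5                    # ambient dimension, #measurements, sparsity
rng = np.random.default_rng(0)

x_true = np.zeros(n)                    # a k-sparse "signal"
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                  # the m linear measurements

x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
problem.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```

With these (quite generous) parameters the L1 program typically recovers the sparse vector exactly, which is the phenomenon that lets one get away with far fewer measurements than the ambient dimension, and hence with much shorter scan times.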

Hearing Candes’s talk I couldn’t help thinking that some of those advances could perhaps have been made sooner if the TCS community had closer ties to the applied math community, and had realized the relevance of concepts such as property testing and tools such as the Goemans-Williamson algorithm to these kinds of questions. Such missed opportunities are unfortunate for our community and (given the applications) also for society at large, which is another reason you should always try to go to talks in other areas.

 

Bhargava’s talk just blew me away. I can’t remember the last time I went to a talk in an area so far from my own and felt that I learned so much. I can’t recommend it enough, and of course, given the use of elliptic curves in cryptography, it’s not completely unrelated to TCS. I will not attempt any technical description of the talk [just watch it, or read the accompanying paper], but let me mention a TCS-related theme, which actually seems to appear in the works of some of the other Fields medalists as well.

One example of the “unreasonable effectiveness” of algorithms is that they often capture our notion of “mathematical understanding”. For example, a priori, the fact that the clique problem is NP-hard does not mean that we should not be able to figure out the clique number of a particular graph such as a Cayley graph, but it turns out that this is actually a real obstacle to doing so. Similarly, in other areas of mathematics, whether it is figuring out the solution of a differential equation or the number of points on an elliptic curve, a priori the non-existence of an algorithm should not preclude us from answering the question, but it often does. (I am deliberately conflating here the non-existence of an algorithm and the non-existence of an efficient algorithm; indeed, for any finite problem there is some trivial brute-force algorithm, but its existence does not help at all in achieving mathematical understanding.)

While we have a difficult time determining the clique number of any specific graph, we do have many tools to determine the clique number of a random graph. Such problems are still by no means trivial: e.g., rigorously determining the precise satisfiability threshold of a random 3SAT formula is still open. Bhargava tackled the problem of trying to determine the number of rational points on a random elliptic curve. In particular, he proved that with some nonzero constant probability this number is infinite, and with some nonzero constant probability the number is zero [at least I think so; perhaps it’s only guaranteed to be finite]. This can be viewed as progress towards the Birch and Swinnerton-Dyer conjecture, which is one of the Clay Millennium Prize problems.
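As a small aside of my own (not from the talks) on the kind of tool available in the random setting: the first-moment method already pins down the clique number of the random graph G(n, 1/2) up to a factor of two,

\mathbb{E}\big[\#\{\text{cliques of size } k\}\big] \;=\; \binom{n}{k}\, 2^{-\binom{k}{2}} \;\le\; \big(n\, 2^{-(k-1)/2}\big)^k \;\longrightarrow\; 0 \qquad \text{for } k \ge (2+\epsilon)\log_2 n,

so with high probability the clique number is at most (2+\epsilon)\log_2 n, and a second-moment argument gives a matching lower bound of roughly 2\log_2 n.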

One interpretation of the “natural proofs” barrier is that to make progress on lower bounds, we need to develop more “non-constructive” proof techniques, ideally going beyond those that apply only to “random” objects. Perhaps some of these advanced probabilistic tools can still be used in this effort. Also, there have been some “non-constructive” results showing that deterministic objects have a certain “pseudo-random” property even in a setting where we don’t have algorithms to certify that a random object has that property. In particular, Bourgain (see this exposition by Rao) showed that a graph somewhat similar to the Paley graph has clique size at most n^{0.499}, even though we still don’t have an algorithm for the planted clique problem that can certify that a random graph has clique number \ll \sqrt{n}.

 

Two other plenary talks that I liked at ICM were János Kollár’s talk on “The structure of algebraic varieties” and James Arthur’s talk on “L-functions and automorphic representations”. I can’t say I understood much of the latter, but I am now slightly less terrified of the Langlands program (though I still wouldn’t like to meet it in a dark alley..). While I also couldn’t follow much of the former, it still gave me an overview of the effort to classify algebraic varieties, which I could imagine having possible TCS applications.

 

10 thoughts on “Updates from ICM 2014”

  1. yes the fusion of hardcore TCS/math continues unabated and this batch of Fields medals generally really attests to that. agreed about the need (or “moral”) to study random objects more wrt “lessons of natural proofs”. for example, random DAGs in particular seem not very well understood. really like the area of graph complexity as a promising direction. also suggest that maybe empirical studies, not so common for ❓ reasons, really have something to offer in the study of random objects. more on fields/ nevanlinna prizes

  2. Hey Boaz: It seems unfair to suggest (not sure if that is what you meant to do) that Candes et al. are using TCS ideas. They have an independent sequence of discoveries (starting with Donoho’s original work on sparse recovery) that had really nothing to do with property testing. Also, calling their algorithms “related to GW” doesn’t seem like a fair attribution. They have built a nice edifice of convex programming ideas.

    I agree of course with the overall point that we in TCS should pay more attention to these problems.

  3. Hi Sanjeev,

    I didn’t mean to imply that Candes et al. use more TCS ideas beyond what they attribute. (For example, Candes himself said [as far as I remember from the talk] that his algorithm for the phase recovery problem is the Goemans-Williamson algorithm.)

    But on the general point I agree with you that there were several ideas that were independently discovered in these different communities. This was exactly my point since it demonstrates that both communities could have benefited from more interaction.

  4. Boaz,

    When you say,
    “Hearing Candes’s talk I couldn’t help thinking that some of those advances could perhaps have been made sooner if the TCS community had closer ties to the applied math community, and had realized the relevance of concepts such as property testing and tools such as the Goemans-Williamson algorithm to these kinds of questions. Such missed opportunities are unfortunate for our community and (given the applications) also for society at large, which is another reason you should always try to go to talks in other areas.”

    How about, besides going to talks in other areas (which I’m not disagreeing with), perhaps the TCS community as a whole should start caring a bit more about applications in general? I’m still of the opinion that FOCS/STOC theory is becoming more and more isolated, in part because application-oriented work is consistently undervalued.

    Michael

    1. Michael, I do agree that it is important that our community values work that is more applied (or more generally interfaces with other communities) and that we can do better. (I don’t know if it was better in the “good old days” or not, and I think this tendency toward isolation is an issue that arises in any scientific community, not just in theoretical CS, though that doesn’t mean we shouldn’t fight it.)

      For example, while our canonical metrics such as running time and approximation factor are very useful proxies for being a “good algorithm” (and in particular led to such advances as the Goemans-Williamson algorithm), it is important to remember that they do not correspond exactly to the notions of efficiency and quality that people would actually use. I do think people sometimes forget this basic fact.

      One great example of a line of work that both extends the notion of a “good algorithm” and interfaces with other disciplines is the study of differential privacy; a freely available monograph (or is it a duograph?) on this topic by Dwork and Roth has just come out: http://www.nowpublishers.com/articles/foundations-and-trends-in-theoretical-computer-science/TCS-042/book-details

      Interestingly, sometimes differential privacy can be useful for notions that a priori have nothing to do with privacy, such as statistical generalizability, e.g. see the upcoming FOCS 2014 paper of Hardt and Ullman ( http://arxiv.org/abs/1408.1655 , I should note that both are former (grad and undergrad) students of mine, though that shouldn’t be held against them 🙂 )
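
      As a toy illustration of the basic mechanism (my own sketch, not taken from the Dwork-Roth book or the Hardt-Ullman paper): a counting query has sensitivity 1, so adding Laplace(1/ε) noise to the count gives ε-differential privacy. The data, predicate, and parameter choices below are arbitrary.

```python
# Toy Laplace mechanism: release a counting query with epsilon-DP.
# Illustrative sketch only; data and parameters are arbitrary.
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Noisy count of records satisfying `predicate`.

    A counting query has sensitivity 1 (changing one record changes the
    count by at most 1), so adding Laplace(1/epsilon) noise suffices for
    epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for record in data if predicate(record))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a noisy count of entries above a threshold.
data = [3, 7, 1, 9, 4, 8]
print(laplace_count(data, lambda x: x > 5, epsilon=0.5))
```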

  5. Can you explain the connection between cliques and Bourgain’s result? Rao’s survey does not explicitly say anything about cliques.

    1. OK, I think I got it. There is this remark in the introduction of Rao’s exposition on K × K minors of a Boolean matrix.
      Do you know if there has been any improvement on Bourgain’s 2005 result?

      1. If you want an extractor (i.e., all minors are almost balanced, rather than merely not being the complete or empty graph), then no. For the latter weaker property, known as being a disperser or a Ramsey graph, my paper with Rao, Shaltiel, and Wigderson improves on it, but at the expense of a very complicated construction. I don’t know of other improvements.
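
        [Roughly speaking, in the bipartite-graph/matrix language of Rao’s exposition: an N × N Boolean matrix is a two-source disperser (equivalently, a bipartite Ramsey graph) for sets of size K if every K × K minor contains both a 0 and a 1, while the stronger two-source extractor property requires every K × K minor to have a fraction of 1’s equal to 1/2 \pm \epsilon; Bourgain achieves the latter for K \approx N^{0.499}. This is my paraphrase of the definitions, not a quote from the papers.]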
