This week I’m at the International Congress of Mathematicians (ICM) 2014 in Seoul, and thought I would post a quick update from a TCS perspective. See Tim Gowers’s blog for a much more comprehensive account. There are several other TCS folks here, and I hope some of them will post their impressions and recommendations as well.
For TCS the big news was of course that Subhash Khot won the Nevanlinna award for his work on the Unique Games Conjecture. As Omer mentioned, this is a research topic that I am extremely interested in, so I am very happy about this well-deserved choice. Subhash also gave a fantastic talk, which I highly recommend. Like many others, I was also excited to witness Maryam Mirzakhani become the first female mathematician to be awarded the Fields medal, and I hope we won’t have to wait too long for the first female Nevanlinna medalist.
All the plenary talks were videotaped, and I believe that sooner or later they will be available on this website, so I thought I would mention a few talks that TCS folks might want to look at. Every (plenary or section) talk also had an accompanying survey paper, which again I hope will be available online in the not-too-distant future. (Some people, like many of the TCS folks, have already posted their papers on the arxiv/eccc etc., and I hope we will see some more blog posts about them.)
Two talks that I particularly recommend are Emmanuel Candes’s talk on the “Mathematics of sparsity” and Manjul Bhargava’s talk on “Rational points on elliptic and hyperelliptic curves”.
Candes’s talk was an amazing exposition of the power and importance of algorithms. He showed how efficient algorithms can actually make the difference in treating kids with cancer! Specifically, one of the challenges in taking MRI images is that traditionally they take two minutes to make, during which the patient cannot take a single breath. You can imagine that this ranges from dangerous to nearly impossible for young children. The crucial difference is made by using a sublinear-samples algorithm (i.e., compressed sensing), which allows one to recover the images from far fewer samples, reducing the time to about 15 seconds. Another approach to dealing with this issue is to let the patient breathe but try to algorithmically correct for the movement. Here what they use [as far as I recall] is a decomposition into a low-rank plus a sparse matrix, which they achieve via a semidefinite program related to the famous Goemans-Williamson max-cut algorithm. Interestingly, the latter question is also related to the well-known lower bound question of matrix rigidity, and the parameters they achieve roughly correspond to the best known values for that question; somewhat (or extremely) speculatively, one can wonder whether an improved rigidity lower bound would end up being useful for this application.
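To make these two relaxations concrete, here is a toy sketch of mine (not Candes’s actual MRI pipeline; the sizes, data, and penalty weight are made up for illustration) of both convex programs, written against the cvxpy modeling library:

```python
# Toy versions of the two convex relaxations mentioned above:
# (1) compressed sensing: recover a sparse vector from m << n linear
#     samples by L1 minimization;
# (2) decompose an observed matrix into low-rank plus sparse parts via
#     nuclear-norm plus L1 minimization (an SDP under the hood).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# (1) Sparse recovery from few samples.
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sampling matrix
y = A @ x_true                                # only m samples of x_true

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
print("sparse recovery error:", np.linalg.norm(x.value - x_true))

# (2) Low-rank + sparse decomposition of an observed matrix M.
d = 30
L_true = np.outer(rng.standard_normal(d), rng.standard_normal(d))  # rank one
S_true = np.zeros((d, d))
S_true.flat[rng.choice(d * d, 20, replace=False)] = 10.0           # sparse spikes
M = L_true + S_true

L, S = cp.Variable((d, d)), cp.Variable((d, d))
lam = 1 / np.sqrt(d)  # the standard weight from the robust-PCA literature
cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
           [L + S == M]).solve()
print("low-rank part error:", np.linalg.norm(L.value - L_true))
```

The nuclear norm in the second program is what turns it into a semidefinite program, which is where the family resemblance to the Goemans-Williamson SDP comes from.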
Hearing Candes’s talk, I couldn’t help thinking that some of those advances could perhaps have been made sooner if the TCS community had closer ties to the applied math community, and had realized the relevance of concepts such as property testing and tools such as the Goemans-Williamson algorithm to these kinds of questions. Such missed opportunities are unfortunate for our community and (given the applications) also for society at large, which is another reason you should always try to go to talks in other areas.
Bhargava’s talk just blew me away. I can’t remember the last time I went to a talk in an area so far from my own and felt that I learned so much. I can’t recommend it enough, and of course, given the use of elliptic curves in cryptography, it’s not completely unrelated to TCS. I will not attempt any technical description of the talk [just watch it, or read the accompanying paper], but let me mention a TCS-related theme, which actually seems to appear in the works of some of the other Fields medalists as well.
One example of the “unreasonable effectiveness” of algorithms is that they often capture our notion of “mathematical understanding”. For example, a priori, the fact that the clique problem is NP-hard does not mean that we should not be able to figure out the clique number of a particular graph, such as a Cayley graph, but it turns out that this is actually a real obstacle to doing so. Similarly, in other areas of mathematics, whether it is figuring out the solution of a differential equation or the number of points on an elliptic curve, a priori the nonexistence of an algorithm should not preclude us from answering the question, but it often does. (I am deliberately conflating here the nonexistence of an algorithm with the nonexistence of an efficient algorithm; indeed, for any finite problem there is some trivial brute-force algorithm, but its existence does not help at all in achieving mathematical understanding.)
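To spell out that parenthetical remark, here is the trivial brute-force algorithm (a quick sketch of mine): it computes the clique number of any graph in exponential time, and its existence plainly conveys no understanding of any particular graph.

```python
from itertools import combinations

def clique_number(n, edges):
    """Brute-force clique number of a graph on vertices 0..n-1.
    Exponential time: it works, but teaches us nothing."""
    adj = set(edges) | {(v, u) for u, v in edges}
    for k in range(n, 0, -1):  # try sizes from largest down
        for s in combinations(range(n), k):
            if all((u, v) in adj for u, v in combinations(s, 2)):
                return k
    return 0

# A 5-cycle has clique number 2.
print(clique_number(5, [(i, (i + 1) % 5) for i in range(5)]))
```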
While we have a difficult time determining the clique number of any specific graph, we do have many tools to determine the clique number of a random graph. Such problems are still by no means trivial: e.g., rigorously determining the precise satisfiability threshold of a random 3SAT formula is still open. Bhargava tackled the problem of determining the number of rational points on a random elliptic curve. In particular, he proved that with some nonzero constant probability this number is infinite, and with some nonzero constant probability the number is zero [at least I think so; perhaps it’s only guaranteed to be finite]. This can be viewed as progress towards the Birch and Swinnerton-Dyer conjecture, which is one of the Clay math problems.
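As an aside, the 3SAT threshold is easy to observe experimentally even though pinning it down rigorously is hard. Here is a small Monte Carlo sketch of mine (brute-force satisfiability, hence tiny n, where finite-size effects blur the transition; the conjectured threshold is near clause density 4.27):

```python
# Estimate the probability that a random 3SAT formula with n variables
# and m = alpha*n clauses is satisfiable, for several densities alpha.
from itertools import product
import random

def random_3sat(n, m, rng):
    # Each clause: 3 distinct variables, each negated with probability 1/2.
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), 3)]
            for _ in range(m)]

def satisfiable(n, clauses):
    # Brute force over all 2^n assignments (fine only for small n).
    return any(all(any(assign[v] == sign for v, sign in clause)
                   for clause in clauses)
               for assign in product([False, True], repeat=n))

rng = random.Random(0)
n, trials = 12, 50
for alpha in [3.0, 4.0, 4.27, 5.0]:
    sat = sum(satisfiable(n, random_3sat(n, int(alpha * n), rng))
              for _ in range(trials))
    print(f"alpha={alpha:.2f}: {sat}/{trials} satisfiable")
```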
One interpretation of the “natural proofs” barrier is that to make progress on lower bounds, we need to develop more “non-constructive” proof techniques, ideally going beyond those that apply only to “random” objects. Perhaps some of these advanced probabilistic tools can still be used in this effort. Also, there have been some “non-constructive” results showing that deterministic objects have a certain “pseudo-random” property even in a setting where we don’t have algorithms to certify that a random object has that property. In particular, Bourgain (see this exposition by Rao) showed that a graph somewhat similar to the Paley graph has clique size at most $n^{1/2-\epsilon}$ (for some constant $\epsilon > 0$), even though we still don’t have an algorithm for the planted clique problem that can certify that a random graph has clique number at most $n^{1/2-\epsilon}$.
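For the curious, the Paley graph itself (Bourgain’s graph is only “somewhat similar”) is easy to play with. This sketch of mine builds it for a few small primes $p \equiv 1 \pmod 4$ and computes its clique number by brute force:

```python
# Paley graph on p vertices (p prime, p = 1 mod 4): x ~ y iff x - y is a
# nonzero quadratic residue mod p. Since p = 1 mod 4, -1 is a residue,
# so the relation is symmetric.
from itertools import combinations

def paley_adjacency(p):
    residues = {(x * x) % p for x in range(1, p)}
    return {(u, v) for u in range(p) for v in range(p)
            if u != v and (u - v) % p in residues}

def clique_number(n, adj):
    # Grow k until no (k+1)-clique exists (brute force; small n only).
    k = 1
    while any(all((u, v) in adj for u, v in combinations(s, 2))
              for s in combinations(range(n), k + 1)):
        k += 1
    return k

for p in [13, 17, 29]:  # small primes that are 1 mod 4
    print(p, clique_number(p, paley_adjacency(p)))
```

For the Paley graph itself the best upper bound we can prove on the clique number is still roughly $\sqrt{p}$, which is exactly what makes the $n^{1/2-\epsilon}$ bound for Bourgain’s variant interesting.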
Two other plenary talks that I liked at ICM were János Kollár’s talk on “The structure of algebraic varieties” and James Arthur’s talk on “L-functions and automorphic representations”. I can’t say I understood much of the latter, but I am now slightly less terrified of the Langlands program (though I still wouldn’t like to meet it in a dark alley...). While I also couldn’t follow much of the former, it still gave me an overview of the effort of classifying algebraic varieties, which I could imagine having possible TCS applications.