Today, I choose to remember the five amazing years I spent in MSR-SV Labs (which are unfortunately closing). In a place with no borders between research areas, I was free to follow my intellectual curiosity with colleagues I wouldn’t normally have the great fortune of working with. My non-theory colleagues have left me a much more complete computer scientist than I had ever been.
My theory colleagues left me in absolute awe! Being surrounded by the creativity and brilliance of a unique collection of young scientists was such a rush. I initiated Windows on Theory because I thought that this rush must be shared with everyone. I hope that the readers of this blog got a glimpse of the breadth and depth of my theory colleagues. I am confident that they will make many departments and research groups much better in the following months and years. My only regret is every free minute I didn’t spend learning from these wonderful colleagues and friends.
My email for now will be email@example.com, so drop me a line.
My hearty congratulations to the MacArthur Fellowship for handing down the right decision and naming Craig Gentry a fellow – better known as a “genius”. What a truly deserving winner! As readers of this blog know full well, Craig has done seminal work in cryptography – time and time again. In his prize-winning 2009 Ph.D. work, Craig achieved what many had considered impossible – fully homomorphic encryption. In just three years he (with co-authors Sanjam Garg and Shai Halevi) proposed another object – a cryptographic multilinear map – whose existence I’d been willing to bet against. Last year Craig (with several more co-authors) constructed an obfuscation mechanism with two amazing properties: it looks impossibly difficult to achieve, and useless for any cryptographic application. Both statements – you see the pattern here – are wrong. Indistinguishability obfuscation, as it has become known, quite plausibly exists, and we are still in the process of grasping its true potential.
The FOCS program is now online here.
Congratulations to Yin Tat Lee and Aaron Sidford for winning the best paper and best student paper awards for their paper “Solving Linear Programs in O˜(√rank) Iterations and Faster Algorithms for Maximum Flow”. They made an important advance in the theory of interior point methods by showing that you can actually converge faster, and match the non-constructive iteration bound of Nesterov and Nemirovsky, if you modify, on the fly, the path the algorithm takes. On top of that (and with a lot of extra work), they showed that these ideas can yield faster algorithms for Max-Flow in a broad range of parameters. It’s always nice to see how trying to solve one problem such as Max-Flow can yield unexpected payoffs in areas that at first sight may seem unrelated (Max-Cut is another great example of this phenomenon).
Of course, as I and some others mentioned, there are many other great papers in the conference, and the workshop/tutorial day is looking very good too. The schedule is also perhaps a bit saner this time around, with a bit less parallelism, and somewhat longer breaks than usual, so I am hoping to see many of this blog’s readers in Philadelphia in October! (Deadline for early registration and discounted hotel rate is September 22nd.)
Keeping up with the times, FOCS now has a more mobile-friendly website (thanks to Wolfgang Richter, who gave me access to the codebase of the SOSP 2013 website) and even a Twitter account (@focs14). We might even have an app – more on that later.
[Boaz's note: videos of all ICM 2014 talks, including Mark's talk discussed below, as well as the talks of Candes and Bhargava I mentioned before are available online here. In particular, if you still don't know how one constructs a fully homomorphic encryption scheme then you should (a) be ashamed of yourself and (b) watch Craig Gentry's talk to see a description of a simple scheme due to him, Sahai and Waters that will close this gap in your education. ]
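As an aside, the “approximate eigenvector” idea behind that scheme of Gentry, Sahai and Waters can be illustrated with a toy in Python: a ciphertext for a bit μ is a matrix C satisfying C·s ≡ μ·s (mod q) for a secret vector s, so matrix addition and multiplication act homomorphically on the encrypted bits. This noise-free version is of course completely insecure and omits everything that makes the real scheme work (the noise and the gadget machinery); it only demonstrates the eigenvector arithmetic.

```python
import numpy as np

q = 97                      # toy prime modulus
rng = np.random.default_rng(0)

def keygen(n=4):
    # secret vector; keep entries nonzero so decryption can invert s[-1] mod q
    return rng.integers(1, q, size=n)

def encrypt(mu, s):
    # Ciphertext C satisfies C @ s == mu * s (mod q):
    # C = mu*I + outer(r, t), with t chosen orthogonal to s mod q.
    n = len(s)
    t = rng.integers(0, q, size=n)
    t[-1] = (-(t[:-1] @ s[:-1]) * pow(int(s[-1]), -1, q)) % q  # force t·s ≡ 0
    r = rng.integers(0, q, size=n)
    return (mu * np.eye(n, dtype=np.int64) + np.outer(r, t)) % q

def decrypt(C, s):
    # (C @ s)[-1] == mu * s[-1] (mod q), so divide out s[-1]
    return (int((C @ s)[-1]) * pow(int(s[-1]), -1, q)) % q

s = keygen()
c0, c1 = encrypt(0, s), encrypt(1, s)

# Matrix addition adds the underlying bits; multiplication multiplies them.
assert decrypt((c0 + c1) % q, s) == 1   # 0 + 1
assert decrypt((c1 @ c1) % q, s) == 1   # 1 * 1
assert decrypt((c0 @ c1) % q, s) == 0   # 0 * 1
```

The heavy lifting in the real scheme is keeping this working once noise is added so that s is hidden; Craig’s talk explains how.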
Guest post by Mark Braverman
My survey covers recent developments in the area of interactive coding theory. This area has been quite active recently, with at least four papers on the topic appearing in the next FOCS. This level of activity means that parts of the survey will probably become obsolete within a few years. (In fact, I had to rewrite parts of it when the separation result by Ganor, Kol, and Raz was announced in April.) [See also this newer result that was posted after Mark sent me his text --Boaz]
The basic premise of interactive coding theory is extending the reach of classical coding and information theory to interactive scenarios. Broadly speaking, “coding” encompasses compression (a.k.a. noiseless coding), error correction (over both adversarial and randomized channels), and cryptography. The latter does not really fit with the rest of the agenda, since cryptographic protocols have always been interactive.
The interactive version of noiseless coding is communication complexity – and taking the information-theoretic view of it yields information complexity, which behaves as the interactive analogue of Shannon’s entropy. The analogue of Shannon’s Noiseless Coding Theorem holds in the interactive case. To what extent interactive compression is possible (i.e. to what extent an interactive analogue of Huffman Coding exists) is a wide-open problem.
On the noisy side, much progress has been made in the adversarial model, starting with the seminal work of Schulman in the 1990s. Many problems surrounding the interactive analogue of Shannon’s channel capacity, even for simple channels such as the Binary Symmetric Channel, remain open.
For the current state of affairs (surveyed for a Math audience) see my ICM survey which is available here.
The Simons Institute for the Theory of Computing at UC Berkeley invites applications for Research Fellowships for the research program on Cryptography that will take place in Summer 2015. These Fellowships are open to outstanding junior scientists (at most 6 years from PhD by 1 May 2015).
Further details and application instructions can be found at simons.berkeley.edu/fellows-summer2015. General information about the Simons Institute can be found at simons.berkeley.edu, and about the Cryptography program at simons.berkeley.edu/programs/crypto2015.
Deadline for applications: 30 September, 2014.
This week I’m at the International Congress of Mathematicians (ICM) 2014 in Seoul, and thought I would post a quick update from a TCS perspective. See Tim Gowers’s blog for a much more comprehensive account. There are several other TCS folks here, and I hope some would also post their impressions and recommendations as well.
For TCS the big news was of course that Subhash Khot won the Nevanlinna award for his work on the Unique Games Conjecture. As Omer mentioned, this is a research topic that I am extremely interested in, so I am very happy about this well-deserved choice. Subhash also gave a fantastic talk, which I highly recommend. Like many others, I was also excited to witness the first time a female mathematician, Maryam Mirzakhani, was awarded the Fields medal, and I hope we won’t have to wait too long for the first female Nevanlinna medalist.
All the plenary talks were videotaped, and I believe that sooner or later they will be available on this website, so I thought I would mention a few talks that TCS folks might want to look at. Every (plenary or section) talk also had an accompanying survey paper, which again I hope will be available online in the not-too-far future. (Some people, like many of the TCS folks, have already posted their papers on the arXiv/ECCC etc., and I hope we will see some more blog posts about them.)
Two talks that I particularly recommend are Emmanuel Candes’s talk on the “Mathematics of sparsity” and Manjul Bhargava’s talk on “Rational points on elliptic and hyperelliptic curves”.
Candes’s talk was an amazing exposition of the power and importance of algorithms. He showed how efficient algorithms can actually make the difference in treating kids with cancer! Specifically, one of the challenges in taking MRI images is that traditionally they take two minutes to make, during which the patient cannot take a single breath. You can imagine that this would be dangerous to nearly impossible to achieve for young children. The crucial difference is made by using a sublinear-samples algorithm (i.e. compressed sensing), which allows one to recover the images from far fewer samples, reducing the time to about 15 seconds. Another approach to dealing with this issue is to allow the patient to breathe but to algorithmically correct for the movement. Here what they use [as far as I recall] is a decomposition into a low-rank plus a sparse matrix, which they achieve via a semidefinite program related to the famous Goemans-Williamson Max-Cut algorithm. Interestingly, the latter question is also related to the well-known lower bound question of matrix rigidity, and the parameters they achieve roughly correspond to the best known values for this question – somewhat (extremely) speculatively, one can wonder if perhaps an improved rigidity lower bound would end up being useful for this application..
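In its simplest form, the compressed-sensing recovery behind these speedups amounts to finding the minimum-ℓ1-norm solution of an underdetermined linear system, which is just a linear program. Here is a minimal sketch in Python (toy dimensions chosen for illustration, random Gaussian measurements, scipy’s LP solver – not the actual MRI pipeline):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n, m, k = 40, 25, 3                 # signal length, measurements, sparsity
x0 = np.zeros(n)                    # unknown k-sparse signal
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n))     # random measurement matrix (m << n)
b = A @ x0                          # the m linear samples we get to see

# Basis pursuit: minimize ||x||_1 subject to A x = b.
# Standard LP reformulation: x = u - v with u, v >= 0, minimize sum(u) + sum(v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_rec = res.x[:n] - res.x[n:]

print("recovered exactly:", np.allclose(x_rec, x0, atol=1e-5))
```

With enough random measurements relative to the sparsity, the ℓ1 minimizer coincides with the true sparse signal – the surprising fact at the heart of compressed sensing.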
Hearing Candes’s talk, I couldn’t help thinking that some of those advances could perhaps have been made sooner if the TCS community had closer ties to the applied math community, and had realized the relevance of concepts such as property testing and tools such as the Goemans-Williamson algorithm to these kinds of questions. Such missed opportunities are unfortunate for our community and (given the applications) also for society at large, which is another reason you should always try to go to talks in other areas..
Bhargava’s talk just blew me away. I can’t remember the last time I went to a talk in an area so far from my own and felt that I learned so much. I can’t recommend it enough, and of course, given the use of elliptic curves in cryptography, it’s not completely unrelated to TCS. I will not attempt any technical description of the talk [just watch it, or read the accompanying paper], but let me mention a TCS-related theme, which actually seems to appear in the works of some of the other Fields medalists as well.
One example of the “unreasonable effectiveness” of algorithms is that they often capture our notion of “mathematical understanding”. For example, a priori, the fact that the clique problem is NP-hard does not mean that we should not be able to figure out the clique number of a particular graph such as a Cayley graph, but it turns out that this is actually a real obstacle to doing so. Similarly, in other areas of mathematics, whether it is figuring out the solution of a differential equation or the number of points on an elliptic curve, a priori the non-existence of an algorithm should not preclude us from answering the question, but it often does. (I am deliberately conflating here the non-existence of an algorithm with the non-existence of an efficient algorithm; indeed, for any finite problem there is some trivial brute-force algorithm, but its existence does not help at all in achieving mathematical understanding.)
While we have a difficult time determining the clique number of any specific graph, we do have many tools to determine the clique number of a random graph. Such problems are still by no means trivial: e.g., rigorously determining the precise satisfiability threshold of random 3SAT is still open. Bhargava tackled the problem of trying to determine the number of rational points on a random elliptic curve. In particular, he proved that with some nonzero constant probability this number is infinite, and with some nonzero constant probability the number is zero [at least I think so; perhaps it’s only guaranteed to be finite]. This can be viewed as progress towards the Birch and Swinnerton-Dyer conjecture, which is one of the Clay math problems.
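As a small aside, the sharp-threshold phenomenon behind the random 3SAT question is easy to observe empirically, even if pinning down the exact threshold (conjectured to be ≈ 4.27 clauses per variable) remains open. A brute-force toy in Python (the tiny instance sizes and trial counts below are arbitrary choices for illustration):

```python
import itertools
import random

random.seed(0)

def random_3sat(n, m):
    # m random clauses over n variables; a literal is (variable index, negated?)
    return [[(v, random.random() < 0.5)
             for v in random.sample(range(n), 3)]
            for _ in range(m)]

def satisfiable(n, clauses):
    # brute force over all 2^n assignments -- only viable for tiny n
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[v] != neg for v, neg in clause) for clause in clauses):
            return True
    return False

def sat_fraction(n, density, trials=20):
    m = int(density * n)
    return sum(satisfiable(n, random_3sat(n, m)) for _ in range(trials)) / trials

n = 12
low, high = sat_fraction(n, 2.0), sat_fraction(n, 7.0)
print(f"fraction satisfiable at density 2.0: {low:.2f}, at density 7.0: {high:.2f}")
```

Running this shows the satisfiable fraction dropping from near 1 at density 2 to near 0 at density 7; the remarkable fact is how sharp this transition becomes as n grows.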
One interpretation of the “natural proofs” barrier is that to make progress on lower bounds, we need to develop more “non-constructive” proof techniques, ideally going beyond those that apply only to “random” objects. Perhaps some of these advanced probabilistic tools can still be used in this effort. Also, there have been some “non-constructive” results showing that deterministic objects have a certain “pseudo-random” property even in a setting where we don’t have algorithms to certify that a random object has that property. In particular, Bourgain (see this exposition by Rao) showed that a graph somewhat similar to the Paley graph has clique size at most N^{1/2−ε} (for some fixed ε>0), even though we still don’t have an algorithm for the planted clique problem that can certify that a random graph has clique number that small.
Two other plenary talks that I liked at ICM were János Kollár’s talk on “The structure of algebraic varieties” and James Arthur’s talk on “L-functions and automorphic representations”. I can’t say I understood much of the latter, but I am now slightly less terrified of the Langlands program (though I still wouldn’t like to meet it in a dark alley..). While I also couldn’t follow much of the former, it still gave me an overview of the effort to classify algebraic varieties, which I could imagine would have possible TCS applications.
I am delighted by the news that Subhash Khot was awarded the Rolf Nevanlinna Prize. I am reminded of a time (many years ago) when Robert Krauthgamer and I were arguing about whether one of Subhash’s papers was more of a Complexity Theory paper or more of an Algorithms paper. While this was a foolish argument then (and even more so now), it reflected our shared excitement about that work.
This is also a good opportunity to recall Boaz’s post on the unique games and other conjectures.