
ITCS 2018 (guest post by Anna Karlin)

July 5, 2017

The ITCS 2018 Call For Papers is now available!  

ITCS is a conference that stands apart from all others. For a decade now, it has been celebrating the vibrancy and unity of our field of Theoretical Computer Science. See this blog post for a detailed discussion of what makes ITCS so cool, and see the brief description of ITCS’17 at the end of this post.

ITCS seeks to promote research that carries a strong conceptual message  (e.g., introducing a new concept, model or understanding, opening a new line of inquiry within traditional or interdisciplinary areas, introducing new mathematical techniques and methodologies, or new applications of known techniques). ITCS welcomes both conceptual and technical contributions whose contents will advance and inspire the greater theory community.

This year, ITCS will be held at MIT in Cambridge, MA from January 11-14, 2018.

The submission deadline is September 8, 2017, with notification of decisions by October 30, 2017.

Authors should strive to make their papers accessible not only to experts in their subarea, but also to the theory community at large. The committee will place a premium on writing that conveys clearly and in the simplest possible way what the paper is accomplishing.

Ten-page versions of accepted papers will be published in an electronic proceedings of the conference. However, the alternative of publishing a one page abstract with a link to a full PDF will also be available (to accommodate subsequent publication in journals that would not consider results that have been published in preliminary form in a conference proceedings).

You can find all the details in the official Call For Papers.

 

On last year’s ITCS (by the PC Chair Christos Papadimitriou)

This past ITCS (2017) was by all accounts the most successful ever.  We had 170+ submissions and 61 papers, including 5 “invited papers”, and 90+ registrants, all new records.  There was a voluntary poster session where authors could present their work in more detail, and the famous Graduating Bits event, where the younger ones get their 5 minutes to show off their accomplishments and personality.

The spirit of the conference was invigorating, heartwarming, and great fun.  I believe none of the twelve sessions had fewer than 70 attendees — no parallelism, of course — while the now famous last session was among the best attended and went one hour overtime due to the excitement of discussion (compare with the last large conference that you attended).

TheoryFest + wutorial updates

June 23, 2017

Tomorrow is the last day of TheoryFest. My sense is that it was very successful by the most important metric: it was a great event for the people who attended it.  However, we will know more about this once we send out a questionnaire to attendees next week. (Please respond when you get it!)

Many people helped in making TheoryFest happen (my own role was quite limited: chairing the short invited paper committee), but two people that I think are worth singling out are the general chairs Hamed Hatami and Pierre McKenzie. Doing local arrangements often means a lot of work dealing with hotels, food, registrations, etc., without much recognition, but they really did an amazing job.

I might attempt later some longer expositions of some of the talks, but in the meantime, see Suresh’s and Dick’s posts. Luca writes about the awards. The best student paper awardee, Pasin Manurangsi, gave a wonderful talk about his almost tight hardness of approximation result for the classical densest-k-subgraph problem.  He managed to explain clearly what the result is and what the limits are of what we can hope for, and to convey the main ideas of the proof, all in a 20-minute talk. The proof itself is beautiful: the reduction (from 3SAT) is simple and natural, but the analysis is quite subtle, combining clever new ideas with classical results in combinatorics.

Tomorrow is the workshop day, which I am also looking forward to. Tselil Schramm said that we should probably have called our SOS workshop a “wutorial” since, while we will cover some “hot off the presses” results, we are planning to mostly make the talks an accessible introduction to the field. In particular, Pablo Parrilo’s talk will be a broad overview of the sum of squares algorithm’s history, applications, and connections to other areas. (See my previous post for a description of all the talks.) That said, there are some other great workshops in parallel to ours. That is probably one of the biggest problems with TheoryFest: too much great content and too little time.

TheoryFest begins

June 19, 2017

I arrived in Montreal tonight and have already marked the talks I want to attend on the STOC mobile app. I am looking forward to a great program.

I can already see several time slots where I would like to attend two or three of the talks that run in parallel. One day where this will definitely be the case is the workshop day.

All of the workshops seem very interesting. Eli mentioned the workshop on PCPs. It’s amazing to see how probabilistically checkable proofs, which when I was a student seemed like the quintessential hard and abstract area of theory, are now the foundation for startup companies.

The workshop on “user friendly” methods from continuous optimization could also be very useful. One drawback of a method being so widely useful is that, because many communities use it, the same concepts are often described in very different ways and terminologies.  Recently some significant advances were made by bridging ideas from optimization and TCS, and this workshop is organized by some of the experts who did that and can help the rest of us understand these different viewpoints.

The workshop on mechanism design has a packed schedule featuring many of the experts in this area, including sessions about the tradeoffs of simplicity (which is often crucial for mechanisms) and connections to learning. Speaking of learning, the workshop on machine learning has a full day of talks on what is now one of the most active areas of interaction with theory. In fact, learning research is so active that it has direct interactions with several of the areas (such as continuous optimization, mechanism design, and sum of squares) covered in the concurrent workshops… but we TCS people know that bounded resources often necessitate tradeoffs.

I am involved in organizing (with Pablo Parrilo and Tselil Schramm) one of the workshops, on the sum of squares algorithm. This workshop will consist of tutorial-style talks that assume only a general TCS background but will also cover very recent work.

Pablo (who is one of the inventors of the SOS algorithm) will talk about some of its applications outside TCS, and then about how the symmetry reduction technique, which is important in practical applications, turns out to be crucial also for proving combinatorial inequalities on graphs using flag algebras.

Tselil will talk about spectral algorithms inspired by SOS, and how these can be used to give new algorithms, and sometimes even quite efficient ones, that avoid using general SDP solvers.

Sam Hopkins will talk about obtaining SOS lower and upper bounds using a computational Bayesian perspective in the context of planted problems such as planted clique and community detection, and how low-degree statistics play a crucial role.

Finally, I will talk about whether SOS could be an “optimal algorithm” and what its relation is to Khot’s Unique Games Conjecture.  If you want to know what this means, or are one of the people who cringe at the notion of an “optimal algorithm” and want to upbraid me in person, this talk is for you.

The 1st Symposium on Simplicity in Algorithms (guest post)

June 17, 2017

[Guest post from Seth Pettie and the SOSA steering committee. –Boaz]

Attendees of the SODA’17 business meeting may recall our proposal for an algorithms conference dedicated to simplicity and elegance. We appreciate all the encouragement that we received from the community.

Thanks to the support from SIAM, the First Symposium on Simplicity in Algorithms (SOSA) will happen, and will be co-located with SODA’18, with a submission deadline of mid-to-late August (TBA). Raimund Seidel is the first PC Chair and is now assembling an impressive PC.

We’d like to encourage algorithms researchers to think about potential submissions.

SOSA is dedicated to advancing algorithms research by promoting simplicity and elegance in the design and analysis of algorithms.  The benefits of simplicity are manifold. Simpler algorithms improve our understanding of hard problems; they are more likely to be implemented and trusted by practitioners; they are more easily taught and are more likely to be included in algorithms textbooks; they attract a broader set of researchers to tackle difficult algorithmic problems.

Simplicity is prized in our community. We love to learn and teach algorithms that are “from The Book.”

However, simplicity frequently goes unrewarded. Often there is no appropriate venue to publish an alternative solution to a problem, especially if the new solution has a slightly worse running time. It’s almost a cliche to hear: “What a pretty result.” “Yep. We won’t be able to publish it.”

There are many reasons that a result could qualify as simple. For example, a submission might

• introduce a new simpler algorithm,

• present a simpler analysis of an existing algorithm, or

• offer insights that generally simplify our understanding of important algorithmic problems.

We hope that you’ll share your simple and elegant results by submitting them to the 1st SOSA.  Short and sweet papers are especially encouraged.

Your algorithms may already appear in “The Book,” but we’d also like to see them in SOSA.

The SOSA Steering Committee,

M. Bender, D. Karger, T. Kopelowitz, S. Pettie, R. Tarjan, and M. Thorup

Bitcoin and Theoretical Computer Science (guest post by Eli Ben-Sasson)

June 15, 2017

[This is a guest post by Eli Ben-Sasson, mentioning only some of the fascinating TCS connections to crypto-currencies. If you’re interested in more then you should check out the TheoryFest talks that Eli mentions. –Boaz]

What is Bitcoin? Why should TCS care?

Eli Ben-Sasson

In Crypto We Trust

You probably heard of Bitcoin, the crypto-currency invented in 2008 by an unidentified person (or persons) under the pseudonym Satoshi Nakamoto. When I started working on this blog post, Bitcoin’s market cap was around 30B USD. Now, a month later, it exceeds 40B USD. Although its long-term price stability is debatable, it has already spawned hundreds of similar crypto-currencies and a flourishing startup economy. It is also the method of payment preferred by certain ransomware.

I got involved with crypto-currencies when I attended the Bitcoin Conference in San Jose in May 2013, to talk about how implementations of cryptographic proof systems, like PCPs and zero-knowledge proofs, could benefit Bitcoin. This led to my involvement in Zerocash (academic paper), and ZCash (its implementation as a crypto-currency); but that’s a topic for a different blog post. That conference was, by far, the most electrifying I have ever attended. The atmosphere was manic and there were lines of people waiting to get autographs from Bitcoin core developers (!). The excitement wasn’t just because Bitcoin’s price had recently soared (though that certainly helped) but because people sincerely felt that they were “writing History”.

Time will tell if Bitcoin is a disruptive technology or just a modern Tulip Mania. But Bitcoin has already achieved something quite amazing. It showed for the first time (to the best of my knowledge) that cryptographic protocols over a decentralized network can practically replace functions that, or so we thought, could only be managed by a Central Trusted Party (CTP). In the case of Bitcoin, this function is Fiat Money (explained below). Bitcoin’s motto, “In Crypto We Trust”, says it all. I admit that this revelation, that Money can be created “out of thin air” in this manner, and that people all over the world will attach economic value to it, has surprised me even more than the success of Wikipedia. As with Wikipedia, I would never have guessed it to be possible.

Fiat Money: Way back, Money was associated with physical resources whose amount is limited by Nature, like seashells or gold. Money still takes this form, of limited physical resources, in closed micro-societies like prisons (cigarettes) and elementary schools (bubble gum wrappers, in mine). “Fiat” is Latin for “it shall be”; Fiat Money is a resource made limited by decision, and, prior to Bitcoin, this decision lay in the hands of CTPs like Central Banks, whose Monetary Policy determines how much money is minted (or removed from circulation) and who gets it.

Bitcoin’s protocol has set in stone an ingeniously simple monetary policy that incentivizes participants to reach consensus regarding Bitcoin’s decentralized payment ledger, called the blockchain. If you haven’t heard yet about mining and the blockchain, you should spend a few minutes and watch one of these excellent introductory videos [1, 2], or read one of these short explanations [1, 2]. My brief recap of it appears at the end of this post.

Bitcoin and theoretical computer science

The connections between Bitcoin and TCS are old and strong. Bitcoin’s reliance on moderately hard functions goes back to the fundamental TCS work of Dwork and Naor on combatting spam mail. The consensus mechanism that Bitcoin uses addresses in a novel way the Byzantine agreement problem, a cornerstone of distributed computation dating back to the 1980 work of Pease, Shostak and Lamport, and understanding the exact nature of this solution is a matter of ongoing research [cf. 1, 2, 3].

As a theoretician who happens to talk to Bitcoin (and other crypto-currency) developers, I am constantly impressed by how deeply they care about our research, and how passionate they are about learning it. This is a gratifying experience that I have not had in prior areas of theory that I have worked in. I believe this feeling is shared by other theoreticians working in this space.

But most importantly for theoreticians, Bitcoin’s core innovation, the creation of economic value from protocols and algorithms, makes you wonder. Human societies have used various ways to reach consensus and govern pooled resources. Fiat Money is just one case, and things like Law, Politics and Government loom behind. Could some other concoction of crypto and economic incentives lead to the practice of new forms of Governance?

You can hear more next week at the Theory Fest in Montreal:
+ Monday, June 19: Aviv Zohar will give a tutorial on Decentralized Cryptocurrencies

 

Appendix – Bitcoin’s mining protocol

The ideal functionality that Bitcoin attempts to simulate is that of an append-only ledger, or sequence, of transactions (like “Alice pays Bob 10 coins”) that are individually valid (Alice has 10 coins at her disposal before paying Bob). The real-world model in which Bitcoin tries to simulate this is a decentralized and dynamically evolving network of nodes, or miners; each miner is assumed to be anonymous, rational, and selfish. Bitcoin suggests a simple protocol that ensures that a sufficient majority of honest nodes agrees on a valid prefix of the transaction ledger, and furthermore is incentivized to accept new transactions. Transaction fees suffice to incentivize miners to add new transactions to the ledger. Reaching consensus is the tricky part. Bitcoin’s protocol incentivizes miners to reach consensus by paying a reward of C new coins to the first miner that manages to extend the blockchain by a new valid block. The miner reward is part of that block, and this is how new money is printed. All miners record (their view of) the whole blockchain, and will reject a block if it contains even a single invalid transaction; this incentivizes miners to report only valid blocks (recall that the miner’s reward is part of her reported block).
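To make this ideal functionality a bit more concrete, here is a minimal Python sketch of an append-only ledger with per-block validity checking. The data structures, names, and reward value are illustrative assumptions of mine, not Bitcoin’s actual formats.

```python
# A minimal sketch (not Bitcoin's actual data structures) of the ideal
# functionality above: balances are updated block by block, and a block
# containing even a single invalid transaction is rejected outright.
# The reward constant and all names are made up for illustration.

BLOCK_REWARD = 12.5  # the "C new coins" paid to the block's miner (illustrative value)

def apply_block(balances, block):
    """Return updated balances if every transaction in the block is valid;
    return None (reject the whole block) otherwise, as honest miners do."""
    new_balances = dict(balances)
    for sender, receiver, amount in block["transactions"]:
        if sender is not None:  # None marks the coinbase/reward transaction
            if new_balances.get(sender, 0) < amount:
                return None  # one invalid transaction invalidates the block
            new_balances[sender] -= amount
        new_balances[receiver] = new_balances.get(receiver, 0) + amount
    return new_balances

# The miner's reward is itself a transaction inside the block she reports.
balances = {"Alice": 10}
good_block = {"transactions": [(None, "Miner", BLOCK_REWARD), ("Alice", "Bob", 10)]}
print(apply_block(balances, good_block))   # {'Alice': 0, 'Miner': 12.5, 'Bob': 10}
bad_block = {"transactions": [("Alice", "Bob", 999)]}
print(apply_block(balances, bad_block))    # None: rejected
```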

The reward is a hefty sum (at the time of writing, more than $35,000), so miners compete to win it. The winner is the first miner that solves a moderately hard problem (MHP) P(B,b) that is specified by the blockchain state B and the new block b that the miner is attempting to append to B. While the problem is moderately hard to solve, verifying correct solutions for P(B,b) is easy. Satoshi’s protocol says that miners should always attempt to extend the longest valid blockchain. If Alice announces her solution first, most likely (up to network delays) most honest miners will accept the solution, and consequently will try to extend the new, longer, blockchain B’=B||b. As long as a majority of mining computation power is honest, an attacker has a low probability of success in rewriting the blockchain history, because the “honest history” will grow longer much faster than his “dishonest” version of it. Thus, the much-needed stability property is achieved in Bitcoin.
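As a toy illustration of the “moderately hard to solve, easy to verify” asymmetry and the longest-chain rule, here is a heavily simplified sketch in which P(B,b) is replaced by a hash-prefix puzzle. The puzzle, the difficulty setting, and the function names are illustrative assumptions, not Bitcoin’s actual proof-of-work parameters.

```python
import hashlib

# Toy stand-in for the moderately hard problem P(B, b): find a nonce so that
# the hash of (chain tip, new block, nonce) starts with a few zero hex digits.
# Bitcoin uses double SHA-256 against a difficulty target; the difficulty here
# is tiny so that the example runs in a fraction of a second.
DIFFICULTY = 4  # leading zero hex digits required (illustrative)

def verify_pow(prev_hash: str, block_data: str, nonce: int) -> bool:
    """Easy to verify: a single hash evaluation."""
    digest = hashlib.sha256(f"{prev_hash}|{block_data}|{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

def solve_pow(prev_hash: str, block_data: str) -> int:
    """Moderately hard to solve: brute-force search for a valid nonce."""
    nonce = 0
    while not verify_pow(prev_hash, block_data, nonce):
        nonce += 1
    return nonce

def choose_chain(chains):
    """Satoshi's rule in this toy model: always extend the longest valid chain."""
    return max(chains, key=len)

nonce = solve_pow("genesis", "Alice pays Bob 10")
print(nonce, verify_pow("genesis", "Alice pays Bob 10", nonce))  # ... True
```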

“TCS: The Next Decade” Panel at STOC 2017

June 14, 2017

(Guest post by Anna Karlin)

As you know, there will be a panel on “Theoretical Computer Science: The Next Decade” on Wednesday morning at STOC 2017.
The panelists are Andy Yao, Russell Impagliazzo, Cynthia Dwork, Dan Spielman, Tim Roughgarden and Ankur Moitra.
As moderator, I’d love to hear any specific questions/topics you’d like to hear the panelists pontificate about.

Send any thoughts along these lines to Anna Karlin (karlin at cs dot washington dot edu).

Any questions/suggestions that we don’t use will be posted to this blog after the panel takes place.

Thanks!

Two years ahead of schedule?

June 11, 2017

When I taught my crypto course in the spring of 2016, I motivated the study of lattice-based cryptography by presenting the following spoofed NYTimes headline from four years into the future:

[Image: spoofed NYTimes headline]

It seems like Google is trying to achieve this much earlier:

 

[Image: news item about Google’s quantum computing plans]

If and when a convincing “quantum supremacy” demonstration emerges, it would be interesting to see the effect on classical public key crypto systems. If communication needs to be kept secret for X years, and it takes Y years to transition to a new standard, then the transition away from RSA, Diffie-Hellman and Elliptic curves needs to start the moment we suspect that we are X+Y years from quantum computers large enough to break them, which is probably why the NSA already started this process.
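The timing argument is just arithmetic; here is a minimal sketch with purely illustrative numbers (none of these values come from the post).

```python
# Back-of-the-envelope version of the X + Y rule above, with made-up numbers.
X = 20  # years the communication must stay secret (illustrative)
Y = 10  # years needed to transition to a new standard (illustrative)
Q = 25  # estimated years until quantum computers can break RSA/ECC (illustrative)

if Q <= X + Y:
    print("Start transitioning now: traffic recorded today could be decrypted "
          "while it is still sensitive.")
else:
    print(f"Roughly {Q - (X + Y)} years of slack before the transition must begin.")
```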

(BTW the “president Trump” part in the spoofed headline was a joke (or at least I thought so at the time): this was two months before Mr. Trump became the GOP presumptive nominee.)