
Turning STOC 2017 into a “Theory Festival”

May 7, 2015

This blog post seeks to solicit input from the theoretical CS community on possible changes to STOC starting in 2017. This planning was set in motion as a result of a long discussion session at FOCS 2014 (see these two earlier posts), where strong support was expressed for a longer “theory festival” that would include STOC but also have other events and broader appeal.

The SIGACT executive committee has asked us (Sanjeev, Boaz, Piotr, Omer and Tim) to prepare suggestions for the scope and shape of such an event, which we will present in the STOC 2015 business meeting. We welcome your input, either as comments below or as email to the address theoryfestival at gmail.com. We are also studying conference structures in other research communities, and soliciting opinions directly from many individuals.

Main consideration: What role should STOC play these days, now that papers are electronically available many months before the conference? Perhaps the focus should be on promoting network effects. Exchange of ideas between sub-communities is an important feature of theoretical CS, and one of the main reasons for holding a “big-tent” conference in the first place. An event with a bigger menu of activities to pick and choose from, as well as a strong plenary component, could serve a variety of audiences and bring together people from different subfields, potentially increasing attendance. This means papers get more exposure and attendees have a higher probability of connecting with colleagues, all of which makes the conference even more attractive, thus creating a virtuous cycle.

What kind of activities might such a “theory festival” contain above and beyond a typical STOC? Some options are:

  1. Plenary talks: In recent STOC/FOCS’s there has been very little room for them, which is a waste of a rare opportunity when broad swaths of the theory community get together. Such plenary talks could include:

    a) “Outward looking talks” by leading researchers in adjacent fields, whether it’s other parts of computer science or other sciences such as mathematics, physics, biology or economics.

    b) “Inward looking talks” by leading theoretical computer scientists that can survey for the audience recent advances in some area of TCS. (Recent STOC/FOCS’s have sometimes contained such talks, but often very few of them.)

    c) Presentations of recent technical work, whether papers from the current conference (where typically these days only the best paper awardees get a plenary slot) or top papers from other theory conferences.

  2. Workshops/tutorials: Recent FOCS/STOC have had a day of these, but it would be good to include more “mini workshops”, ranging from a couple of hours to a full day. Researchers often face time and travel-budget constraints that can preclude them from attending a broad conference such as STOC/FOCS, where only a tiny fraction of the talks will be in their immediate area. Having a specialized workshop as part of the festival might make attending more appealing, promote information exchange within subcommunities, and allow subcommunities to present their most important work in a polished form to the STOC attendees.
  3. Social activities: These bring the community together and help researchers of different sub-fields, geographical areas, and seniority levels get to know one another. Currently, STOC/FOCS have little room for these.
  4. Poster session, rump session, etc.: Poster sessions (consisting of papers selected by the PC) are an important part of conferences such as NIPS. They present information in a way that makes it possible to interactively browse through a vast menu, and give attendees the ability to decide how much time and attention (whether 2, 5 or 10 minutes) to devote to each item. Poster sessions have had limited impact on theory conferences so far, but this could easily change. If poster sessions were selected by the PC, they would attract better papers, which in turn would draw a bigger audience, which in turn would attract even better papers. Poster sessions could also give smaller sub-communities within theoretical CS a more permanent place at STOC, instead of being represented by a couple of papers.

We’ve been tossing around some of the above ideas, but we would love to hear more suggestions for other activities and content that could make the conference more attractive for you.

Alas, even if STOC expands to 5 or 6 days, it may not be possible to fit everything without some tradeoffs. So it is important to us to learn which tradeoffs are acceptable to you. Possible approaches to get more flexibility in scheduling include:

  1. More parallelism – moving from 2 parallel sessions to 3 parallel sessions for those talks that are not scheduled in the plenary session. (Note however that if there is 50% growth in attendance then each talk could still get the same visibility. In addition, accepted papers could be presented in evening poster sessions.)
  2. Paper presentations of varying lengths. Some papers presented at plenary sessions; others presented in shorter talks (possibly accompanied by poster presentations), as is done at NIPS. One can also have a “fast forward session” (as at SIGGRAPH) where authors give a brief presentation of their paper (a.k.a. an “advertisement talk”) so that people know which talks to look for in the parallel sessions. Experimenting with different talk formats might be a good idea regardless of schedule constraints: we have become very used to the standard format of a 20 minute talk in 2 parallel sessions, but there may be other ways to facilitate an exchange of ideas and enable participants to get samples from areas of research outside their own.
  3. The default presentation mode for most STOC papers could be a poster presentation together with a 3-5 minute “advertisement talk”. A subset of papers would be selected for the plenary sessions (which would, as mentioned above, also feature TCS papers from other conferences). Other papers might be presented as part of the specialized workshops and tutorials.

What do you think? What would make you more likely to go to STOC? What changes could turn you off? Please do comment below.

Two requests: First, we prefer signed comments, but if you must comment anonymously, please include some information on your research area, the country where you work, your current academic status (e.g., student, postdoc, faculty), and any other pertinent information. One of our goals is to understand whether STOC/FOCS serve some populations better than others. Second, please make no comments about open access and copyright issues, or issues such as conferences vs. journals, as those are beyond the purview of this working group, which is focused on making STOC a better event for the people who attend it.

We will monitor these comments until the STOC 2015 business meeting where we hope to hear from you in person. We are looking forward to your input!

Sanjeev Arora, Boaz Barak, Piotr Indyk, Omer Reingold and Tim Roughgarden

DIMACS looking for an associate director

April 14, 2015

The DIMACS Center at Rutgers University (dimacs.rutgers.edu) is seeking an Associate Director. DIMACS facilitates research, education, and outreach in discrete mathematics, computer science theory, algorithms, mathematical and statistical methods, and their applications.  The Associate Director is expected to play a leadership role in planning, developing, and running DIMACS activities and programs, including setting new directions.  A PhD in computer science, mathematics, operations research, statistics, or a related field is preferred. Applicants with a doctorate will be considered for Associate Director with a non-tenure-track calendar-year research faculty appointment at Rutgers. Highly qualified applicants without a doctorate will be considered for a staff appointment as Assistant Director. For more details, visit dimacs.rutgers.edu/AssociateDirector.

2015 Swedish Summer School in Computer Science – apply soon

March 25, 2015

Last summer I gave a mini-course on the Sum of Squares algorithm at the Swedish Summer School in Computer Science.

It was a great experience – the venue was Djurönäset, a hotel in the beautiful Stockholm archipelago with stunning views and great food. It was organized very smoothly by Jakob Nordström, Per Austrin, and Johan Håstad, and the students were bright and enthusiastic. I could see them collaborating on homework problems well into the night, and I’m sure that beyond learning about theoretical computer science, they made great new connections.

In fact, I believe the only downside to that summer school was my own course – this was the first time I taught this material and so it felt very much like a beta version. (On the other hand, Ryan O’Donnell gave a fantastic lecture series on analysis of Boolean functions, so the students didn’t suffer too much.) Luckily, this year the summer school features two absolute top experts on coding theory, Venkat Guruswami and Sergey Yekhanin, who will give courses on developments that are not yet as widely known in the theoretical computer science community as they should be, but have had wide-reaching impact. Polar codes are considered one of the great recent breakthroughs in coding theory, as they provide in some sense the first explicit codes that achieve Shannon’s non-constructive bounds for the capacity of the binary symmetric channel. Codes with local decoding are a prime example of the surprising interplay between theory and practice. Originally arising from probabilistically checkable proofs, they have recently found very significant industrial applications, saving companies hundreds of millions of dollars.

I hear that there are still a few slots left. If you are a Ph.D. student, master’s student, or postdoc who wants to know more about coding theory, I can hardly think of a more pleasant and beneficial way to do so. If you are interested, please apply as soon as possible at http://s3cs.csc.kth.se/application.php. For any questions, the organizers can be contacted at s3cs-2015@csc.kth.se.

NSF mandates (sort of) open access

March 24, 2015

See here.

NSF-sponsored papers should be freely available no more than 12 months after publication in a journal.

This is not perfect, but a step in the right direction. Computer scientists should insist that conference proceedings are treated the same way, and made freely available no more than 12 months after publication.

Hat tip: Lance Fortnow.

Tips for future FOCS/STOC program chairs

March 2, 2015

There are only two FOCS/STOC chairs per year, and most would do just fine without my unsolicited advice. Nevertheless, I thought it might make sense to write down some of my thoughts after chairing FOCS 2014, and perhaps even some people that are not program chairs will find it of interest.

This is based on advice I received from many people, some of whom are mentioned by name but most of whom are not. However, the opinions here are my own. Others’ opinions (including those of the people mentioned) may vary: there is more than one way to run this process. Despite the length of this document, there are still many points and details that I omitted. If you are going to be a program chair, feel free to contact me for more advice, examples of forms, documents, and my (badly written and comment-free) code for scripts and website. You can also ask questions in the comments below.

Many discussions about conferences focus on issues of process – one-tier vs. two-tier PC, anonymous submissions, page limits, conflict-of-interest rules, etc. These are all important issues, and as you’ll see below I do have my own opinions on them. But I believe that to a first approximation the only thing that matters is the work that is put in collectively by the chair, PC, and reviewers. If the papers get reviewed by the right people, who spend sufficient time to do a quality job, then the results will probably be fine no matter what process you use; otherwise no process will save you. To a large degree, whether this happens is controlled by the amount of effort you put in as chair, first in selecting the right people and then in constantly staying on top of the process, making sure that no paper falls through the cracks.

There are two common arguments why things will turn out fine even if you don’t work as hard as you could. I disagree with them both:

“No matter what, the good papers will get in, the bad papers will be rejected, and the only job of the PC is to figure out the decisions for the papers in the middle.”

This is patently false as we all know famous examples of great papers that were nonetheless rejected from conferences. There is no reason to think that such colossal mistakes were unavoidable and could not have been prevented by the PC working harder. For every one of those famous examples, there are many other “near misses” where the PC made the right decision only because of the diligence and persistence of one person.

“Even if you reject a great paper, it’s not so bad, since the author can always submit it to the next conference.”

Sure, the author will do fine, but this is not the point. You don’t work for the authors but for the community, and it’s not the conference that honors the paper – it is the paper that honors the conference. So when a great paper is omitted from the program it is the conference, and by extension the community, that suffers. This is true even if the paper appears in the next conference, but there are also other cases: the paper might be rejected again, or (increasingly likely these days) the author might submit the paper to a specialized conference, and so it might take much more time for the work to get known by the general theoretical CS community.

With general ruminations out of the way, here are some concrete tips in chronological order, from the time you are asked to be a program chair to the actual conference.

Read more…

Erdős’s Book and the Asymptotic Religion

January 12, 2015

In an undergraduate algorithms class we learn that an algorithm is a high-level way to describe a computer program. The running time of the algorithm is the number of operations it takes on inputs of a particular size – the smaller the better. So, as even Barack Obama knows, if you implement Quick-Sort, with its running time of {O(n\log n)}, it will run faster than the {O(n^2)}-time algorithm Bubble-Sort. However, if you pick up a more recent algorithms paper, things become a little murkier. Many papers feature algorithms with running times such as {O(n^6)} or even {O(n^{20})}, and even some linear-time algorithms have hidden constants that make you want to run to your bed and hide under the covers. Asymptotic analysis does become more applicable as technology improves and our inputs grow, but some people may still question the practical relevance of, say, an {O(n^{10})}-time algorithm, which seems no more feasible than the trivial {2^n}-time brute force algorithm.
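To make the asymptotic gap concrete, here is a minimal Python sketch (my own illustration, not part of the original post) that times a quadratic-time Bubble-Sort against an expected {O(n\log n)}-time Quick-Sort on the same random input; even at this modest input size the gap should already be noticeable on typical hardware.

```python
import random
import time

def bubble_sort(a):
    """O(n^2): repeatedly sweep the list, swapping adjacent out-of-order pairs."""
    a = list(a)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    """Expected O(n log n): partition around a random pivot and recurse."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

if __name__ == "__main__":
    data = [random.random() for _ in range(5000)]
    for sort in (quick_sort, bubble_sort):
        start = time.perf_counter()
        assert sort(data) == sorted(data)
        print(f"{sort.__name__}: {time.perf_counter() - start:.3f}s")
```

Of course, this toy comparison only illustrates the textbook point; the question raised in this post is what such asymptotic statements mean once the exponents and constants stop being small.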

So why do we care about getting asymptotically good algorithms? Every TCS graduate student should be able to recite the “party line” answer that it has happened again and again that once a problem has been shown to be in polynomial time, people managed to come up with algorithms with small exponents (e.g. quadratic) and reasonable leading constants. There is certainly some truth to that (see for example the case of the Ellipsoid and Interior Point algorithms for linear programming) but is there an underlying reason for this pattern, or is it simply a collection of historical accidents?

Paul Erdős envisioned that “God” holds a book with the most elegant proof for every mathematical theorem. In a famous quote he said that “you don’t have to believe in God, but you have to believe in the Book”. Similarly, one can envision a book with the “best” (most efficient, elegant, “right”) algorithms for every computational problem. The {2^n}-time brute force algorithm is in this book, and the essence of the {\mathbf{P}\neq\mathbf{NP}} conjecture is that this exhaustive search algorithm is in fact the best algorithm for some problems in {\mathbf{NP}}. When a researcher shows a cumbersome and impractical {O(n^{20})}-time algorithm A for a problem {X}, the point is not that you should code up A and use it to solve the problem {X} in practice. Rather, the point is that A‘s existence proves that  {X} is not one of those problems for which brute force is optimal. This is a very significant piece of information, since it tells us that the “Book algorithm” for {X} will be much faster than brute force, and indeed once a non-trivial algorithm is given for a problem, further improvements often follow as researchers get closer and closer to the true “Book algorithm” for it. We’ve seen enough of the Book to know that not all algorithms in it are either {O(n^2)} or {2^{\Omega(n)}} time, but there should still be an inherent reason for the “Book algorithm” to have a non-trivial exponent. If the best algorithm for some problem {X} is, say, {n^{\log n}}-time, then by reading that page of the Book we should understand why this is the right answer (possibly because solving {X} boils down to running the brute force algorithm on some smaller domain).
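As a point of reference for the {2^n}-time exhaustive search discussed above, here is a short, purely illustrative Python sketch of brute-force SAT (the signed-integer clause encoding is my own choice, following the common DIMACS-style convention); nothing in the post depends on this particular code.

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Exhaustive 2^n search: try every assignment to n Boolean variables.

    Each clause is a list of nonzero integers; literal i refers to variable |i|
    and is negated when i < 0.
    """
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], n=3))
```

A cumbersome {O(n^{20})}-time algorithm for some problem is valuable precisely because it shows that the Book’s page for that problem does not look like this loop over all {2^n} assignments.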

The same intuition holds for hardness results as well. The famous PCP theorem can be thought of as a reduction from the task of exactly solving the problem SAT to the task of solving it approximately. But if you apply the original PCP reduction to the smallest known hard instances of SAT (which have several hundred variables) the resulting instance would be too large to fit in even the NSA’s computers. What is the meaning of such a reduction? This issue also arises in cryptography. As some people like to point out, often when you reduce the security of a cryptographic scheme Y to the hardness of some problem X, the reduction only guarantees security for key sizes that are much larger than what we’d like to use in practice. So why do we still care about such reductions? The answer is that by ruling out algorithms that run much faster than the natural exponential-time one, the reduction gives at least some evidence that this particular approximation or crypto-breaking problem might be one of those problems for which the exponential-time algorithm is the “Book algorithm”. And indeed, just like the case for algorithms, with time people have often been able to come up with increasingly more efficient reductions.

Another way to think of this is that the goal of a theory paper is not to solve a particular problem, but to illustrate an idea – to discover a “paragraph from the Book”, if you will. The main reason we care about measures such as asymptotic running time or approximation factor is not their intrinsic importance, but that they serve to “keep us honest” and provide targets that we won’t be able to reach without generating new ideas that can later be useful in many other contexts.

Erdős’s quote holds true for theoretical computer scientists as well. Just as physicists need to believe in a “book” of the laws of nature to make inferences based on past observations, the Book of algorithms is what makes TCS more than a loose collection of unrelated facts. (For example, this is why in his wonderful essay, Russell Impagliazzo was right to neglect some “non-Book” scenarios such as \mathbf{P}=\mathbf{NP} with an n^{100}-time algorithm.) Ours is a young science, and we only know very few snippets from the Book. I believe that eventually, even statements such as {\mathbf{P}\neq\mathbf{NP}} would be seen as mere corollaries of much grander computational “laws of nature”.

[Updated 2/25 with some minor rephrasing]

FOCS 2014 videos are online

January 5, 2015

And available at this web page. At some point I would like to add a direct link to the video for each paper from the program page, but I figured that it’s best to announce this now rather than let the perfect be the enemy of the good.

Update: Paul Beame notes below that the videos, as well as other content for this and previous FOCS’s, are available at http://ieee-focs.org
