Away with Page Limits on Submissions

The FOCS 2013 PC is currently working on the call for papers (cfp). Our basis is the FOCS 2012 cfp. The main change we are contemplating is getting rid of the page limit for submissions. In fact, FOCS 2012 and previous conferences already took a step in this direction. For example, the FOCS 2012 cfp says: “There is no bound on the length of a submission, but material other than the abstract, references, and the first ten pages may be considered as supplementary and will be read at the committee’s discretion.” While this was, in my opinion, a step in the right direction, I feel it did not go far enough (and it does not address all of the concerns that I mention below).

I want to emphasize that there will be no change in the proceedings version (which will still be a two-column extended abstract). The hope is that authors will work on a full version of the paper and submit it when it is already well written (e.g., ready to be posted on the authors’ web pages and on online repositories). Authors of accepted papers will still have to go through the unfortunate exercise of reformatting the paper for the proceedings version (which very few will look at).

Why the change? In essence, I think that the 10-page limit on submissions is a fiction. It is as if we have an agreement: authors write everything needed for the evaluation in 10 pages, and reviewers read these 10 pages carefully. The reality is different:

* Authors bypass the page limit by reducing line spacing, using denser fonts, shrinking the margins, moving the abstract to a separate page, moving figures and proofs into an appendix, submitting 11.5 pages instead of 10, etc. All of these make the paper more difficult and unpleasant to review (reviewers would rather skip proofs on their own than flip back and forth between the body and the appendix). Reviewing papers in their natural structure is easier.

* Reviewers do not usually read the 10 pages carefully (by the way, we never promise in the call for papers that they will read the 10 pages, but rather just state that they are not required to read the rest). On the other hand, reviewers may need to read beyond the 10 pages, as we expect them to reach some level of confidence regarding correctness. So in fact, the “legalistic approach” does not reflect reality (reviewers read both less and more than required).

* The 10-page limit gives authors an excuse to submit half-baked papers (deferring proofs, etc., to a “full version” that may or may not ever appear).

* Authors waste time reformatting their papers. I personally hate doing it, and I’m sure I’m not alone in that.

The best argument I have heard for a page limit is that it forces authors to invest some thought in their presentation and to make the paper accessible. But I’m not sure I buy it for 10 pages (whereas I might buy it for 2 pages). I think that turning a paper into a submission is usually done mechanically, and those who invest thought in it would invest thought in their presentation regardless. A paper that is well written is well suited for conference reviewing, whereas a badly written paper will usually not improve by being turned into a 10-page submission.

The change: simply remove the page limit! We will give additional guidance on how a paper should look in terms of presentation (rather than in terms of format) and will give concrete guidance to sub-reviewers about their commitments. In the end, though, authors should apply common sense in writing their papers (taking into account the limited resources of the review process), and reviewers should apply common sense in reviewing them.

My prediction: this will make the vast majority of papers easier to review. Some papers will be very hard to read (this happens a lot with 10-page submissions too). To make the PC work feasible, it is important that the PC allow itself the freedom to reject papers due to inadequate presentation (this sometimes does good for the paper and its authors, as well as the community at large).

24 thoughts on “Away with Page Limits on Submissions”

  1. Will this change apply only to the submissions, or to the published versions too?
    If so, how would this play with journal papers?

    1. Only for submissions; the proceedings are still bound by physical constraints, and, as you mention, we do not want to disturb the balance between conferences and journals. Still, given that physical proceedings are losing their importance, this question requires more thought (which is beyond the scope of the PC).

      I actually hope that this change will make some (even if modest) contribution towards the maturity of submissions, and thus may increase the chances of papers ending up in journals (or at least being made public as full versions).

  2. Because the final format of the proceedings is so different from the submission format, you are going to lose a lot of control over what subset of the results the authors choose to include in the proceedings. This strikes me as inviting gaming, for instance by packing a lot of results into a single submission, splitting off a least publishable unit from it for the proceedings, and then submitting what’s left to another conference and iterating.

    1. I disagree. Firstly, I do not think that we are losing control, or that there was much control (in the sense of enforcement) in the current system. Secondly, this is primarily a matter of ethics. Resubmitting anything that appeared in the submission version is blatantly unethical. Furthermore, anything that was emphasized in the submission version (e.g., appeared in the introduction) needs to be emphasized in the proceedings version.

      While unethical behavior by scientists is not unheard of (e.g., cases of plagiarism have been reported), going to the extreme you are suggesting is unusual and dangerous to a scholar’s reputation. In terms of enforcement, abstracts of accepted papers will be published, and resubmissions face the risk of encountering the same reviewers. Your point does strengthen the case for enforcing the dissemination of a complete (even if preliminary) version at an early stage (as discussed on this blog and others).

    2. How about allowing the PCs/reviewers to have a quick look at the camera-ready versions of the accepted papers, so that they can make sure no such thing happens? Of course, this could mean more work and time for preparing the final version, but it shouldn’t be much compared to how it helps prevent such gaming.

  3. Quoth Omer: “To make the PC work feasible, it is important that the PC allow itself the freedom to reject papers due to inadequate presentation.”

    I hope you will get back to us on how this goes. In my experience, a paper proving something substantial can get away with an awful lot of badness in writing/presentation. PCs tend to condone this under some version of the argument, “Well what’re we going to do? Reject this important result because of some minor technical errors that any theorist worth their salt can mentally correct for? Besides, these little things will get fixed in the journal version.” (And I’m not saying this argument is without merit.)

    1. This is a very valid point, and it’s not black and white (it is unrealistic to expect flawless submissions). There is always a risk in rejecting important papers. In one example, authors whom I admire submitted a paper that turned out to be very important (and even at the time I suspected as much). I argued that the community would gain from a resubmission, but I did not fight very hard, and the paper got in. It is quite possible that my approach was wrong (especially since the authors did revise). We’ll never know.

      I would say that the stronger the conference, the less tolerance for bad presentation. Also, the worse the presentation, the stronger the paper needs to be.

  4. Looks like a great idea to me: treating both authors and reviewers as adults who can figure out what to write and what to read.

    I am not too worried about the discrepancy between the submission and proceedings versions. I think the most important version is the arXiv one, and the proceedings version becomes less and less relevant with time. At this point, if the author posts on some archive, I don’t care so much what he submits as the camera-ready version. As far as I’m concerned, he may as well put photos from his last vacation there.

  5. As with the previous system, the paper submitted will not be the paper published. This is ridiculous! Due to the time constraints, one cannot expect the reviewers to check every proof very carefully. This increases the risk that a wrong result is published, and since the proofs are usually (at least partly) removed from the published version, it is harder for the community to detect the flaws. Of course, this can always happen, but at least if everybody can read the entire paper, somebody will probably detect the flaws.

    On the other hand, removing the page constraints seems to be a very good idea. We should also give reviewers more time to review the papers. And maybe remove the a priori limit on the number of papers published. We could also change the conference a bit so that people can submit during the whole year. This would be a much better system, I guess. I propose a name for it: journal.

    1. To be fair, the journal version that appears is not the one submitted (this is exactly the point). More seriously, I think that FOCS/STOC did a lot for our field, and that they are still important. But this is a bit beyond the scope of this post.

  6. It’s not clear whether this will nudge more people to post their longer versions to an online repository or on their webpages. I sometimes find it incredibly frustrating to read papers (shorter versions) with a lot of the details skipped. Why can’t we require all accepted papers to be posted to an online repository?

  7. You are of course absolutely right that a 10-page limit (or any page limit, for that matter) is completely arbitrary. However, giving authors the ability to write without bound has its own problems. (It’s not just about forcing the author to better organize the paper; it’s about making the job of the reviewers easier.)

    (By analogy, a 55 mph speed limit is completely arbitrary — so what if I drive at 57 mph?! But removing the speed limit entirely would lead to chaos…)

    I don’t see what’s wrong with an arbitrary page limit plus unbounded appendices to be read at the reviewer’s discretion. All you’ve done is to replace that with an unbounded length paper, selected pages of which can be read at the reviewer’s discretion.

  8. Personally, I think the whole “everyone is going to go nuts and write 100-page epics” argument is a canard. When you’re scrounging at the last minute to get the results in, are you really trying to pack things in like crazy? I suspect the equilibrium size might settle at something between 15 and 20 pages, which isn’t that bad.
