You are heading to Paris for the first time. You finally get there, and it is time for dinner: you need to pick a restaurant, and of course you would like to avoid tourist traps. Traditionally, people relied on rules of thumb, and if they were lucky enough to know someone with useful information, they could collect recommendations and use them in their decisions. Today, we rely heavily on the recommendations of previous customers (mostly complete strangers), for example when using sites like Yelp. Such sites have “changed the game” for providers: from a one-shot interaction with individual customers, in which the future cost of providing low-quality food at a high price is negligible (due to a stream of new and naïve consumers that keep coming, unaware of the experience of prior customers), to a repeated interaction with a community of users in which good service is rewarded by future business. The existence of mechanisms for information sharing within the community creates an incentive for restaurants to improve and provide better service.
Now, consider the system we use to review papers submitted to conferences. All top conferences have acceptance rates well below 50%, meaning that the majority of papers are rejected. Many (if not most) of these papers will be submitted to later conferences, sometimes even to the same conference in the following year. Each time a paper is submitted to a conference it (usually) gets a fresh set of reviewers, completely unaware of prior submissions of that paper and of the comments it received in previous reviews. In many cases these previous reviews point out issues that are important and yet sometimes overlooked by the new reviewers. Such a loss is clearly inefficient, and the current system creates an incentive to resubmit rejected papers, sometimes without properly addressing the issues raised by reviewers, in order to get a “fresh roll of the dice”.
Reviewing papers is a major community effort, and as a community we should make sure that effort is not wasted. The huge reviewing burden also leaves less time to invest in each review; as a result, the quality of reviews decreases and the community suffers. We would like to propose the following system for discussion:
– A conference that chooses to participate in the system will declare in the CFP that the authors of each submitted paper are expected to attach a document listing all prior submissions of the paper to conferences, the reviews it received, and the way the authors addressed the criticism in those reviews. Authors are certainly not obligated to accept every comment and change the paper accordingly, but they should explain the reasons for their decisions in the document.
– PC members and reviewers for the conference will not have access to the document before they submit their fresh review of the paper. This ensures their evaluation is not biased by previous reviews. After they submit their reviews, the PC members get access to the document and can check whether anything in it gives them reason to change their review. If so, they submit a revision of their review (the PC chair retains access to both versions, so she can make the final call using the reviews as submitted before access to the older reviews was granted).
The improvement in quality and the reduction in reviewing load will not come from the program committee relying on the old reviews and saving the time needed to review the paper again, but rather from indirect effects of the new system:
– The new system will give authors an incentive to self-select papers more carefully before resubmitting, and thus decrease the overall reviewing burden. Additionally, resubmitted papers will be of higher quality, as authors will have a stronger incentive to address the issues raised in past reviews.
– Making prior reviews viewable to current PC members will also allow prior reviews to be evaluated over time, with the hindsight of the authors’ responses. Knowing that this will happen may indirectly increase the quality of reviews. In fact, in some communities (for example, in statistics) public reviews and rebuttals are published along with the paper (but this is a matter for a later blog post).
This proposal is in the spirit of what already happens, to some extent, when submitting a paper to a journal. Essentially, the current system for handling revisions within the same journal (but not across journals) is similar to our proposal. When revising a paper after the first round of comments, authors are obligated to address the reviewers’ comments in a document. In journal reviews, however, the reviewers of the revision are usually the same reviewers who reviewed the original submission, while that will probably not be the case when submitting to a different conference.
Moshe and Ittai
14 thoughts on “Community brain”
Another thing I would like very much: Only papers freely available online may be submitted to conferences and journals.
Thanks for your comment, Emanuele. I agree that the issue of free access to papers is very important; several previous posts on this blog have discussed different aspects of it. Please have a look at https://windowsontheory.org/2012/04/04/an-economic-perspective-on-academic-publication/ and https://windowsontheory.org/2012/07/04/interesting-new-development-in-math-publishing/.
and one more: https://windowsontheory.org/2012/03/13/encouraged-expected-and-enforced/
How about a simpler system where you recycle a constant fraction of the reviewers from one version to the next? You re-roll some dice, but also get to have some people focusing their efforts on what has been changed and improved, rather than having every reviewer start from scratch.
When a paper is rejected, authors are expected to address the issues raised in the reviews, hopefully making some of the criticism obsolete. Ideally, this means that the old reviews no longer reflect an evaluation of the new version of the paper. Taking some of the old reviews and simply using them as new reviews seems problematic, as these are reviews of an older version. In the suggested system, the purpose of submitting the old reviews is to allow the new PC to evaluate whether the old comments were adequately addressed, not to serve as the new reviews.
I did say reusing the *reviewers*, not the reviews… I agree that the latter is a bad idea! Informally, recycling some reviewers seems to happen already. It would be up to the PCs to make sure that the reviewers being recycled are those doing a good job, e.g. not too cranky or lazy. I believe there is some secret conference illuminati facilitating this already anyway, but maybe there is a more efficient and open implementation.
I’m not sure how the suggestion of reusing reviewers would work. As a PC member (or PC chair), I do not know which conferences a paper was submitted to, and certainly not who the reviewers were. As a PC chair, if someone asked me for the identity of reviewers, I suspect that I would need their permission to pass this information on (as reviewer anonymity is important).
I agree that reviewers are being reused, but this is usually because for some papers there are very natural reviewers whom everyone contacts.
Am I missing the point?
I had indeed misread your comment; recycling reviewers is much better than recycling reviews, and some of that happens anyway. Yet, I agree with Omer on the problems it raises.
It’s true that the invitation to X to become a reviewer again for a new version of a paper that X previously reviewed should be anonymous, and de-anonymized when they agree to review. It’s also true that you’d need some securely shared database of submissions to STOC, FOCS, SODA, and whatever other top conferences you want to check. I think that reviewer reuse beyond the “natural reviewers” is useful. But I don’t mean to pooh-pooh your proposal! I just think that reusing reviewers, who get to see the evolution of and improvements to the paper, is the most productive approach and should be more widespread.
I definitely agree that if reviewer reuse is possible, it could be very effective, as you point out.
I think this is a great suggestion and a well-laid-out argument for it. The tricky part, as usual, is making such a significant change (or even a one-time experiment) in our high-inertia conference system. More blog posts would be a good next step. We should also try to bring this discussion up at the upcoming FOCS/STOC/SODA business meetings.
Thanks, Vitaly, for the positive feedback. I agree that adoption of new systems is never easy. Yet, I think that this system can be rolled out gradually; there is no need for all conferences to adopt it concurrently (which is infeasible). This is unlike some other suggestions to completely reshape the way we handle conference vs. journal submissions. If one of the leading conferences (say, STOC or FOCS) decides to adopt this approach, I think it is quite unlikely to experience a big drop in quality submissions, so the risk is not high while the benefits are clear. Once one of these conferences adopts the system, other conferences would hopefully follow.
I also believe that reviewers should be made somehow more accountable for their reviews. More often than not, the reviews I get are quite useless. They do not help me improve my paper further, and often give the feeling that the reviewer just did not spend enough time reading/understanding the paper. What makes this worse is the following statement that one gets from the PC Chair: “the reviews do not reflect the real reason why the paper was rejected and what happened in the discussions and PC meeting.” If a paper was rejected merely due to competition (or the taste of particular PC members regarding the problem the paper studies), is it really fair to ask the authors to address the review comments for such a paper? In addition, I personally believe that a system like this can be utterly painful to the authors. Your proposal to ask authors to do this extra work simply transfers more work to the authors to save/reuse some “community effort.” Why is that useful? The authors themselves are part of the community, and I do not see how this saves “community effort” or workload as such. Today, as authors, we do a lot of work on our papers to present them in a disseminable form to the community. Is it really fair to add this extra, if I may say so, “annoyance”?