Editorial:
Standards for reviewing papers

Published in volume 17, issue 3, September 2007

Many years ago when I was a student, my advisor walked into my office, handed me a paper, and said "here, review this." Since I'd never reviewed a paper, I asked how. He said "simply answer five questions: What are the major results, are they correct, are they new, are they clearly presented, and are they worth publishing?" Twenty-two years later, I still think this list captures the essence of reviewing papers. I've accumulated a few more detailed ideas since then, and they are the subject of this editorial.

When I started submitting papers, I observed that reviewers were sometimes very unscientific in their evaluations. They assumed that authors intentionally wrote stupid things instead of simply making mistakes, assumed that something they didn't understand must be wrong, would criticize any paper that omitted references to their own work, and expected theoretical papers to be empirical and empirical papers to be theoretical.

When I joined conference program committees, I found surprising reasons for rejecting papers. A reviewer didn't like the authors, the results contradicted or superseded the reviewer's results, the reviewer did not like the author's advisor, and the paper was too original and creative to be "safe."

Along the way, I also collected a few memorable comments from reviews of my own papers, most of them examples of how not to review.

With the help of these negative examples, as well as many positive ones, I slowly developed my own philosophy of reviewing. A paper should be accepted or rejected based on its key results, not its presentation. Unfortunately, it is easier to criticize presentation than ideas. Reviewers must be objective: personal factors should not affect the review. If you cannot be objective, you have a bias and should not review the paper. This is hard, but objectivity is essential to the advancement of science. Perhaps most important of all, the authors worked hard and deserve our respect, even if their paper is wrong. Reviewers should never be rude while hiding behind the anonymity of the review process.

I have found that most scientists are good at reading papers and identifying their problems. What we are not always good at is translating those observations into valid assessments of the paper. Problems in papers can be categorized as technical, presentational, or problems of omission. Just as we do with software failures, I've found it helps to assign a severity level to each problem.

Technical Problems
  Minor: A mistake in the background or related work
  Moderate: Does not affect the key results
  Major: Changes the key results
  Critical: Negates the key results

Presentation Problems
  Minor: Typos, spelling, grammar
  Moderate: Makes understanding the paper harder (organization, notation, repeated grammar problems)
  Major: Prevents understanding of part of the paper
  Critical: Prevents understanding or evaluation of a key result

Problems of Omission
  Minor: Omitted background or related work
  Moderate: Not part of the key results
  Major: Missing from the key results (a proof or experiment, lack of control in an experiment)
  Critical: Must be in the paper to evaluate the result (an experimental study, etc.), or not enough results

Most problems are minor or moderate presentation problems, minor technical problems, and minor problems of omission. These should be fixed in a revision, but they should not be grounds for rejecting a paper.

Once reviewers find and categorize the problems, they should use those observations to make an overall assessment of the paper and a recommendation to the editor (or conference program chair). The first principle is that a paper should be rejected only on technical grounds; presentation problems so severe that the results are inaccessible should be considered technical problems. A second principle is that if the requested changes may not be enough to save the paper, the authors deserve to know. A third principle is that sending the authors "back to the lab" is always a major change; it will take time, and the paper will have to be reviewed again. A fourth principle is that if the paper does not contain enough results, the authors should have the opportunity to decide whether to add more or send the paper elsewhere. For example, if the paper has no validation, telling the authors that a "major revision must have a validation of the concepts" can be fairer than a flat rejection. Finally, it is important to remember that as a reviewer, you might be wrong! This is why it is very important to read all of the comments from the other reviewers; sometimes they notice things we do not.

Putting all this together yields the following decision table for making assessments of papers.

                 Technical          Presentation   Omission
Reject           Critical           Critical
Major Revision¹  Major, Moderate²   Major          Critical, Major
Minor Revision   Minor              Moderate       Moderate
Accept                              Minor          Minor

¹ An important difference between a journal and a conference is that a major revision should be grounds for rejection at a conference.

² A moderate technical problem can lead to a minor revision if the reviewer believes the editor can check the changes.
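For readers who think in code, the decision table can also be read as a small decision procedure. The following is a minimal sketch in Python, not part of the original guidance: the function name, the severity constants, and the assumption that the worst problem found in each category drives the recommendation are all mine, and per the first footnote a conference reviewer would map "Major Revision" to "Reject".

    # Illustrative encoding of the decision table above. Assumes the
    # worst problem found in each category drives the recommendation.
    NONE, MINOR, MODERATE, MAJOR, CRITICAL = range(5)

    def recommend(technical, presentation, omission):
        # Reject: a critical technical problem, or a presentation
        # problem so severe it must be treated as technical.
        if technical == CRITICAL or presentation == CRITICAL:
            return "Reject"
        # Major revision. Per footnote 2, a moderate technical problem
        # may instead lead to a minor revision if the editor can check
        # the changes.
        if (technical in (MAJOR, MODERATE) or presentation == MAJOR
                or omission in (CRITICAL, MAJOR)):
            return "Major Revision"
        if technical == MINOR or presentation == MODERATE or omission == MODERATE:
            return "Minor Revision"
        return "Accept"

    # Example: a paper with only typos and a small omitted reference.
    print(recommend(NONE, MINOR, MINOR))  # -> Accept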

I would like to acknowledge help in formulating this guidance from Richard A. DeMillo, John Knight, Lee White, Peter Denning, and hundreds of anonymous reviewers.

Jeff Offutt
offutt@gmu.edu
1 July 2007