Editorial
Published in volume 26, issue 3, May 2016
This issue presents three new inventions in software testing.
A major research topic in software engineering is that of fault localization, that is, finding faulty code in failing software.
Probabilistic reasoning in diagnosing causes of program failures, by Junjie Xu, Rong Chen, and Zhenjun Du, invents a new graph to model possible faults probabilistically.
(Recommended by Atif Memon.)
The second paper, Behaviour abstraction adequacy criteria for API call protocol testing, by Hernan Czemerinski, Victor Braberman, and Sebastian Uchitel, invents new criteria to test whether APIs are used correctly.
Specifically, the criteria measure the extent to which software that uses the APIs conforms to the expected protocol, such as whether the methods are called in valid orders.
(Recommended by Hasan Ural.)
The third paper in this issue addresses the difficult oracle problem.
Predicting metamorphic relations for testing scientific software: A machine learning approach using graph kernels, by Upulee Kanewala, James M. Bieman, and Asa Ben-Hur, invents a technique to use machine learning to predict metamorphic relations, which are used to create oracles in software for which correct behavior is unknown.
(Recommended by TY Chen.)
How to Write an Effective “Response to Reviewers” Letter
A well-written response letter is critical when resubmitting a paper to a journal. One key is to be proactive. As authors, we’re happy that the journal did not reject the paper, but we always find comments that are annoying, frustrating, or downright insulting. Authors even react negatively to comments that are valid and correct. But as I discussed in my last editorial [1], our goal is not to argue with anonymous reviewers, but to publish a paper. Indeed, authors, reviewers, and editors all want the same thing: to publish good papers. Even the worst reviews can help us write better papers.
A response letter is the authors’ chance to assert proactive control over the process. Don’t try to convince reviewers they were wrong; that’s the path to rejection. The goal is much simpler: The response letter should convince the reviewers that you modified the paper perfectly before they even look at the revision.
Sadly, many reviewers decide whether to accept or reject a paper in the first five minutes they consider it. Then they spend the rest of their time looking for reasons to support their initial assessment. (Please note that I am definitely not advocating that approach. It is anti-science and destructive, but is unfortunately common.) The response letter shapes the first impression in that crucial first five minutes. I’m suggesting some ways to not blow it.
Below is a list of “dos” and “don’ts” for writing effective response letters.
DO this:
- Recognize that the reviewers are volunteers who used their time to work hard on your paper. Thanking the reviewers for their time is not just good manners; it helps keep the communication positive and productive.
- Respond to every single comment. Even trivial typos should be acknowledged. That not only helps ensure you don’t miss something, but tells the reviewers and the editor that you were careful and diligent.
- For your convenience and the reviewers’, it helps to number each comment. If you can use different fonts or formatting for the comments and your replies, the response letter will be easier to read and evaluate.
- Keep your responses short, direct, and to the point. This is no place for philosophy or speculation. A simple “Done.” is a terrific response, as is “Please see our response to comment 7 from reviewer 1.”
- Always accept responsibility. When comments are wrong, take ownership by suggesting the reviewer missed the point because the presentation was unclear.
- If you cannot, or choose not to, make the requested change, carefully explain why not. First check the editor’s summary to see if this is listed as a required change. Note that some reviewers will interpret an explanation of “this change would take too much effort” to be another way of saying “I am too lazy.” And be aware that the reviewer may take this response as a reason to reject the paper or ask for another revision.
- Make sure to highlight new material, and tell the reviewers exactly where it is. Be sure to check the page numbers after all changes are finished, because the new material may move during editing.
- Most journals (including STVR) send all reviews and responses to all reviewers. So if two reviewers contradict each other, it’s okay to point that out explicitly. Politely. For example, “reviewer 2 asked for just the opposite, so we are trying to split the difference by ...”
- If you change authors during a revision, be sure to inform the editor. Adding an author is usually non-controversial, but removing an author is. Most journals will ask for an email from the former author to ensure that the change was handled in an open and professional manner.
DON’T do this:
- Don’t insult reviewers. They may have made a silly mistake, but reviewing is hard work and we all make mistakes.
- Don’t accuse reviewers of incompetence. Maybe they were. Or maybe the real point is that your writing could only be understood by 10 or 15 specialists in a specific area. Journals don’t usually want to publish papers that are not accessible to a wider audience than that.
- If you really think a reviewer is biased, think twice. Authors suspect bias at least 10 times more than it occurs. But if you really think it’s a problem, address a polite note to the editor and ask him or her to consider the possibility.
- Don’t ignore comments you don’t know how to handle.
- Don’t ignore comments you can’t understand. Send a note to the editor and ask for a clarification.
- Don’t try to pretend you handled a comment when you really didn’t. Most reviewers (and editors) will notice, and when they do, you have just lost all credibility.
I want to acknowledge Stephen Schach, my most proactive co-author, for teaching me most of these tactics.
[1] Jeff Offutt. How to revise a research paper (Editorial). Software Testing, Verification and Reliability, 26(2), March 2016.
Jeff Offutt
George Mason University
offutt@gmu.edu
17 March 2016