# Rebuttal tips

Based on my own experience writing rebuttals, which I learned from Yejin Choi and Noah Smith.
- Do a first read:
  - Put the reviews in a Google doc and start color-coding parts of each review: I like to highlight positive snippets in green, negative ones in red, and other addressable parts in orange.
  - Identify any quick data analyses that you can start running (e.g., correlations, agreement numbers, etc.).
  - Draft an angry response. Then, sleep on it!
- Drafting the rebuttal:
  - Identify each of the concerns that the reviewers had, and make sure that each has a dedicated bullet point.
  - If there are shared concerns among reviewers, you can use the general comments box. Don't forget to point to it in the responses to individual reviewers too.
  - Be concrete when addressing comments. For example, if they complained that some measure wasn't used, say that you'll add it and, if feasible, give an example of that measure to show that you can run it.
  - Back up your points with numbers, citations, or even what other reviewers said. You can re-use citations from your paper, or introduce new ones to make your points.
  - Try to see things from the reviewer's perspective (this is a hard one). Try to understand what the reviewer values, what subfield they care most about, etc. Then you can appeal to what they value by pointing out how your paper works toward their own goals.
- Finetuning the rebuttal:
  - Be polite and considerate. You too are a reviewer, so imagine what tone of rebuttal you would want to read.
  - Promise and give clarifications instead of correcting the reviewer. I like to use the phrasing "we'll clarify _[something that we already wrote in the paper]_ in the final version," because chances are that if they missed that point, any other hasty reader will too, so in the end the clarification can help your paper.
  - Start off by thanking them, and if you can, quote something positive the reviewer said.
    This serves two purposes: (1) it reminds the reviewer that they did like some aspects of your paper, and (2) it shows the area chair and other reviewers that this reviewer liked something about the paper. Note: phrasing this is tricky.
  - Don't hesitate to use formatting in your rebuttal. I like to bullet and italicize the main points that I'm responding to, and I sometimes bold things that are really important.
  - When making headers for each point that you're responding to, the header should summarize the rebuttal point, not the weakness/question. For example, say "*Clarifying the statistical significance of our findings*" instead of "*Are the findings significant?*". Again, this is because you're writing for the AC/meta-reviewer as well, and you want them to see at a glance that your work is valid, not be left with questions about its validity. However, keep the phrasing and words similar to the weakness/question so that the reviewer can easily identify which point you're responding to.
  - Don't forget to thank them for suggesting citations, pointing out typos, etc., and say you'll address them.
- Writing to the area chair (AC) / meta-reviewer:
  - Sometimes reviewers have biases against the paper. One particular reviewing bias that creeps up a lot is what ARR calls ["lazy thinking"](https://aclrollingreview.org/reviewertutorial#6-check-for-lazy-thinking), which is often worth noting to the AC. Try phrasing a respectful note to the AC, highlighting your contributions, why the work you did is not actually obvious, etc.

Other great resources for writing rebuttals:

- _[How we write rebuttals](https://deviparikh.medium.com/how-we-write-rebuttals-dc84742fece1)_ by Devi Parikh, Dhruv Batra, and Stefan Lee.
- _[How to write an author response to *ACL/EMNLP reviews](https://docs.google.com/document/d/1mt8aYM88Jj5qkep1xYC5vj0wBlbX2u6gdxhf_puaiQI/edit)_ by Noah Smith.

Thanks to Saadia Gabriel for comments on this, and to all my co-authors who have helped me write rebuttals.
### Note on getting discouraging reviews

We all get bad reviews sometimes, whether they are overly critical, poorly justified, or just not constructive. Unfortunately that happens. I just want to share that some of the papers I've gotten the worst reviews on, which got rejected on the first submission, ended up being some of the most impactful papers of my career. The lesson is that reviews can sometimes provide a harsh reality check on how people perceive our paper, which can give you the impetus to make the changes that make the paper amazing. Here are some example (hopefully inspirational) stories.

- [Power and Agency Gender Biases in Movies](https://www.aclweb.org/anthology/D17-1247). This was my first first-author paper as a PhD student, and I was really excited about this project on measuring differences in how men and women were portrayed with respect to power and agency in movie scripts. But our first version, a long paper submitted to ACL 2017, did not get good reviews at all. Reviewers criticized it for its unsurprising conclusions, as well as for not belonging in an NLP venue (this was/is a common critique of more socially oriented work; people assume it should go to non-NLP venues because it "lacks technical novelty"). We learned our lesson from this rejection (mostly my advisor Yejin knew the lesson; I didn't understand much at the time), turned it into a short paper, and emphasized the novelty (our connotation frames of power and agency). It got accepted at EMNLP 2017. And since then, this work has had quite some impact in and beyond NLP: it has been used by many computational social science folks (e.g., analyzing [birth stories](http://dx.doi.org/10.1145/3359190), [dehumanization](http://arxiv.org/abs/2003.03014), [racism in textbooks](http://dx.doi.org/10.1177/2332858420940312), etc.) and has even been included in the [third edition of Dan Jurafsky's *Speech and Language Processing* textbook](https://web.stanford.edu/~jurafsky/slp3/21.pdf).
- [Racial Bias in Hate Speech Detection](https://www.aclweb.org/anthology/P19-1163.pdf). This paper was initially submitted as a short paper to NAACL 2019, but rejected. One (or more, I don't quite remember) of our reviewers believed that it was fine that we uncovered racial biases in hate speech detection, but felt that we should have found a way to mitigate them as well (this was/is a common critique of bias papers, where reviewers are not satisfied with work that "*just*" shows that things are biased, which is "*too obvious*"; they want solutions). We begrudgingly thought about ways to mitigate the bias, and, as we explored and ruled out possible ML-based approaches (e.g., adversarial unlearning of demographics, which we later showed is not that effective for dialect-based racial biases; [Zhou et al. 2021](https://aclanthology.org/2021.eacl-main.274/)), we decided to try a human-oriented mitigation strategy: telling the annotators that a tweet is likely in African American English before they label it for toxicity. That turned out to work well at lowering the bias! And as such, when we re-submitted our short paper to ACL, we got *glowing* reviews, and all three reviewers nominated it for best short paper!! We didn't end up winning, but the nomination alone was enough to feel vindicated after the initial rejection.