This is a regularly updated post, last updated March 10, 2024.

In academia, for better or worse, we have what’s called a peer review system, where papers get accepted to journals, conferences, or other venues on the basis of reviews from other researchers, who ideally are subject area experts and thus qualified to evaluate the paper. The reviewers also cannot have a conflict of interest with the authors, and should not be overwhelmed with too many papers to review. That is the ideal; it is not always what happens in practice.

From my experience in the robotics academic community (and this may apply to other disciplines), there seems to be no standard definition of an “appropriate” or “maximum” reviewing load for a reviewer. This is difficult to define because different papers demand different reviewing effort; a massive journal paper requires more time and effort than a 2-3 page workshop paper. Furthermore, reviewing responsibilities can and should change as a researcher’s career progresses. Consequently, this blog post serves to share my reviewing load, and I plan to keep updating it to better track (and limit) that load.

Here’s a detailed look at my reviewing load.

Based on Publication Venue

The standard publication venues for the robotics work that I do are ICRA, IROS, CoRL, RSS, and the IEEE RA-L journal, so most of my reviewing is concentrated there. Also, the RA-L journal is a bit nuanced in that papers can be submitted there with the option of a presentation at a conference such as ICRA, IROS, or CASE, which is why some researchers write on their CVs and websites: “Paper published at IEEE RA-L with ICRA presentation option.” I am not counting such papers when I list ICRA, IROS, and CASE paper reviews.

The IEEE conferences have Associate Editors (AEs): experienced researchers in charge of recruiting reviewers for papers and later recommending acceptance or rejection to the senior editors. I served as an AE for the first time for IROS 2022.

IEEE conferences also allow for reviewer delegation, where someone officially assigned to review a paper can formally request that another researcher complete the review. I am not counting those cases toward my reviewing load.

Without further ado, here are the papers. When I list “papers in year X,” that means I reviewed them for the venue or conference that happens in year X, which might mean the actual reviewing happened the year before. For example, I reviewed ICRA 2023 papers in 2022, but I list them as “ICRA 2023” here. In other cases, such as CoRL, I review the papers the same year they are presented (since CoRL is near the end of the year). Of course, some (if not most) of the papers don’t get accepted.

  • CoRL: 4 in 2020, 5 in 2021, 3 in 2022, 4 in 2023.

  • RSS: 3 in 2023, 4 in 2024. For workshops: 2 in 2020, 2 in 2021, both for the Visual Learning and Reasoning workshop.

  • ICRA: 3 in 2020, 3 in 2021, 5 in 2022, 3 in 2023, 3 in 2024. For workshops: 6 in 2022, 6 in 2023, all for the deformable manipulation workshop that I co-organize. For workshop proposals: 1 in 2023, 2 in 2024.

  • IROS: 1 in 2019, 3 in 2020, 4 in 2021, 2 in 2022, 2 in 2023. Associate Editor duties: 8 in 2022, 5 in 2023.

  • CASE: 1 in 2018, 1 in 2019, and 1 in 2021.

  • ISRR: 2 in 2022.

  • IEEE TASE: 1 in 2022, 1 in 2023.

  • IEEE RA-L: 2 in 2021, 4 in 2022, 2 in 2023, 3 in 2024.

  • IEEE T-RO: 1 in 2021, 2 in 2022. (These are longer papers, and can be up to 20 pages in IEEE’s tiny font format!)

  • IJRR: 1 in 2023. (These papers can also be quite long.)

  • AURO: 1 in 2023. (This refers to Autonomous Robots.)

  • NeurIPS: 4 in 2016 (see my ancient blog posts here and here), though I’ve never been asked again (!). I did some workshop paper reviewing: in 2020, I reviewed 3 workshop papers for the Offline RL workshop, and in 2021 I reviewed 3 workshop papers, two for the Offline RL workshop (again) and one for the Safe and Robust Control workshop.

  • ICML: just 1 workshop paper in 2023.

Based on Year

This list begins in 2019, since that’s when I started getting a lot of reviewing requests (and also publishing more papers, as you can see from my Google Scholar account).

To be clear, and as I said earlier, these counts are how many papers I reviewed during that calendar year. For example, ICRA happens in late spring or early summer most years, but the reviewing happens the prior fall, so papers I reviewed for ICRA 2020 appear under 2019 (not 2020) in the list below. In contrast, reviewing ICRA workshop papers happens in the same year as the conference. I hope this is clear.

  • 2019: total of 5 conference papers = 1 (IROS 2019) + 1 (CASE 2019) + 3 (ICRA 2020), 0 journal papers, 0 workshop papers.

  • 2020: total of 10 conference papers = 3 (IROS 2020) + 4 (CoRL 2020) + 3 (ICRA 2021), 0 journal papers, 5 workshop papers = 2 (RSS 2020) + 3 (NeurIPS 2020).

  • 2021: total of 15 conference papers = 4 (IROS 2021) + 1 (CASE 2021) + 5 (CoRL 2021) + 5 (ICRA 2022), 3 journal papers = 2 RA-L + 1 T-RO, 5 workshop papers = 2 (RSS 2021) + 3 (NeurIPS 2021).

  • 2022: total of 10 conference papers = 2 (IROS 2022) + 2 (ISRR 2022) + 3 (CoRL 2022) + 3 (ICRA 2023), 7 journal papers = 4 RA-L + 2 T-RO + 1 TASE, 6 workshop papers = 6 (ICRA 2022), 8 AE papers for IROS 2022, and 1 workshop proposal for ICRA 2023.

  • 2023: total of 12 conference papers = 3 (RSS 2023) + 2 (IROS 2023) + 4 (CoRL 2023) + 3 (ICRA 2024), 7 journal papers = 4 RA-L + 1 IJRR + 1 TASE + 1 AURO, 7 workshop papers = 6 (ICRA 2023) + 1 (ICML 2023), 5 AE papers for IROS 2023, and 2 workshop proposals for ICRA 2024.

  • 2024 (in progress): total of 4 conference papers = 4 (RSS 2024), 1 journal paper = 1 RA-L.

Above, I don’t attach a year to the journals (i.e., I don’t say “IJRR 2023”), but I do for the conferences and workshops. Hopefully it is not too confusing.

Reflections

What should my reviewing limit be? Here’s a proposed limit: 18 papers per calendar year, where:

  • Conference papers and short journal papers count as 1 paper.
  • Long journal papers (say, 14+ pages) count as 2 papers.
  • Associate Editor papers count as 0.5 papers.
  • Workshop papers count as 0.33 papers.
  • Workshop proposals count as 0.33 papers.

As you might be able to tell from these heuristic weights, I think the research community needs a clear formula for how much each paper “costs” to review. By this measure, I have been exceeding my limit in both 2022 and 2023; a rough tally is sketched below.
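To make this concrete, here is a minimal Python sketch of the tally, using the per-year counts from the “Based on Year” list above. Treating T-RO and IJRR reviews as long journal papers, and RA-L, TASE, and AURO as short ones, is my assumption for illustration rather than a firm rule.

```python
# A rough, hypothetical tally of my reviewing load against the proposed limit.
# Weights follow the bullet list above; the short/long journal split for
# TASE and AURO is an assumption, not something stated precisely in the post.

WEIGHTS = {
    "conference": 1.0,        # conference papers
    "short_journal": 1.0,     # short journal papers (e.g., RA-L)
    "long_journal": 2.0,      # long journal papers, say 14+ pages (e.g., T-RO, IJRR)
    "associate_editor": 0.5,  # papers handled as an AE
    "workshop": 0.33,         # workshop papers
    "workshop_proposal": 0.33,
}

LIMIT = 18.0  # proposed cap, in paper-equivalents per calendar year

# Per-year counts, copied from the "Based on Year" list above.
counts_by_year = {
    2022: {"conference": 10, "short_journal": 5, "long_journal": 2,
           "associate_editor": 8, "workshop": 6, "workshop_proposal": 1},
    2023: {"conference": 12, "short_journal": 6, "long_journal": 1,
           "associate_editor": 5, "workshop": 7, "workshop_proposal": 2},
}

for year, counts in sorted(counts_by_year.items()):
    total = sum(WEIGHTS[kind] * num for kind, num in counts.items())
    status = "over" if total > LIMIT else "within"
    print(f"{year}: {total:.2f} paper-equivalents ({status} the limit of {LIMIT:.0f})")
```

With these assumptions, the sketch prints roughly 25.3 paper-equivalents for 2022 and 25.5 for 2023, both well above the proposed cap of 18.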

As of late 2022, my average reviewing time for a “standard” conference paper of 6-8 pages is about 1.5 hours from start to finish, which is much faster than my first attempts at reviewing. The length of my reviews is on the higher end; a typical review fills at least two full pages in Google Docs with the default Arial 11-point font.

I am also amazed at the effort that goes into rebuttals for CoRL and for journals like RA-L. I like the idea of rebuttals in theory, but the problem is that, as a reviewer, I often expend so much effort on the initial review that I have little energy left over to read the rebuttal.

When authors have a paper rejected and then resubmit it to a later venue, it is possible for the paper to be assigned to the same reviewers; as a reviewer, I have experienced this twice. To state the obvious: it is far better to resubmit a paper with the reviewers’ recommended improvements than to resubmit the same PDF and hope the random draw of reviewers is more favorable.

I am curious about what others in the community consider to be a fair reviewing load.