Reviewing and Conferences

Source: Microsoft Paint.

This piece is an imaginary dialogue between me and a friend about the process of reviewing for conferences. I’m currently working through my 6 NeurIPS reviews, and in the process have generated a lot of thoughts about my experiences.


Friend: Hey, I haven’t heard from you recently! What’s been going on, besides COVID-19?

AC: I’ve been really swamped by all these NeurIPS reviews I have to write. Well, I didn’t technically have to write them, since I volunteered to be a reviewer, but it’s really a lot of work.

Friend: Oh wow, that sounds like it sucks. Why do you have so many? I thought reviewers were usually very experienced people in their field?

AC: Yeah, and I definitely don’t have as much experience as I’d like. It’s a bit different in machine learning though. In addition to journals, conferences are also a really big thing. In the past decade, attendance and submissions at conferences have really gone up. Especially at the main conferences like NeurIPS, ICML, and ICLR, there are usually more papers submitted than good reviewers to review them. It’s a huge problem.

Friend: How long does each review take?

AC: It really depends, both on the paper itself and on my background knowledge. There are papers you occasionally come across that are “obvious rejections”, but I still hesitate to call them that, since we should always do our due diligence when evaluating a piece. It could be that you didn’t like the work because your reading of it was too superficial, especially if the work runs contrary to current scientific attitudes. Galileo wasn’t well-liked in his time, right?

For the papers I’ve had this cycle, I’ve spent anywhere between 4 and 12 hours on each, not counting the initial read. I would say this batch is of particularly decent quality, though, so for a different batch I might well have spent less time. 4–12 hours might seem like a lot, but I spend that time going through all the proofs, reading or rereading the important citations, and checking other details in the paper. I’m by no means an expert, and I’m certain my judgement is flawed a lot of the time, but I’d love for others to give my work the same attention; writing a paper isn’t easy!

Friend: Reviewing almost sounds like a full-time job! So nobody gets paid to do this?

AC: Yeah, it really has felt like another full-time job on top of research. I feel a bit bad for having done little of my usual research in the past week because of reviewing. Apart from the occasional small award for best reviewers, reviewers generally don’t get paid, and I think that might be part of the problem. Reviewers for journals often don’t get paid either; in fact, I think it’s a problem when journals profit off of submission fees but don’t pay the people who do the scientific labour for them. With journals, however, even though I imagine there is still a lot of work, the ratio of submissions to good reviewers doesn’t seem as bad. With conferences, every one of my colleagues has horror stories of reviewers who seem not to have even read their paper before accepting or rejecting it. Having your paper accepted for free might not sound so bad, but it really is a disservice to science.

Friend: So why would anybody volunteer at all?

AC: Right, that’s the thing. You don’t have to agree to review. There are certainly some good reasons why you might not review, like if you already have your hands full with mentoring a lot of people in your lab, or if you are already reviewing for a couple of journals. You might even have other personal circumstances, and that’s totally valid.

I personally like to review because it lets me be a part of the scientific process; it’s like having a central role in making sure that good-quality science gets done. Even in my short time reviewing, I’ve seen some works that I’m glad I got a chance to look at and reject, after giving feedback to the authors of course. I also like reviewing because it gives me a chance to really go in depth and read a paper closely, something I often don’t have the motivation to do for just any paper. Even this reviewing cycle, I’ve had a chance to brush up on some areas of math I’ve grown rusty with, and even learn some new tricks!

Friend: That sounds really positive! But given what you said about reviewer horror stories, it doesn’t seem like everybody is convinced by your motivations…

AC: Unfortunately not. Reviewing is a lot of work, and every hour you devote to reviewing is an hour away from your research. I think there’s also a sort of prestige problem, where making progress on your own research projects, like publishing papers, benefits you more visibly than putting in the equivalent number of review hours. That’s not to say that reviewing can’t be more beneficial in the long run, for what you learn and the new subjects you’re exposed to, but it’s hard to say. Also, even if you do choose to review, there don’t seem to be any strong negative career consequences for being a bad reviewer. Since reviews are, and should be, double-blind, basically nobody will know whether you were a good or bad reviewer.

Friend: Basically nobody?

AC: Well, I guess the area chair, a sort of local leader of the review process, will know. Recognition for best reviewers is sometimes given out, so anonymity is dispelled at some level of the hierarchy, but in any case I still think there isn’t much social-status impact in being a good or bad reviewer.

Friend: So how do we make things better? Why don’t we just pay people?

AC: We could. It’s something I’ve been thinking about, and it’s actually become a bit more plausible now that COVID has forced all the main conferences online, so the costs are much lower. Then again, registration fees are also lower, so I’m not sure there’s really more money in the budget for reviewers. If we were to pay reviewers, though, I think it would have to be on some sort of merit system, much like how top-reviewer awards are distributed now. I don’t think it makes sense to give an honorarium to just anybody who signs up to review, right? But on the margin, paying more people, and paying them more, might get more reviewers who put in a good-faith effort.

This doesn’t solve all the problems, though. Even with dedicated reviewers, there might still be a lack of reviewers with expertise in a specific area. That leaves people like me either deferring to other reviewers with the necessary expertise (if they exist) or spending a lot of extra time learning new concepts rather quickly, which is fun and enriching but might not always lead to the best review. It’s gotten a lot harder because conferences are so much bigger now.

Friend: Maybe part of the problem is having these conferences with so many submissions! Especially if there’s as much bad science as you say, it might be better to switch back to a more journal-based model.

AC: Maaaaaybe. I am sympathetic to that argument, and in general to “slower”, more careful science. The publish-or-perish paradigm is quite real, and it often leads to things being rushed out the door. Sometimes when I’m reading papers, I get the sense that we went nowhere very fast, instead of somewhere slowly. On the other hand, I can’t deny that there are parts of the conference model that I do like. For one thing, the pace of progress in ML has been extremely fast in the past few years, due in no small part, I think, to having several main venues each year for presenting your work. We’ve definitely gone a bit overboard, though; there was one paper not too long ago which showed that many of the past two decades’ advances in one area, called metric learning, weren’t actually real advances. Nevertheless, real advances still exist, like new algorithmic or theoretical ideas.

Getting to see all of these people, whether physically or virtually, is also really exciting! This week at ICML, one of the main machine-learning conferences, I had some great conversations with both relatively newer and relatively more experienced people about society and our research directions. It really makes you feel like you’re part of a global community, and that you can find sub-communities that share your values and goals. Of course, part of the reason conferences have become so big is that there has been so much more interest in studying AI. I think it’s good that more and more people are interested in and participating in the negotiation of this impactful technology.

Friend: But perhaps there is some middle ground, where you still have these big (virtual) get-togethers, but the science is much more solid.

AC: Yeah, that idea has also been floating around. Yoshua Bengio, a really famous AI researcher, proposed having previously vetted and available work be presented at conferences, rather than work newly submitted just for the conference. That way, people have more of an incentive to polish their work and really think about their ideas carefully, rather than rushing things for the conference deadline.

Maybe there needs to be some sort of hierarchy of venues, each focusing on some combination of pace and quality of science. arXiv, the online repository for papers, serves the maximum-pace role right now, because anybody can post anything onto arXiv, so much so that academic shitposting is actually a thing now. Conferences are a middle ground between pace and quality, while journals are the most extreme in terms of quality. I guess the debate now is really over where on the pace–quality spectrum conferences should lie. I don’t have any easy answers; I suppose this is something the community will negotiate over time.

Friend: That sounds very complicated, but good luck in figuring it out!

Alan Chan
PhD Student

I work on AI governance as a Research Fellow at GovAI and a PhD student at Mila.