Peer review is broken. ScienceOpen summarized the situation perfectly: “Traditional peer review is biased, opaque, and non-incentivised”.
Need a recent example of the brokenness? You could ask evolutionary geneticist Fiona Ingleby. In case you missed it, a PLOS ONE reviewer advised her and her female coauthor to team up with some male biologists to improve their manuscript. (Good on PLOS for apologizing and ousting the reviewer and academic editor.)
This, of course, is an absurdly egregious instance. But we’ve all experienced the more mundane disappointments – the reviewer who devoted all of about 30 seconds to the task, or who thinks science is a contact sport and uses the review to land a sucker punch.
Definitely check out the new 5-part series on peer review at ScienceOpen, which discusses and actually implements ways we can do better. Reviews are public, and reviewers may claim academic credit. And they should get academic credit, because peer review is essential to scientific progress. For what we can do as reviewers to advance openness, see the Peer Reviewers’ Openness Initiative and join the grassroots effort!
But there’s one great benefit of out-in-the-open peer review that I haven’t seen getting much attention: the chance to learn more efficiently how to write a high-quality review.
I don’t know what your introduction to review-writing was like. I had no training. To the authors of the manuscripts that first came my way – I’m sorry. (I’d apologize personally but I don’t know who you are. And you don’t know who I am.)
Most journals provide general guidance, but we don’t learn very efficiently via general guidance. We learn through examples. Here’s a good one; here’s a poor one. Here’s another good one; here’s another poor one.
Imagine if we could go online and find thousands of open access papers in our field, in archives or collections like bioRxiv or ScienceOpen (to name a couple of favorite haunts), and alongside them reviews that have themselves been reviewed. Maybe there would be a Yelp or Amazon-style numerical star system, or a “like” function available for each review.
We’d find our way pretty quickly to reviews that our colleagues judged as the best. We’d read the lower-rated ones too, to understand what sets the highest-rated ones apart, but in a totally transparent review system we wouldn’t see anything like the worst of what currently hides behind anonymity.
Still, it would be instructive to see today’s low-quality, anonymous reviews. I don’t necessarily mean the harsh or snarky ones, though those are entertaining (unless you were the target, but even then you might get a chuckle from some like these). The poor-quality ones, whether disposed favorably or unfavorably toward the manuscript, would, in their unabridged dreadfulness, help build the case for transparent peer review.
I bet we’d find some exemplars of awfulness if a bunch of us dug into our archives. I’m happy to work with anyone who’s interested.