Distilled simplicity for scientific publishing
Scientific publishing is essentially a solved problem.
Research dissemination is almost free. For hundreds of years, researchers relied on printed material and postage or shipping to share their work. That hasn't been needed for at least 30 years now. Anyone can post whatever they want on the web: a PDF, a blog article, an interactive web page, a GitHub repository, or whatever cool stuff people are cooking up. Put it online, and it is instantly accessible from anywhere in the world for free. A printing press isn't needed. Shipping isn't needed. Desktop publishing has developed so much since the 1990s that we take it for granted. Anyone can make a beautiful, readable document with free or cheap resources. There is no need for people to turn to companies to share their scientific findings. It is nearly free to reach anyone in the world with a high-quality document.
This scales just fine. The traffic is rarely significant. I posit that most research products could be disseminated effectively from a personal computer on a home internet connection. Centralized or consolidated dissemination does cost money, of course. Still, it is very economical. ArXiv and bioRxiv are big operations, supporting thousands of individual publications. ArXiv runs on less than $5 million per year, and survives on donations. That money lets them handle about 20,000 submissions per month, and serve over 40 million downloads per month, sustainably. I didn't see it broken out anywhere, but even with these crude overall numbers, the costs work out to at most about $20 per submission and a penny per download, and less in practice, since the same budget covers both. And this infrastructure can be used by journals, such as the excellent NBDT, to make them essentially free.
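For the curious, here is a minimal back-of-envelope sketch of that arithmetic in Python, using only the rounded figures quoted above; the inputs are approximations, not arXiv's actual accounting.

```python
# Back-of-envelope estimate of arXiv's per-item costs, using the rounded
# figures quoted above (approximations, not actual accounting).
annual_budget = 5_000_000              # dollars per year (stated as an upper bound)
submissions_per_year = 20_000 * 12     # ~20,000 submissions per month
downloads_per_year = 40_000_000 * 12   # ~40 million downloads per month

# Upper bounds: attribute the entire budget to one activity at a time.
cost_per_submission = annual_budget / submissions_per_year
cost_per_download = annual_budget / downloads_per_year

print(f"At most ~${cost_per_submission:.2f} per submission")  # ~$20.83
print(f"At most ~${cost_per_download:.4f} per download")      # ~$0.0104, about a penny
# True per-item costs are lower, since the same budget covers both activities.
```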
Peer review can be simple too.
There are different degrees of peer review that I appreciate. Note that these are not rankings.
(a) A spot-check to keep out nonsense articles. This is basically what bioRxiv does already. They don’t call it peer review, but it is a minimal amount.
(b) A more rigorous assessment by people in the field to catch errors and refine presentation. Minimal extra work– just friendly, constructive review.
(c) Highly engaged review, suggesting additional experiments and analysis. Expected to result in transformative revisions.
These are distinct processes with different goals in mind, not a continuum with various shades of gray. When I submit to a high profile journal, I expect (c). I might only be 50-75% done with the study. I can invest another year of time to get it in. And I expect the process will make the paper much better, and I'm excited about that! What I find a huge waste of time is when a journal sends a paper out for review, the authors spend time and resources addressing the reviewers' concerns, and then the journal ultimately decides not to publish it. This is an aspect of peer review that eLife's experiment addresses.
We can dispense with rankings.
Scientific journals do not need to rank papers. Ranking isn't something we need, and we should distill it out of the process for simplicity.
Journals evolved to implicitly rank papers, and this has been anachronistic for decades. Historically, when papers had to be printed on a press and hard copies shipped, access to distribution was naturally limited. Over time, some journals were more widely read and thus could provide higher impact. This led to an implicit ranking, in two stages.
First, during editorial selection of which manuscripts to send out for review. This gatekeeping function is intended to ensure that the journal, with its limited resources, publishes articles that will have high impact and thus increase or maintain its reputation as a journal with top-quality research reports. Second, during peer review, there is an implicit ranking of which papers to support or oppose for publication. Reviewers concern themselves with the risk of elevating reports with claims that are poorly supported. Thus pre-publication gatekeeping and ranking emerged organically.
Today, it is anachronistic foolishness. There is no barrier to dissemination, as described above. Plus, the journals no longer hold the keys to potential impact. A non-peer-reviewed preprint can have more impact and citations than a dozen high profile journal papers combined. Social media posts can be a more effective advertisement than a table-of-contents email from a journal.
Moreover, pre-publication estimates of future impact are not reliable. There may be broad agreement at the extremes (excellent papers and very poor papers), but there is a wide swath in the middle where reasonable people can disagree and unforeseeable factors can affect the future impact. A wise community would not spend too much time on such foolishness, nor ascribe too much influence to it. Perhaps that is easier said than done, especially as some influential institutions are desperate for quantitative measures to judge, compare, and rank the accomplishments of scientists and their home institutions. I sympathize with my colleagues that we cannot always hold ourselves above such issues, but we also needn't indulge them.
Reviews, both long-form (e.g., Annual Reviews of …) and short-form (e.g., TINS), are excellent sources for understanding how individual papers fit into a field, and their strengths and limitations. These come some time after publication, and can provide a more reliable estimate of impact, at least to date.
Editors create value by being constructive.
Evaluation or peer review can happen outside of traditional journals through online comments linked to the original work, e.g., PubPeer. But often authors want some level of peer review prior to publication, like levels (b) and (c) described above. Editors at journals can coordinate this, including offering some level of anonymity, and they can ensure the process is fair and constructive.
Editorial gatekeeping is of dubious value. Some professional editors like to pride themselves on their taste and ability to discern which papers are likely to be impactful. In their opinion, their gatekeeping is expert discernment, and a service to the field. But how is their expertise determined? What if they’re terrible at it? How would they know? Journal reputations are sticky, and move little over time. Plus there is a degree of self-fulfilling prophecy in publishing at a high profile journal. Authors are often biased to cite articles that appeared in high profile journals, rather than making judgements purely on the scientific content.
There are still limited resources for peer review, namely the peers themselves. Person-hours for reviewing manuscripts are significant. They are given freely, but are an easily exhausted resource. Although dissemination scales well, peer review does not. Editor time is limited as well, whether the editors are professionals or working scientists. It takes time and expertise to recruit reviewers and referee a constructive review process.
Editors can generate value by being effective referees, resulting in constructive peer review that makes authors want to come back for more. Ideally they could do that for more papers, but they have to pick and choose. Selection by editorial boards of which papers to send out for review is a deliberative, but imperfect, process, necessitated by limited resources. That is, the effective gatekeeping that persists is a limitation, not a feature.
Thanks for sharing your thoughts on this topic! (And also thanks for your other blog posts, which I always find interesting.)
There is one point with which I do not fully agree, that "journal ranking" is dispensable. Your idea, if I understood you correctly, is that acceptance by the scientific community and natural dissemination via bioRxiv or other open avenues will be sufficient to show the value of a paper, without pre-publication peer review to "rank" papers. One problem is that, without critical peer review, overstated claims in papers would not be penalized, or only with a long delay. Unsubstantiated claims are a problem even now, despite peer review (I have seen it often enough that authors and reviewers fight about how the title or abstract and their claims do not reflect the experimental findings), and it would become worse without it. A second problem is that those who advertise their research more aggressively will be more broadly read. Having journals (or something else) rank research papers counterbalances this bias.
Yes, my opinion on this is not universally shared. Many thoughtful scientists that I deeply respect, such as yourself (I like your blog posts too), still want pre-publication ranking.
It can still be immediate. Contemporaneous review pieces (e.g., Previews, or News & Views) can provide independent evaluations of papers at the time of publication, and often these do indeed critically discuss weaknesses. The eLife process of publishing peer review notes is another form of contemporaneous discussion, and Nature Communications does something like this too; you discussed this in a blog post I enjoyed (indeed the reviews of the non-telecentric 2p paper were interesting, to say the least!). Any of these is much more informative than "is it in Nature, Nature Neurosci, or J Comp Neuro?" One of the most impactful papers on higher visual areas in mice was published in J Comp Neuro, and I bet even those of us in the field who have published work in higher profile journals would agree that Wang and Burkhalter 2007 had more impact. This is just one example. There are tons. It is bad for science to use journal titles as a proxy for impact. I understand why it is done sometimes, because it is easy, but scientists do not need to endorse it.
High profile journals have no special access to rigor. Uncritical acceptance of overstated (or misstated, or wholly unsubstantiated) claims is indeed an issue, even with current scientific publishing practices. Authors should always be held accountable for the nature of their claims in reference to their evidence, no matter what the publishing venue is. This standard of rigor should not be reserved for the highest profile journals.
Of course, the rigor of peer review is not uniform. There are indeed journals, predatory journals, where the checks are loose. Not all peer review is equal. One check on quality could be the editorial board. Those people put their names and reputations on the line as an endorsement of the type of peer review that happens at a journal, no matter how high or low profile the journal is.
Your last point, about how "those who advertise their research more aggressively will be more broadly read," is not something I address. University PR offices vary in their prowess, and scientists vary in how boldly they might push their work, traveling and presenting it broadly. Again, reviews can provide context. That said, writing contemporaneous and later reviews is a chore, so it is no easy solution.
The Wang and Burkhalter paper is a very nice example, one of many.
During the last year, I had been planning to write more "contemporaneous reviews" via blog. However, I found myself making drafts without finishing much (partly because it takes additional time to write a review such that it does not make new enemies of the authors…).
But there are so many papers that I feel deserve to be highlighted. Just today I read a cool paper on online motion correction based on OCT (and found it funny to read your name in the acknowledgements): https://opg.optica.org/ol/fulltext.cfm?uri=ol-48-14-3805&id=532749
It really motivated me to read up on OCT, which I feel I had overlooked before.
It's cool that you found that work and like it too. It was an idea we were thinking about. I reached out to David Boas because of his OCT and 2p imaging experience, and of course he had already thought of it and done a lot of groundwork. Jianbo was part of that early work. Then Stephen Tucker hopped on and did fantastic work on it. He's excellent and I like his style. I discussed the work with them along the way, and we provided funding (my NSF grant is acknowledged), but the engineering was all Stephen, David, and others in David's group. I think they laid out a nice foundation to build upon if anyone is interested.
Also, I relate to your comment about the time it takes to write reviews. I hope there are ways we can find to decrease the energy required.