Distilled simplicity for scientific publishing

Scientific publishing is essentially a solved problem.

Research dissemination is almost free. For hundreds of years, humankind relied on printed material and postage or shipping to share its work. That hasn’t been needed for at least 30 years now. Anyone can post whatever they want on the web: a PDF, a blog article, an interactive web page, a GitHub repository, or whatever cool stuff people are cooking up these days. Put it online, and it is instantly accessible from anywhere in the world for free. A printing press isn’t needed. Shipping isn’t needed. Desktop publishing has developed so much since the 1990s that we take it for granted. Anyone can make a beautiful, readable document with free or cheap resources. There is no need for people to turn to companies to share their scientific findings. It is nearly free to reach anyone in the world with a high-quality document.

This scales just fine. The traffic is rarely significant; I posit that most research products could be disseminated effectively from a personal computer on a home internet connection. Centralized or consolidated dissemination does cost money, of course, but it is still very economical. arXiv and bioRxiv are big operations, supporting thousands of individual publications. arXiv runs on less than $5 million per year, and survives on donations. That money lets them handle about 20,000 submissions per month, and serve over 40 million downloads per month, sustainably. I didn’t see the costs broken out anywhere, but even these crude overall numbers put them at no more than about $20 per submission and a penny per download. And journals can build on this infrastructure to be essentially free, as the excellent NBDT does.
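As a rough sanity check on those figures, here is a back-of-envelope calculation sketched in Python. The budget, submission, and download numbers are simply the rounded values quoted above, treated as assumptions rather than audited accounts, and each per-item figure is an upper bound that charges the entire budget to that one activity.

    # Back-of-envelope bounds on arXiv's per-item costs, using the rounded
    # figures quoted above (assumed values, not audited accounts).
    annual_budget_usd = 5_000_000          # "less than $5 million per year"
    submissions_per_year = 20_000 * 12     # "about 20,000 submissions per month"
    downloads_per_year = 40_000_000 * 12   # "over 40 million downloads per month"

    # Upper bound: attribute the whole budget to a single activity at a time.
    cost_per_submission = annual_budget_usd / submissions_per_year  # ~ $20.8
    cost_per_download = annual_budget_usd / downloads_per_year      # ~ $0.01

    print(f"at most ${cost_per_submission:.2f} per submission")
    print(f"at most ${cost_per_download:.4f} per download")

Even charging all costs to a single activity, the per-item numbers stay small, which is the point of the crude estimate above.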

Peer review can be simple too.

There are different degrees of peer review that I appreciate. Note that these are not rankings.
(a) A spot-check to keep out nonsense articles. This is basically what bioRxiv does already. They don’t call it peer review, but it is a minimal form of it.
(b) A more rigorous assessment by people in the field to catch errors and refine presentation. Minimal extra work: just friendly, constructive review.
(c) Highly engaged review, suggesting additional experiments and analysis. Expected to result in transformative revisions.

These are distinct processes with different goals in mind, not a continuum with various shades of gray. When I submit to a high profile journal, I expect (c). I might only be 50-75% done with the study. I can invest another year of time to get it in. And I expect the process will make the paper much better, and I’m excited about that! What I find a huge waste of time is when a journal sends a paper out for review, the authors spend time and resources doing work to address concerns, and then the journal ultimately decides not to publish it. This is an aspect of peer review that eLife’s experiment addresses.

We can dispense with rankings.

Scientific journals do not need to rank papers. Ranking isn’t something the community needs, and we should distill it out of the process for simplicity.

Journals evolved to implicitly rank papers, and this has been anachronistic for decades. Historically, when papers had to be printed on a press and the hardcopies shipped, limited access to distribution was natural. Over time, some journals were more widely read and thus could provide higher impact. This led to an implicit ranking, in two stages.

First, during editorial selection of which manuscripts to send out for review. This gatekeeping function is intended to ensure that the journal, with its limited resources, publishes articles that will have high impact and thus increase or maintain its reputation as an outlet for top-quality research reports. Second, during peer review, reviewers implicitly rank which papers to support or oppose for publication, concerning themselves with the risk of elevating reports whose claims are poorly supported. Thus pre-publication gatekeeping and ranking emerged organically.

Today, it is anachronistic foolishness. There is no barrier to dissemination, as described above. Plus, the journals no longer hold the keys to potential impact. A non-peer-reviewed preprint can have more impact and citations than a dozen high profile journal papers combined. Social media posts can be a more effective advertisement than a table-of-contents email from a journal.

Moreover, pre-publication estimates of future impact are not reliable. There may be broad agreement at the extremes (excellent papers and very poor papers), but there is a broad swath in the middle where reasonable people can disagree and unforeseeable factors can affect future impact. A wise community would not spend too much time on such foolishness, nor ascribe too much influence to it. Perhaps that is easier said than done, especially as some influential institutions are desperate for quantitative measures to judge, compare, and rank the accomplishments of scientists and their home institutions. I sympathize with my colleagues that we cannot always hold ourselves above such issues, but we also needn’t indulge them.

Reviews, both long-form (e.g., Annual Reviews of …) and short-form (e.g., TINS), are excellent sources for understanding how individual papers fit into a field, and their strengths and limitations. These come some time after publication, and can provide a more reliable estimate of impact, at least to date.

Editors create value by being constructive.

Evaluation or peer review can happen outside of traditional journals through online comments linked to the original work, e.g., PubPeer. But often authors want some level of peer review prior to publication, like levels (b) and (c) described above. Editors at journals can coordinate this, including offering some level of anonymity, and they can ensure the process is fair and constructive.

Editorial gatekeeping is of dubious value. Some professional editors pride themselves on their taste and their ability to discern which papers are likely to be impactful. In their view, this gatekeeping is expert discernment and a service to the field. But how is their expertise determined? What if they’re terrible at it? How would they know? Journal reputations are sticky, and move little over time. Plus, there is a degree of self-fulfilling prophecy in publishing at a high profile journal: authors are often biased toward citing articles that appeared in high profile journals, rather than judging purely on the scientific content.

There are still limited resources for peer review, namely the peers themselves. Person-hours for reviewing manuscripts are significant; they are given freely, but are an easily exhausted resource. Although dissemination scales well, peer review does not. Editor time is limited as well, whether the editors are professionals or working scientists. It takes time and expertise to recruit reviewers and to referee a constructive review process.

Editors can generate value by being effective referees, resulting in constructive peer review that makes authors want to come back for more. Ideally they could do that for more papers, but they have to pick and choose. Selection by editorial boards of which papers to send out for review is a deliberative but imperfect process, necessitated by limited resources. That is, the de facto gatekeeping that persists is a limitation, not a feature.