Note: This will be my final diatribe on the issue of modern scientific publishing for a little while. While I think the process is definitely flawed and in desperate need of fixing, I also think there are other issues that deserve our collective attention much more urgently. I will get to those in upcoming posts.
After a dreadfully long hiatus (due to being on the job market), I have finally returned to the world of blogging.
I've decided to get back to the topic I left hanging a few months ago: the so-called publication "crisis" in the field of cognitive neuroscience. Regular readers of this blog already know my opinions on this problem, but a little bit of time has added maturity and nuance to my thinking on the matter.
In my travels these past few months I've had the opportunity to discuss the flaws of the modern review process at length with many well-respected colleagues. I've heard the good, the bad and the in-between about life on the front lines of scientific publishing. Some say it's way too difficult to publish today (or dreadfully biased). Others say it's too easy. But almost everyone agrees that it's a real pain in the ass.
I don't want to spend too much time on the status of the current system. Dwight Kravitz & Chris Baker have published a very thorough background & review of this topic that has garnered a lot of attention. I highly recommend it for anyone interested in this topic, if only to get a good understanding of how we got to where we are today.
So for our purposes today, let's consider some anecdotal stories relayed to me by colleagues.
The Bad
When given the choice between hearing "good news" and "bad news" I always accept the bad news first. So let's start by considering an example of where the process falls apart (from a colleague working in Ireland).
Submitted a manuscript to Journal X [Note: obviously not the real journal name] in April 2010; received an email in mid-June that we expected to contain the reviews, but it was in fact an email from the editor saying they would send it out to review. In mid-October 2010 (1 day before it reached 6 months since submission!) we received an apologetic email and a decision: invited to resubmit based on the reviewer comments. I say reviewer because we only had one reviewer; the other had broken contact with the journal. By mid-August 2011 (6 months since re-submission, a year and a half since the initial submission), it appeared that the initial reviewer whose comments we'd addressed had also broken contact with the journal, and we once again only had a single new review. At this stage they just accepted the article pending revisions based, again, on the single reviewer.
Two other slightly irksome reviews both came from Journal Y. In one case a reviewer told us we needed to discuss the role of the basal ganglia although this was a cerebellar paper; in the next round of reviews a different reviewer asked "why are you discussing the basal ganglia in a cerebellar paper?" The second, more annoying one came from a review of an anatomy paper in which I used one of the most highly cited articles in my field (Kelly and Strick, 2003) to justify my regions of interest. The reviewer said Kelly and Strick were wrong and therefore we were wrong!
This highlights several key problems brought up again and again in my conversations with colleagues (and discussed at length by Kravitz & Baker).
First, there's the needless lag in getting an article turned around. Fifteen years ago, when manuscripts had to be physically mailed to journals, a 1-2 year review process was understandable. However, we now live in an era of online submissions. Yet the timelines of many journals are still needlessly long and the process arduously slow.
Second, mid-tier and low-tier journals often find themselves scrambling to find good reviewers for papers. I've had many colleagues tell me about a paper getting in with only a single reviewer (for non-scientists, the norm is 2-3 reviewers). In many of these cases, the reviews also tend to be outsourced to graduate students or post-docs.
Now don't get me wrong, this is an incredible training opportunity (and one that I am thankful to have started early in my career). However, without good oversight of an outsourced review by the principal investigator who was originally solicited, a rookie mistake can kill a reasonably good study's chance of acceptance at a journal. That chance is worsened when the rookie is the only reviewer of a manuscript.
Finally, the critiques are often random, arbitrary, and sometimes not related to the manuscript itself at all. Sadly, it is fairly common these days for a paper to get rejected because of broader disagreements in the field rather than the quality of the study itself. We work in a large field with many big theoretical disagreements. As a result, all of us at one time or another have been collateral damage in a fight amongst the larger egos in our field. Yet this only serves to stifle the communication of ideas and results, rather than evaluate the true merit of any individual study.
The Good
But contrary to this (and the arguments raised by Kravitz & Baker, as well as many other scientists), there are many times when the system actually works well. Contrast the previous story with one communicated by a friend working here in Pittsburgh (paraphrased here and not her actual quote).
I submitted a paper to Journal Z and after a month I got comments back that were very positive and indicated a strong willingness to accept the paper. The reviewers were very collegial, but brought up an ideal control experiment that would bridge my work with another theory, as well as make a better overall argument for my hypothesis. The editor had also read the paper and provided his own comments. He coalesced the reviewers' comments into a specific set of recommendations and even offered suggestions for how to modify the design of the control experiment so as to make it a tighter/cleaner study. In the end, I felt like they were all invested in pushing the project forward.

In my opinion this is a textbook example of how the process should work in the first place: fast turnaround, constructive reviews and a well-managed pipeline focused almost exclusively on the details of the study at hand. So while the system has its flaws, it's not broken everywhere.
So What Gives?
Now there are two big differences between the Bad Story and the Good Story. The journals in the Bad Story are all neuroscience journals, while the Good Story comes from a psychology journal. So differences in fields may play a key role.

But I think there's something deeper at play here. Specifically, differences in editorial oversight.

Let's face it, most of us can agree that in many cases, editors at some journals appear to be falling asleep on the job. But let's not be too harsh in this judgment. I can understand why it happens in many cases.
Big publishing houses like Elsevier want to make as much money on free labor as possible. So in many cases, not only are professional scientists giving away free professional labor as reviewers, we're also serving as under-compensated editors. Editors are overworked, underappreciated, and have little time to manage their own research careers, let alone shepherd the work of others through the editorial process.
In a context like this, I can understand the sub-optimal feedback and oversight in the editorial process.
But what about higher impact journals with full-time professional editors? Well, in my experience, those are just as poorly managed. After numerous experiences submitting to journals with full-time, professional editors, I have never received the type of constructive feedback and oversight described in the Good Story above. In fact, I don't really believe that many editors at these journals even read the articles at all.
Of course, editorial oversight is just one (albeit significant) part of the problem. Another part stems from the demand for reviews. Reviewers are under increasing pressure to turn around reviews on faster timelines, often with little time to really digest any particular study. This is on top of the increased quantity of reviews required to keep pace with the growing number of journals.
Scientists push to get out as many papers as they can, and this increased volume of manuscripts requires even more reviewers. Eventually the demand gets so great that review quality drops precipitously.
The military has a word for this: "clusterfuck."
The Unasked Question
So this is the state of our current publishing process in the field of cognitive neuroscience. It's driven a lot of professional scientists to push for a change. To fix the broken pipeline.
Many argue that we need to overhaul the entire process itself. I view this opinion fondly, although I might hesitate to call myself a "proponent". Some argue for a completely open access model with unblinded reviews. Others want a centralized database for submitting unreviewed manuscripts in order to crowd-source the evaluation of each study's merit.
If you ask five cognitive neuroscientists their opinion on the current publishing process, you'll get ten different recommendations.
But there's one question I think we're all neglecting and it's the most important question of all if we are going to try and address this so-called crisis.
What the hell do we want from the publication process?
Do we want to go back to the more traditional view where a paper is a completed treatise on a research topic? Decades ago this was the central view of what a scientific paper should be. Researchers would spend years investigating a single question, run all possible control experiments (dozens or more), and after all those years carefully construct a single paper that eliminates all other possible explanations for the effect of interest. These papers would be long, carefully constructed, and nearly definitive.
This is essentially the tortoise model of scientific advancement. The timescale of publishing is very long, but the output is reliable and consistent. Here the key measure of a research program's efficacy is the long-term citation record of a given paper. You might only publish one empirical article every few years, but it would be a definitive work.
The alternative is the hare model of scientific advancement. Here publications reflect a snapshot of a research program's status. Only one or two experiments are reported in a paper (although thoroughly analyzed), and repeated publications tie together over time to tell a larger story. This is a much more accelerated model of scientific progress. Articles are less a treatise on a core theoretical question and more of a "status update" on a research program.
Over the last couple of decades, the push in cognitive neuroscience (and other fields) has been to move away from the slower, quality-focused model toward the faster, quantity-based model. There are many reasons for this, but you can primarily thank the increasingly heated "publish-or-perish" academic culture for the change.
Right now we are, unfortunately, stuck in a hybrid paradigm where we have the expectations of the tortoise model with the demands of the hare model. In my opinion, this bipolar paradigm is what is driving researchers crazy these days.
So I return to my original question. What do we want from scientific publishing? We can't follow both the tortoise model and the hare model; realistically, we can't have the mechanics of one and the expectations of the other.
Unfortunately, there is no easy answer to this. I see equally valid pros and cons to either. This is also a decision with profound implications well beyond how we structure the peer-review process. Departments and universities will have to completely revise how they evaluate the progress of faculty and students. The same goes for granting agencies.
However, these changes are necessary. It's just a matter of deciding what the expectations need to be.
Therefore answering this question comes down to us talking together as a field. We need to decide which model we want to use and commit to it (with all the implications that follow). Until we make this decision, we can propose as many new publishing systems as we want, but they'll end up being nothing more than intellectual thought experiments that describe the world we wish we had, rather than real ways to fix the world that we have.
I think I've talked to you about this before, but I think what I want from publishing is less Tortoise vs. Hare. I look to peer review in the publishing process to make sure that an article's conclusions fit with the data and analysis, and that one can understand what was done by reading the paper. Idealized peer review should weed out bad methodology or data. If a paper has minimal data and only hints of an effect, it should be able to get through peer review as long as the authors acknowledge this fact and don't make overstated claims. Such a paper isn't going to end up in a high-profile journal, but that can be addressed with something like the approach in the Baker/Kravitz article.
I obviously agree that the review process needs to be more efficient, but I don't think this is as simple as for-profit vs. non-profit. For example, I've frequently reviewed for and submitted a few papers to the for-profit Elsevier journal, Neuroimage. Once I accept a review, I'm asked to submit my comments in 2 weeks. I get an automatic reminder after 12 days and harassing emails starting at 14 days. If they have two other reviewers in agreement and I've taken more than 3 weeks, the editor makes a decision without waiting for me. Their online review and submission system is simple & generally works (except for slow figure uploads when submitting). At least in my Neuroimage experience, a more than 6-week turnaround is rare.
On the other hand, the Frontiers journals are non-profit, and I think you have strong opinions about the speed and quality of their process. I suspect the quality of the review process depends more on the editors than on the publishers, but one thing for-profit companies have going for them is that they don't make any money on articles under review. An efficient review process with as much online automation as possible is in their financial interest.
--Dan H
Thanks Dan!
I agree. I don't think that either the for-profit or non-profit models are ideal. Really it comes down to editorial oversight. We discussed the Kravitz and Baker paper today in journal club, and while their proposed model seemed utopian in design, it has several valid points: a) reducing the adversarial relationship between Reviewers & Authors, b) shifting oversight back to the Editors, and c) incentivizing Editors/Reviewers to provide better work. All of these are in line with my view for sure.
I still think the problem is in how we review and what we expect out of the review process itself. I'd agree that a low-powered, weak study should still be published. Hell, I think all "null" studies should be published as well. This preference leans towards the Hare model, which is fine but carries significant potential for information overload. But again... without decent editorial oversight, any changes to the system would be fundamentally useless.
--Tim