Sunday, October 28, 2012

How to make a zombie brain

Hello kiddos. It's that time of year again for zombie neuroscience to rise from the ground and rear its viciously thoughtful head.

Last year's series of posts on the zombie brain that I did in collaboration with Brad Voytek over at Oscillatory Thoughts was a huge hit.  Since then we've managed to turn this thought experiment into a book deal with the amazingly supportive folks at Princeton University Press.  Brad and I have also worked with the TED Education group to do a two-part series of educational shorts on teaching neuroscience using zombies.  You can watch both videos over at TED-Ed or just watch the first one here.



I have to say, the TED Ed group is absolutely amazing to work with.  If you get a chance to do a project with them, I highly recommend it.

Okay, back to the main post.  A key part of the talks that Brad and I give about the zombie brain is our 3D model of how it should look.  This is a simulated brain image showing where lesions would likely have occurred and how they relate to various zombie "symptoms."

Check it out.  The zombie brain is the one on the right.



Now I often get asked how we actually made this model zombie brain.  I mean, we can't really get a zombie and image it, right?  Right???

Well, until the zombie apocalypse gets us sufficient specimens to scan, we had to do something else.  We had to take a human brain and morph it into what the zombie brain should look like.

So this post is for all you imaging geeks out there who want to know how to make your own model of the zombie brain.

For this you'll need 7 things.  All files can be downloaded from my website.
1. A template brain of a normal human: HERE 
2. A segmented image of said human brain: HERE 
3. A list of voxel IDs for the segmented map: HERE 
4. A list of regions you wish to "lesion": HERE 
5. A routine for extracting regions from the segmented map: HERE 
6. A routine for merging region files: HERE 
7. The core lesion loop script: HERE
To follow my steps exactly, you'll need Matlab and SPM8, although the logic is simple enough that if you're familiar with other imaging analysis programs/platforms you can replicate the process there.

Step 1: Get a good template MRI image.  I like using the Colin Brain, which is an average of several dozen structural brain scans of the same subject (Colin).  The Colin Brain is a standard template image in most analysis software packages or you can download the original here.  For this to work properly, you should use the skull stripped version (I used the ch2better.nii template that comes default with MRICron).  So if it's not skull stripped already, you should use a skull stripping program to do it (I've had good luck with BET2 in the past).  You can download the specific Colin Brain template that I used here.

This is what the Colin brain looks like normally.  Consider this our human template.



Step 2: Get a template segmentation map.  There are dozens of template segmentations these days.  I used the Automated Anatomical Labelling (AAL) template from the aal.nii file that comes standard with MRICron, which is a pretty ubiquitous choice.  Or if you don't want to go with a standard segmentation, you can use a program like Freesurfer to automatically segment the template brain you've chosen.  The important thing is that the template you use has a list of labels for each region ID number so you can choose regions from a list (e.g., the aal.lut.txt file).  I've posted the AAL template and the region list I used on my website.

Here's a snapshot of the regions in the AAL template. Each color represents a different labeled brain area.



Step 3: Isolate the segmentation map.  This is a painful step.  You need to write a routine to extract each region from the segmented template.  I wrote a quick Matlab program to do it and you can download the segmentation routine here (NOTE: no promises it will work for you, I'm not gonna support any of this code... just sharing).  If you load the text file of the region list and voxel ID numbers, this will run through and make separate images for each region.
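For the curious, here's a minimal Matlab/SPM8 sketch of what an extraction routine like this boils down to.  This is a sketch only: it assumes the label file holds one "ID name" pair per line, the file names are from my setup, and my actual script (linked above) differs in the details.

```matlab
% Minimal sketch: write one binary mask image per labeled region.
% Assumes SPM8 is on the Matlab path and that aal.lut.txt holds one
% "ID name" pair per line.
V   = spm_vol('aal.nii');          % header of the segmented template
seg = spm_read_vols(V);            % 3D matrix of region ID numbers

fid    = fopen('aal.lut.txt');
labels = textscan(fid, '%d %s');   % column 1 = voxel IDs, column 2 = names
fclose(fid);

for i = 1:numel(labels{1})
    Vout       = V;                % copy header info from the template
    Vout.fname = sprintf('region_%s.nii', labels{2}{i});
    mask       = double(seg == labels{1}(i));  % 1 inside region, 0 outside
    spm_write_vol(Vout, mask);     % write the binary mask to disk
end
```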

Step 4: Reslice the regions to the template space (NOTE: you can skip this step if you ran a segmentation routine on the template brain yourself).  In my case the template (human) brain is a 3D matrix with xyz dimensions of 301x370x316.  However, the AAL template file I used has the dimensions of 181x217x181.  So we just need to reshape the matrix size of each region of interest we extracted from the AAL file to match the size of the template brain.  I used spm_reslice.m for this.
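Here's roughly what that reslicing call looks like.  The first image in the list defines the target voxel grid, and the file names are again placeholders carried over from the sketch above:

```matlab
% Minimal reslicing sketch: bring one extracted region mask into the
% voxel space of the template brain. Output gets an 'r' prefix
% (e.g., rregion_Hippocampus_L.nii).
flags = struct('which',  1, ...   % reslice everything except the first image
               'mean',   0, ...   % don't write a mean image
               'interp', 0);      % nearest neighbour keeps the mask binary
spm_reslice(char('ch2better.nii', 'region_Hippocampus_L.nii'), flags);
```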

Step 5: Get a list of regions you want to "lesion".  Now from the full list of extracted regions, you'll want to choose which parts of the brain you want to wipe out.  Here's the list I used for our zombie brain.

Step 6: The virtual lesion loop.  The principles of this loop are pretty easy.  Loop through the list of regions; for each one, load that region's file, find its voxel coordinates, and save them in a map.  From these you'll want to make two maps.

The first map isolates the areas to be lesioned and removes the gray matter (i.e., voxels in these regions with an intensity less than some threshold will be set to zero).  It will look like this.



The second map contains everything else, the tissue to be spared.  It looks like this.



Once you've got your two maps, you'll want to put them back together again.  The resulting map looks like this.



It's not perfect, but it's close.  The final step of the process is to smooth out the rough edges.  I used the spm_smooth.m function for this with a smoothing kernel of 3mm FWHM.  The end result is what you saw above.
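To make steps 5 and 6 concrete, here's a minimal sketch of the whole lesion loop.  The region names, the intensity threshold, and the output file names are all stand-ins for illustration, not the exact values from my script:

```matlab
% Minimal sketch of the virtual lesion loop: union the target regions
% into one mask, hollow out the gray matter there, splice the result
% back into the intact template, and smooth the seams.
Vt       = spm_vol('ch2better.nii');     % skull-stripped template
template = spm_read_vols(Vt);

regions    = {'rregion_Frontal_Sup_L.nii', 'rregion_Frontal_Sup_R.nii'}; % your lesion list here
lesionMask = false(size(template));
for i = 1:numel(regions)
    m          = spm_read_vols(spm_vol(regions{i}));
    lesionMask = lesionMask | (m > 0.5);  % union of all target regions
end

% Map 1: the lesioned regions with gray matter removed (voxels below an
% assumed intensity cutoff are zeroed out).
thresh   = 100;                           % placeholder threshold
lesioned = zeros(size(template));
lesioned(lesionMask) = template(lesionMask);
lesioned(lesioned < thresh) = 0;

% Map 2: everything outside the lesioned regions, spared untouched.
spared             = template;
spared(lesionMask) = 0;

% Merge the two maps, write, and smooth with a 3mm FWHM kernel.
Vz       = Vt;
Vz.fname = 'zombie_colin.nii';
spm_write_vol(Vz, spared + lesioned);
spm_smooth('zombie_colin.nii', 'zombie_colin_smooth.nii', [3 3 3]);
```

The threshold trick works because gray matter is darker than white matter in a T1 image, so zeroing the low-intensity voxels inside the mask hollows out the cortex while leaving the underlying white matter in place.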


Voila. You've just turned Colin into Zombie Colin!  Here's the overlay of the original Colin brain and the zombie brain (orange), because it's so cool!



Now with this routine you can pick and choose which areas you want to lesion.  This is how I came up with the fast versus slow zombie brains.


There's a lot of room to use this as you please for teaching and demonstrations.  It makes for easy visualization of lesions/atrophy for educational purposes.  But remember, it's fake data, so ALWAYS use that disclaimer.

So that's it.  You've made Colin into Zombie Colin and now you can show him off to all your friends.

Until next year, Happy Halloween everyone!

Thursday, October 18, 2012

SFN 2012 Recap

This past week I was at the largest annual neuroscientific gathering in the world, where 28,000 people converged in one place to talk about brains.  The Society for Neuroscience meeting was held in New Orleans and man was it a fun trip this year.  Okay, it did help that a) New Orleans is an absolutely amazing city that I will never get tired of going back to, b) we got to enjoy a little schadenfreude as the internet wrought revenge on a famous scientist who let his sexism show, c) Voytek and I had a very successful guerrilla science campaign, and d) I got to do my first press conference, which has since been picked up and misreported by CNN's blog.

Oh, and Ned the Neuron was a nice, warm and fuzzy presence at the conference and also at the late night bars.

But beyond that, the science was exceptionally good this year.  Here are some of the highlights that I caught.

-- Larry Abbott, perhaps one of the best theoretical neuroscientists alive, gave a fantastic lecture on how (counter-intuitively) unstructured networks lead to the best decoding. Basically it turns out that if you take a set of connections, in his case the olfactory system of the fly, then having a completely random wiring pattern actually makes it easier to decode the input stimulus.  Now I haven't read the paper, so I'm not 100% sure I can say why this works, but it does provide some interesting food for thought.  He also went on to present Mark Churchland's work on the subject which I'm saving for a later post (I have always been a big fan of Mark's work and I think he's onto something really amazing with his latest set of studies).

-- Contrary to some pretty vocal (and sometimes warranted) criticism, the Human Connectome Project gave a preview of some of its preliminary data and I gotta say... it looks incredible!  Their diffusion imaging data is hands down the best I've seen so far and they're already letting people download it to use in their research.  I've already registered and feel like a kid in a candy store.

-- Although I missed this talk at a pre-conference workshop, I hear that Philip Sabes' lab (my first post-doc adviser) has basically taken the first step towards building The Matrix!  His graduate student, Maria Dadarlat, had some fantastic work where she continually paired patterns of stimulation in the monkey brain with real visual stimuli during a motor control task.  The monkey is trained to use this complex pattern of moving dot fields on a computer display to figure out how to navigate his arm around a workspace.  After training, Maria can turn off the real visual stimulus (i.e., what the monkey sees) and the monkey uses the brain stimulation signals to guide his arm with as much accuracy as if he were actually seeing the visual stimulus.  Expect to hear big things when this work is published.

-- The optogenetics footprint this year was the largest I've ever seen.  Several labs showed some really cool work extending the technique into non-human primates (wonder how long until it's used in humans for something) with significantly improved bandwidth.  This technology still boggles my mind, although I'm starting to appreciate some of the criticisms from the traditional physiology side of the table.

-- I also learned a lot about the ins and outs of NSF funding priorities for the near future, thanks to some really helpful program officers and a great presentation on how to navigate the NSF grant process.  I literally couldn't write my notes fast enough.  All I can say is, emphasize novel and unique data sharing ideas as much as you can in your next set of applications...

Okay, I have many, many more notes from visiting posters that I won't blather on about, but they will serve as fodder for future posts.  Just wanted to put these up there for anyone interested before I'm reduced to a coma from exhaustion.

Thursday, September 20, 2012

Being Cautious About Consciousness


Last month a distinguished group of scientists gathered at Cambridge University to attend the Francis Crick Memorial Conference.  The larger public knows Dr. Crick for his Nobel Prize-winning work on the structure of DNA.  Few people outside of the scientific community know that he spent the last half of his career dedicated to finding the neural correlates of consciousness.  This conference, which included experts in the fields of human perception, animal sensory systems, evolutionary biology and psychopharmacology, was meant to honor Dr. Crick's vision of one day identifying how our brains give rise to consciousness.

At this conference, several prominent attendees signed The Cambridge Declaration of Consciousness.  In this document, the signatories outlined several scientific findings: that non-human animals experience binocular rivalry, that they have brain regions homologous to those of humans, that many primitive emotions are linked to subcortical brain areas, that birds have REM sleep, and that hallucinogens affect human and non-human brains similarly.  From these disparate observations, the conference attendees concluded, “Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors.”  In summary, many animals have the same neural substrates for conscious states as humans, a clear implication that animals experience consciousness.

To be blunt, this declaration is both inflammatory and grossly irresponsible for two reasons. 

First, it misrepresents the state of our scientific understanding of consciousness.  As of yet, the scientific community has not reached a consensus on an empirically testable definition of what it means to be conscious.  You see, “consciousness” is a quale, which according to Merriam-Webster is “a property as it is experienced as distinct from any source it may have in a physical object.”  Now to be fair, in psychology and neuroscience we deal with qualia on a daily basis.  Concepts such as memory, attention, and emotion are all, in themselves, immeasurable entities; however, in the laboratory we are able to observe manifestations of these concepts by constraining our hypotheses to empirically testable phenomena.  For example, when we study memory, we are referring to the changes in behavior or in neural systems that arise from experience (or the loss of such abilities from lesions to different brain regions).  We don’t have a way of seeing an actual memory itself, but we do have measures of recall, adaptation and synaptic plasticity.  The same is true for emotion, attention, and perception.

But consciousness, in its current definition, is a quale defined by other qualia. It is the state of being aware of things in the outside world and of ourselves. Thus consciousness is a construct built off of both “awareness” and “intention”.  (It should be noted that in the Cambridge Declaration, the signatories often conflate the concept of consciousness with intention.) As a result, researchers who study the neural correlates of consciousness are forced to look at these other states, but as of yet there is no unifying definition of the state of being conscious. There is no brain area, network of brain areas, or neurochemical system that, when damaged, definitively removes the ability to be “conscious” while still being awake and alert. There are many lesions that cause problems with visual perception, memory, verbal recall, etc., but how many of those abilities can we lose and still be conscious? Science still does not have an answer to that question.

The second, and by far the biggest, problem with the Cambridge Declaration of Consciousness is that it is a dangerous document.  It gives the false impression that consciousness is a ubiquitous state of simple animal brains.  This is a damning statement to any researcher who uses animal subjects in their experiments (and many of the signatories of the declaration do use animals in their research).  Are the signers saying that we should stop all animal research or stop using animals as domesticated food sources?  If animals do in fact experience consciousness as we do, then this has far-reaching implications for how we use and treat animals.

The impact of such a statement, however, goes well beyond the future of animal research and into even more hot button social issues.  For example, this declaration implies that neuroscience should be able to tell us when consciousness starts in the fetal brain (since knowing the neural substrates means knowing when they develop), diving head first into a social issue we have no business being in at the moment.  What about clinically “brain dead” patients whose families wish to let them pass away?  If having a neocortex is not necessary for consciousness, then is someone who has lost 85% of her cortex from a stroke still conscious? By signing this declaration, these scientists have given the impression that the field of neuroscience has the answers to these questions, which could not be further from the truth.  We don't even know where to start.

Saturday, September 8, 2012

The unsung "job creators"

Unless you've been living in a cave these last few years, you've undoubtedly heard the term "job creator" thrown around a lot.  The phrase is often used in reference to CEOs and other businessmen (and women) who run companies.  Actually, more often than not, it is used to refer to anybody who somehow has made a lot of money (regardless of how many actual jobs they've personally "created" or whether they simply inherited their wealth).  The connotation is of a benevolent capitalist who invests in an industry with a forward eye towards sustaining local economies.

These executives are elevated to near angelic status by some, but truth be told, they are actually sitting at the end of the "job creation" chain.  The true stimulus for developing industries and subsequent employment usually happens much, much earlier.  It usually begins in the laboratory.


Building an economy by asking a question

Since the dawn of the Industrial Revolution, science and technology have served as the bedrock of emerging industries.  Without theoretical physics, we would never have had atomic energy.  Mathematics gave us computers.  Molecular biology gave us biotechnology.  Geology gave us most of the energy industry.

But often the scientific discoveries that allowed for an industry to bloom were not intended to shape an economy.  These research endeavors were started merely to satisfy a curiosity.  My favorite example of this is photography.  Few can argue that the advent of the photograph didn't fundamentally change our world.  It not only revolutionized journalism, but also spawned hundreds of new companies, most notably companies like Kodak.

However, the early photograph came about because of independent, basic science discoveries in physics and chemistry.  The research on photons, electromagnetism and light-sensitive materials that led to the photograph wasn't done explicitly for photography.  It was the result of many independent scientists who were simply curious about the world around them.  It wasn't until Joseph Niepce and Louis Daguerre put these pieces together that the early photographic process began to get off the ground.

Or take another example more closely tied to my work. The advent of the now almost ubiquitous medical imaging procedure MRI didn't come around on its own.  It was built off of fundamental discoveries in physics on the electromagnetic properties of molecules and atoms, as well as biological studies on the tissue content of the human body.  So basically, answering questions about how hydrogen atoms spin and how fatty the human brain really is led to perhaps the most important medical technology advancement since the vaccine.

Now these aren't isolated anecdotes.  Nearly every industry is built off of a technological advancement that can trace its roots to basic science discoveries that had no clear applications when they started.


Pulling the rug out from under new industries


Today, science is exploding (sometimes literally) with many new fundamental discoveries.  We've found the Higgs boson, we've discovered life in places we never dreamed possible, and we've even figured out how to use viruses to make neurons fire with lasers.  None of these discoveries have any applied uses that we know of... yet.  But who knows what industries they may lead to in the coming years and decades?

Unfortunately, despite these amazing discoveries and advancements, the appreciation for basic research is plummeting in this country.  General scientific literacy and public support for science are dropping.  Even major scientific funding agencies like the National Institutes of Health and the Defense Advanced Research Projects Agency are pushing for more "applied research" projects at the expense of basic science.  While this may help facilitate immediate advancements in existing industries, it is only a short-term strategy that shifts focus away from the real work that leads to the advent of entirely new industries in the long term.

I say it's time to take a step back and appreciate who the real "job creators" are around here.  Is it the executive who sends paychecks to tens, hundreds or maybe even thousands of employees, or is it the unsung people whose discoveries eventually build entire industries that end up employing thousands or millions of individuals?

So the next time you hear a politician or television pundit talk about thanking a "job creator," head to your local university or research lab and thank a scientist.




Thursday, July 26, 2012

Of soda bans and neural strands

A lot of comedic energy has recently been focused on New York City's ban on extra-large sodas. Nowhere is this more so than at the Daily Show, where Jon Stewart recently pointed out the irony that while marijuana is practically legal, it's buyer beware if you want 32oz of your favorite beverage.


Yet it is interesting that Mr. Stewart chose this particular comparison.

Years ago, social and political pressure mounted on science to show that chronic marijuana consumption could damage brain cells.  The story was to be that toking up meant killing neurons.  Well, despite decades of federally funded research, there has been no definitive link between the consumption of marijuana and brain cell death.  Sure, there is a slight possibility that the link is there and we haven't found it, but it hasn't been for lack of trying.


Now let's flash forward to today, when over 1/3 of the US population is obese and, as a population, we just keep getting bigger.  This is mainly due to a decrease in physical activity and an increase in high-calorie diets, fueled by things like the nefarious "super-sized" sodas.

"So what?" you might say,  "It's not like that soda is killing my brain cells."

Actually, a growing body of evidence suggests that it might be doing just that.  Well, to be clear, not that single soda per se, but the obesity that such high caloric intake can lead to.

From the waistline to the brain

Most people, even scientists that I talk to, assume that if there is a relationship between the brain and obesity, it is only in the sense that certain people's brains drive them to eat more and that's why they're obese.  Maybe you've got a more addictive personality to begin with, so you're hardwired to seek reward and your drug of choice ends up being doughnuts and soda pop.


Now I won't argue that there might very well be a case for this argument and, in fact, there is some data to justify this hypothesis.  But let's step back for a minute and consider some general facts.  The brain needs a lot of energy to do its thing.  In fact, you can think of the brain as the United States of the body's global energy supply.  It occupies only about 2% of the total tissue volume in our body, but it uses almost 15% of the output from the heart, 20% of the body's oxygen, and 25% of the circulating glucose.


Now everyone knows that obesity is linked to all sorts of metabolic problems (e.g., diabetes, high blood pressure, cardiovascular disease, etc.).  So if we stick with our metaphor of the brain being like the United States of energy consumption, then think about what happens when energy prices skyrocket in this country.  The cost of doing things incrementally goes up and the overall productivity of the country pays a price.  In fact, there are many studies showing that the brains and cognitive processes of obese individuals function differently than those of their lean counterparts.


But emerging evidence suggests that obesity may be much more nefarious to the brain than simply raising the metabolic gas prices.  It might actually be, that's right... attacking brain tissue itself.


Okay, I'll admit that last statement seems quite hyperbolic, but emerging research is giving us a very startling picture of the relationship between the size of your gut and your brain health.


Take, for example, a recent study my colleagues and I did that will be coming out in the journal Psychosomatic Medicine.  We looked at how the underlying architecture of the brain itself differs in obese individuals compared to lean counterparts.  We took a group of neurologically healthy adults who spanned a range of body mass index (BMI) scores.  Higher BMI means, generally speaking, greater obesity.  We then used MRI to measure the integrity of the physical connections in the brain.  Remember that the two fundamental tissue types in the brain are gray matter (the cell bodies) and white matter (the long strands that connect cells together).  The type of MRI we used, called diffusion tensor imaging, looked at this latter tissue type (by measuring something called fractional anisotropy, which is a very basic measure of white matter integrity).


We found that with every point increase on the BMI scale, there was an incremental decrease in the integrity of white matter throughout the brain.  Now other studies also show this relationship between obesity and white matter, but our findings point to a global and pervasive effect throughout the brain.


As if that wasn't scary enough, in another study, recently published in the journal Cerebral Cortex, my colleagues and I used the same MRI approach to look at how social factors relate to neural health.  We found that lower social status (i.e., lower family income, fewer years of education and living in poorer communities) predicted a reduced integrity of the physical connections in the brain.


Let me repeat just to drive the point home: socioeconomic status could actually predict the microscopic architecture of the connections in the brain.


How can this happen?  Well, it turns out that this relationship is mediated by an increase in unhealthy lifestyle factors like smoking and, that's right, increased obesity.  So lifestyle factors and access to resources that affect physical health may be directly influencing the physical structure of your brain itself.  That means this health-to-brain relationship has vast societal implications that we are only just beginning to comprehend.


A molecular link between physical health and the brain

Okay, if you've stuck with me this far, you are probably wondering how the heck changes in the body can influence the brain.  Well, in the study I just described, my colleagues also measured levels of a molecule called C-reactive protein (or CRP) in the blood.  This little protein reflects inflammatory activity, which is an immune system reaction and is the reason why recent cuts turn red and flushed.  It turns out that the link between both smoking and obesity and the brain could mostly be explained by increased CRP levels.


Let's put it together.  Lower socioeconomic status led to reduced physical health, which led to increased inflammation, which, in turn, led to reduced integrity of the white matter in the brain.


Schematic of the relationship between social and lifestyle factors and the brain (adapted from Gianaros et al., Cerebral Cortex 2012)

Does this mean that cells are dying?  Well, not necessarily.


Remember, I said that white matter is made of the long fiber strands that connect neurons together.  So far all we can say is that the signal we use to measure this tissue is reduced.  However, adding one more fact into the equation leads to some very scary hypotheses as to what might be happening.

It turns out that the fat around your gut actually acts like an organ that secretes inflammatory molecules.  As that "organ" expands, it secretes more inflammatory chemicals (called cytokines).  Many of these chemicals can cross the blood-brain barrier that protects the brain from a lot of bad things.  Once in the brain, they can induce a local inflammation of the support cells that basically serve as the scaffolding for the axons in the brain.  After a while, this scaffolding may collapse and break the underlying axons.

How do we know this can happen?  Well, because this is precisely what happens in multiple sclerosis (MS), and we know that MS definitely damages physical tissue.

Now I should be up front.  The scientific evidence isn't there yet to suggest that obesity physically kills brain cells the same way MS does.  The emerging evidence is nonetheless convincing that there is a troubling link between obesity and the same systems that MS attacks.  This evidence keeps mounting every month as more scientific studies come out.


Food for thought

So while comedians and politicians may poke fun at Mayor Bloomberg's decision to ban extra-large soda drinks in an effort to curb obesity, we should take a step back and look at the science.  Increased obesity not only reduces your physical health, but it's becoming readily apparent that obesity also interferes with the organ that sits at the root of all thinking.  Our work, along with studies from many other labs, shows that this has dramatic implications that extend into social issues as well as medical issues.


Will banning 32oz sodas solve the problem?  Absolutely not... not even close.  But is it taking a problem seriously that we have, thus far, only been talking about tongue-in-cheek?  You bet it is.  


Brogan A, Hevey D, O'Callaghan G, Yoder R, O'Shea D. (2011). Impaired decision making among morbidly obese adults. J Psychosom Res. DOI: 10.1016/j.jpsychores.2010.07.012

Brogan A, Hevey D, Pignatti R. (2011). Anorexia, bulimia, and obesity: shared decision making deficits on the Iowa Gambling Task (IGT). DOI: 10.1017/S1355617710000354

Gianaros PJ, Marsland AL, Sheu LK, Erickson KI, Verstynen TD. (2012). Inflammatory Pathways Link Socioeconomic Inequalities to White Matter Architecture. Cerebral Cortex. http://www.ncbi.nlm.nih.gov/pubmed/22772650

Stice E, Spoor S, Bohon C, Small DM. (2008). Relation between obesity and blunted striatal response to food is moderated by TaqIA A1 allele. Science. http://www.ncbi.nlm.nih.gov/pubmed/18927395


Monday, March 12, 2012

Tales From The Science Trenches: The unasked question


Note: This post reflects my final diatribe on the issue of modern scientific publishing for a little while.  While I think that this is definitely a flawed process in desperate need of fixing, I also think there are other issues that deserve our collective attention much more urgently. I will get to those in upcoming posts.

After a dreadfully long hiatus (due to being on the job market), I have finally returned to the world of blogging.

I've decided to get back to the topic I left hanging a few months ago: the so-called publication "crisis" in the field of cognitive neuroscience.  Regular readers of this blog already know my opinions on this problem, but a little bit of time has added maturity and nuance to my views on the matter.

In my travels these past few months I've had the opportunity to discuss the flaws of the modern review process at length with many well-respected colleagues.  I've heard the good, the bad and the in-between about life on the front lines of scientific publishing.  Some say it's way too difficult to publish today (or that the process is dreadfully biased).  Others say it's too easy.  But almost everyone agrees that it's a real pain in the ass.

I don't want to spend too much time on the status of the current system.  Dwight Kravitz & Chris Baker have published a very complete background & review of this topic that has garnered a lot of good attention.  I highly recommend it for anyone interested in this topic, if only to get a good understanding of how we got to where we are today.

So for our purposes today, let's consider some anecdotal stories relayed to me by colleagues.

The Bad

When given the choice between hearing "good news" and "bad news" I always accept the bad news first.  So let's start by considering an example of where the process falls apart (from a colleague working in Ireland).
Submitted a manuscript to Journal X [Note: obviously not the real journal name] in April 2010; received an email in mid-June, expecting this to be the reviews, but it was in fact an email from the editor saying they would send it out to review. Mid-Oct 2010 (1 day before it reached 6 months since submission!) we received an apologetic email and a decision: invited to resubmit based on the reviewer comments. I say reviewer because we only had one reviewer; the other had broken contact with the journal. By mid-August 2011 (6 months since re-submission, a year and a half since the initial submission), it appeared that the initial reviewer whose comments we’d addressed had also broken contact with the journal and we once again had only a single new review.  At this stage they just accepted the article pending revisions, based again on the single reviewer.

Two other slightly irksome reviews both came from Journal Y. In one case a reviewer told us we needed to discuss the role of the basal ganglia even though this was a cerebellar paper; in the next round of reviews a different reviewer asked, “why are you discussing the basal ganglia in a cerebellar paper?” The second, more annoying one came from a review of an anatomy paper in which we used one of the most highly cited articles in my field (Kelly and Strick, 2003) to justify my regions of interest. The reviewer said Kelly and Strick were wrong and therefore we were wrong!

This highlights several key problems brought up again and again in my conversations with colleagues (and discussed at length by Kravitz & Baker).

First, there's the needless lag in getting the article turned around.  Fifteen years ago, when manuscripts had to be physically mailed to journals, it was understandable to have a 1-2 year review process.  However, we now live in an era of online submissions.  Yet the time-line of many journals is still arduously slow and needlessly long.

Second, mid-tier and low-tier journals often find themselves scrambling to find good reviewers for papers.  I've had many colleagues tell me about a paper getting in with only a single reviewer (for non-scientists, the norm is 2-3 reviewers).  In many of these cases there is also a trend for reviews to be out-sourced to graduate students or post-docs.

Now don't get me wrong, this is an incredible training opportunity (and one that I am thankful to have started early in my career).  However, without good oversight of an out-sourced review by the principal investigator who was originally solicited, a rookie mistake can kill a reasonably good study's chance of acceptance at a journal.  This risk is worsened when the rookie is the only reviewer of a manuscript.

Finally, the critiques are generally random, arbitrary and sometimes not related to the manuscript itself at all.  Sadly, it is fairly common these days for a paper to get rejected because of more global disagreements in the field rather than the quality of the study itself.  We work in a large field with many big theoretical disagreements.  As a result, all of us at one time or another have been collateral damage in a fight amongst the larger egos in our field.  Yet this only serves to stifle the communication of ideas and results, rather than evaluate the true merit of any individual study.

The Good

But contrary to this (and the arguments raised by Kravitz & Baker, as well as many other scientists), there are many times when the system actually works well.  Contrast the previous story with one communicated by a friend working here in Pittsburgh (paraphrased here and not her actual quote).
I submitted a paper to Journal Z and after a month I got comments back that were very positive and indicated a strong willingness to accept the paper.  The reviewers were very collegial, but brought up an ideal control experiment that would bridge my work with another theory as well as make a better overall argument for my hypothesis.  The editor had also read the paper and provided his own comments.  He coalesced the reviewers' comments into a specific set of recommendations and even offered suggestions for how to modify the design of the control experiment so as to make it a tighter/cleaner study.  In the end, I felt like they were all invested in pushing the project forward.
In my opinion this is a textbook example of how the process should work in the first place: fast turnaround, constructive reviews and a well-managed pipeline focused almost exclusively on the details of the study at hand.  So while the system has its flaws, it's not broken everywhere.

So What Gives?

Now there are two big differences between the Bad Story and the Good Story.  The journals in the Bad Story are all neuroscience journals, while the Good Story comes from a psychology journal.  So differences between fields may play a key role.

But I think there's something deeper at play here.  Specifically, differences in editorial oversight.

Let's face it, most of us can agree that, in many cases, editors at some journals appear to be falling asleep on the job.  But let's not be too harsh in this judgment.  I can understand why this happens in many cases.

Big publishing houses like Elsevier want to make as much money on free labor as possible.  So in many cases, not only are professional scientists giving free professional labor as reviewers, we're also serving as under-compensated editors.  Editors are overworked, under-appreciated, and have little time to manage their own research careers, let alone shepherd the work of others through the editorial process.

In a context like this, I can understand the sub-optimal feedback and oversight in the editorial process.

But what about higher impact journals with full-time professional editors?  Well, in my experience, those are just as poorly managed.  After numerous experiences submitting to journals with full-time professional editors, I have never received the type of constructive feedback and oversight described in the Good Story above.  In fact, I don't really believe that many editors at these journals even read the articles at all.

Of course, editorial oversight is just one (albeit significant) part of the problem.  Another part stems from the demand for reviews.  Reviewers are under increasing pressure to turn around reviews on faster timelines, often with little time to really digest any particular study.  This is on top of the increased quantity of reviews required to keep pace with the increased number of journals.

Scientists push to get out as many papers as they can, and this increased volume of manuscripts requires even more reviewers.  Eventually the demand gets so great that review quality drops precipitously.

The military has a word for this: "clusterfuck."

The Unasked Question

So this is the state of our current publishing process in the field of cognitive neuroscience.  It's driven a lot of professional scientists to push for a change.  To fix the broken pipeline.

Many argue that we need to overhaul the entire process itself.  I view this opinion fondly, although I might hesitate to say that I am a "proponent".  Some argue for a complete open access model with unblinded reviews.  Others want a centralized database for submitting unreviewed manuscripts in order to crowd-source the evaluation of merit of individual studies.

If you ask five cognitive neuroscientists their opinion on the current publishing process, you'll get ten different recommendations.

But there's one question I think we're all neglecting and it's the most important question of all if we are going to try and address this so-called crisis.


What the hell do we want from the publication process?

Do we want to go back to the more traditional view where papers are a completed treatise on a research topic?  Decades ago this was the central view of what a scientific paper should be.  Researchers would spend years investigating a single question, run all possible control experiments (dozens or more), and carefully construct a single paper after all those years that eliminated all other possible alternative explanations for the effect of interest.  These papers would be long, carefully constructed, and a nearly definitive treatise on a research topic.

This is essentially the tortoise model of scientific advancement.  The time-scale of publishing is very long, but also very reliable and consistent.  Here the key measure of a research program's efficacy is the long-term citation record of a given paper.  You might only publish one empirical article every few years, but it would be a definitive work.

The alternative is the hare model of scientific advancement.  Here publications reflect a snapshot of a research program's status.  Only one or two experiments are reported in a paper (although thoroughly analyzed) and repeated publications tie together over time to tell a larger story.  This is a very accelerated model of scientific progress.  Articles are less a treatise on a core theoretical question and more of a "status update" on a research program.

Over the last couple of decades, the push in cognitive neuroscience (and other fields) has been to move away from the slower quality-focused model to the faster quantity-based model.  There are many reasons for this, but you can primarily thank the increasingly heated "publish-or-perish" academic culture for this change.

Right now we are, unfortunately, stuck in a hybrid paradigm where we have the expectations of the tortoise model with the demands of the hare model.  In my opinion, this bipolar paradigm is what is driving researchers crazy these days.

So I return to my original question: what do we want from scientific publishing?  Do we want the tortoise model or the hare model?  We can't have both (or, realistically, we can't have the mechanics of one and the expectations of the other).

Unfortunately, there is no easy answer.  I see equally valid pros and cons to either.  Also, this is a decision that has profound implications well beyond how we structure the peer-review process.  Departments and universities would have to completely revise how they evaluate the progress of faculty and students.  The same goes for granting agencies.

However, these changes are necessary.  It's just a matter of deciding what the expectations need to be.

Therefore, answering this question comes down to us talking together as a field.  We need to decide which model we want to use and commit to it (with all the implications that follow).  Until we make this decision, we can propose as many new publishing systems as we want, but they'll end up being nothing more than intellectual thought experiments that describe the world we wish we had, rather than real ways to fix the world that we have.