What goes on behind closed doors in journal editorial offices is becoming an increasingly hot topic as publications are encouraged to move towards open access, and authors seek justification for the amount they pay to publish their research. One particular issue brought to light by Ben Goldacre’s recent book, Bad Pharma, is the idea that there is an editorial bias towards publishing positive studies over negative ones. In his book, Goldacre in fact rejects the notion that editors discriminate against negative studies, finding that the probability of any given submitted paper being published was much the same whether its findings were positive or negative.
So, where’s the problem? According to Professor Stephen Senn of the Centre de Recherche Public de la Santé in Luxembourg, the issue lies in the quality of the work.
In an interview with F1000 Publisher, Ian Stoneham (see video), Senn explains the reasoning behind his research: the flaw in the ‘no editorial bias’ argument is the assumption that the papers submitted to any given journal are of roughly the same quality, regardless of whether they are positive or negative. What this overlooks are the papers that are never submitted at all.
After reading his latest paper, ‘Misunderstanding publication bias: editors are not blameless after all’, F1000Research 1:58.v1, I asked Dr Senn to explain his theory further. It seems that in order to study publication bias properly, we must appreciate that missing data are just as important as the data that are present.
Dr Senn offers the following analogy:
“A fine example of the importance of thinking carefully about how we get to see what we see is provided by Abraham Wald’s study of planes during the Second World War. He had information about the location of bullet holes in a number of planes that had got back to base having been shot. He argued that the place where one needed to put extra armour was where there were no bullet holes. The missing bullet holes were very important, since these were holes in planes that did not get back to base, probably because the shot had struck a vital point. So the key was thinking about the planes one did not see. Similarly, those studying publication bias of editors should have thought about the papers the editors did not see and what this implied about what they were seeing.”
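Senn’s point can be made concrete with a toy simulation (my own illustration, not taken from his paper; all the probabilities below are invented for the sake of the example). Suppose the true effect under study is null, authors submit positive results more readily than negative ones, and editors accept submissions at exactly the same rate regardless of outcome — no editorial bias at all. The published literature still ends up skewed, purely because of what was never submitted:

```python
import random

random.seed(42)

# With a null true effect, roughly half of all studies come out "positive".
N = 100_000
studies = ["positive" if random.random() < 0.5 else "negative" for _ in range(N)]

# Authors submit positive results far more often than negative ones
# (illustrative probabilities, chosen only for this sketch).
SUBMIT = {"positive": 0.9, "negative": 0.3}
submitted = [s for s in studies if random.random() < SUBMIT[s]]

# Editors accept submissions at the SAME rate whatever the outcome --
# i.e. no editorial bias, matching what Goldacre reports.
ACCEPT = 0.5
published = [s for s in submitted if random.random() < ACCEPT]

pos_share = published.count("positive") / len(published)
print(f"Share of positive results in the published literature: {pos_share:.2f}")
# Comes out around 0.75, despite a true positive rate of 0.50.
```

The bias in what we read is real, yet invisible if you only compare acceptance rates for the papers editors actually saw — which is exactly the missing-planes point.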
I have a number of friends working on PhDs and postdocs, and one of their chief complaints is that they can spend many months trying to make an experiment work, only to find out that it has already been attempted by a scientist at another institution, with similarly negative results. Obviously, it would have saved them considerable time if those negative findings had been published; yet they remain reluctant to try to publish their own results — even though doing so could prevent unnecessary repetition in future — because they fear their work isn’t significant enough to pass an editorial board’s scrutiny.
Senn’s paper highlights this seldom-discussed but widespread issue in science publishing, one that F1000Research hopes to eliminate by encouraging publication of all sound research, regardless of whether the results are negative. Hopefully, as the significance and utility of publishing negative and null findings become more apparent, other journals will open their closed doors and more readily support the publication of all research, not just what they think their readers are interested in.