This is a rehash of a piece I wrote in September 2011, but the sentiment of what I said then remains the same. However, I wanted to shift the focus onto one of the sub-issues I discussed which, nearly two years later, is no closer to being resolved – that of journals still insisting on ‘significant’ results in articles submitted for publication.
Back then I wrote ‘I believe that academic journals, and the academic community in general, have played a role in fraud and it needs to be recognised and addressed’. I think it has been recognised – at least by the research community. Social media over the last year in particular has been awash with researchers calling for an overhaul in the way that research is published. Has it been addressed by the journals and reviewers? I don’t think so, or at least certainly not as much as it could be. Yes, some journals are now asking for raw data to be submitted as standard along with the written article, but this still leaves a huge issue unaddressed – the reporting of non-significant results.
A big problem is that publishing research articles is not done in isolation – a lot can ride on it, whether that be completing the grant that funded the research, justifying further research in the field, or securing future funding and academic positions. ‘Publish or perish’ is as prevalent now as it has ever been, if not more so. But what do you do when your research doesn’t turn out perfectly (anyone ever had a perfect study finding?), or you have null results, even predicted ones (more on this later), and journals reject your paper on those grounds despite the good findings and the story you have to tell as well? Fortunately most researchers, certainly those that I know, take it on the chin (usually after some ranting over several beers, admittedly) and move on in exactly the same way as before, researching and reporting honestly… but the temptation is being put there to behave in another way: to drop the null findings, to get rid of a couple of troublesome datapoints that are keeping you just the wrong side of .05. Not because you’re an evil and reprehensible individual, but because doing something fairly simple and seemingly innocuous suddenly gets you that paper in the prestigious journal, gets you the keynote at that conference you always attend, gets you that job you were after.
And from that point on it gets easier. How do I know? Because I’m a psychologist, and psychologists wrote the bloody book on it! That’s the irony – we know what causes people to behave in certain ways, yet we don’t apply it to our own systems. I don’t know Diederik Stapel personally, but I think it is pretty safe to say that he didn’t enter academia with the intention of faking things. In all likelihood, though, he did it a first time and it brought him benefits with no adverse consequences. Moving away from Stapel in particular: the way the system is set up at the moment, the adverse consequences can come from not behaving in an underhand manner (your study doesn’t get published). That means we’re even beyond operant conditioning – we’re in nudge theory territory, but with a negative nudge! What sort of a system promotes that as a form of incentive?
I want to go back to the null results issue before I finish. I was brought up with the principle that you should publish results to inform others of what you have found, and null results can be important – whether because you have a good reason for predicting that they will occur, or simply to inform others that they may be wasting their time following the same path you have trodden. A nice principle, but I’ve had what I consider to be a decent study sat doing nothing for 18 months because it contains a big null result that I expected – yet without reporting it, the rest of the findings make no sense. No-one will publish it. This leads to the further irony that I spend all my time telling my undergraduates for their dissertations that ‘significant findings aren’t important, it is how you carry out and report the study that matters’; as soon as they become postgrads, am I supposed to say ‘forget that, give me results at all costs’?
At the end of the September 2011 piece I said ‘[I]t is a complex issue that shouldn’t be subject to knee-jerk reactions by either social psychologists or anyone else – we need to look at all of the issues surrounding data collection and reporting, and move forward from this sensibly’. I think a lot of researchers are of the same opinion, but I also think that anyone who believes people like Stapel exist simply because they are ‘bad’ researchers or people is being extremely naive. Our systems for the reporting of research must change… because the next time a Stapel comes along, in our discipline or anyone else’s, the shocked hand-wringing won’t cut any ice with many.