Some people I know think I was a bit harsh on Twitter calling Mark van Vugt’s article at Psychology Today, about Diederik Stapel’s scientific misconduct charge, ‘poor’ (see http://t.co/fyNMWMY for the article). However, I stand by it and I want to set out why.
I have no problem with supporting whistleblowers – I have always thought this should be done in all workplaces, as long as a line is drawn between true whistleblowing and malicious persecution, but that is another argument entirely. However, I do have an issue with this role being given to an ethics officer, and this relates to the next point…
The article suggests that one of the problems may be the use of deception in social psychology. This is akin to suggesting that social psychologists, through dabbling with ‘the dark side’ of deception, somehow get sucked into a world of deceit which then extends to their handling of data.
The simple fact is, this has nothing to do with ethics – what we are talking about is pure and blatant fraud. A researcher could carry out the most ethical study in the world, going through every step possible to gain ethical approval, and still commit fraud with the data they get at the end of it. Conversely, a highly deceptive and ethically dubious study could have its data reported perfectly – there is simply no correlation between deception/ethics and data fraud, nor any reason to suppose one.
Rule 3 of the article suggests that only published articles should make it to the media. Whilst the concerns about the unpublished work of Kanazawa, and some of Stapel’s work, reaching the media when they might not have made it through peer review are reasonable, I also have some grave misgivings about this rule, hence the title of this piece – because I do believe that academic journals, and the academic community in general, have played a role in fraud, and this needs to be recognised and addressed. Here’s why…
The simple fact is, there are a lot of postgrads, postdocs, et al. out there and not many permanent academic jobs – it is the same across all disciplines at the moment. The need for publications under your belt, whether to get that first job or to seek promotion, is growing all the time – ‘publish or perish’ is as alive now as it ever was, possibly more so. At the same time we have peer-reviewed journals that are largely only interested in publishing significant results. And, of course, grant money is getting harder and harder to come by.
So there you are – a promising post-doc/researcher/whatever, and you’ve spent all of your grant money running a study that has taken weeks or months to set up, collect data for, and analyse. Back come the results, and there is nothing there, or only a smallish effect – certainly not enough to get you a good publication, if any. The scientific paradigm says you should either refine your study and try again, or accept that nothing is happening and move on (it also says you should publish these results to inform others of what you have found, but good luck with that one). The bank balance and the career prospects, however, could be saying something very different. And that is where the temptation must be. Failure, certainly at the lower levels in many sciences, is almost not an option any more.
Got to get the significant results to get the papers to get the job to climb the ladder in academia – that is the system we have set up. Not only does it encourage fraud, it encourages ‘safe’ science – who can afford to take the risks any more? (Again, that is a topic for another discussion.) But this is where I think academia has to shoulder some of the blame – there has to be an outlet for researchers to report their findings, whether they be good, bad, or indifferent. Not only might this mean that people won’t feel the need to commit fraud to get research out there, but surely it will expand our knowledge of the true nature of our research. I’d love to know why so many ‘robust’ findings that I or my students have attempted to replicate produce nothing of the sort – are we simply doing it wrong, or are others finding the same? If the latter, are these findings as robust as stated? I don’t know whether open-access or institution-pays models are the way forward, but something has to change, because I truly believe that the way things are at the moment is part of the problem – the idea of ‘a few bad apples’ is too simplistic and ignores the wider issues.
I’m fortunate that I’ve never been in the position of having to contemplate doing what Stapel did (and I’ve got the boxes of unpublished data from countless failed experiments to prove it!). I don’t think I would do what he did, but who can tell unless they are in that position? That isn’t to condone it by any means, merely to try to understand it. It is a complex issue that shouldn’t be subject to knee-jerk reactions by social psychologists or anyone else – we need to look at all of the issues surrounding data collection and reporting, and move forward from this sensibly.