I came across an interesting post on the Science News website that explores the concept of reproducibility in science – and why we shouldn’t be so hard on ourselves when we struggle with it.
It’s common across all areas of science for published work to prove difficult for other researchers to reproduce, but how much of a problem is this, and should the scientists who published the work be punished for it?
Adam Fetterman, a social psychologist from the University of Essex, thinks issues with replication of work often become too personal, when “It should be about the research and not about researcher”. He also comments that “There are always going to be things that are wrong in our science. No one should be blamed for having things be wrong or demonized for trying to correct them”.
He has a fair point. As a researcher myself, I know how difficult it can be to replicate work, and how the nature of science itself often means that the tiniest difference can have a dramatic effect on the outcome of an experiment. Sometimes a slight variance in the result makes very little difference – such as a higher or lower yield of compound – but sometimes the entire findings can be called into question. This can lead researchers to feel like frauds, or to fret over how their reputation will be affected. Should they feel this way, or is this merely part of the process? They’ve carried out their work and reported what they found; is it their fault that the result is anomalous?
Fetterman and his colleagues carried out a study to delve deeper into these ideas, and it appears that we assume far worse consequences when our work cannot be replicated than actually follow. Scientists did not judge other researchers as harshly as we’d expect when their work came into question, and they viewed academics who admitted any issues much more favourably than those who stood by their questionable results.
The nature of science, and of research in general, often means that unusual answers are thrown at us, and we can’t always be sure we’ve got it right the first time. The irreproducibility of some results can in fact be interesting in its own right. If the same experiment keeps throwing up different results, we might want to know why, and this can lead to the development and improvement of a procedure, or to the discovery of entirely new science.
I myself have a colleague who has carried out the same reaction 22 times. In around half of those attempts he has successfully synthesised his desired product in high conversion; in the others he has, apparently at random, had significantly less success, with lower conversions and inseparable mixtures. If the first two or three of those reactions hadn’t worked, he’d have given up altogether and assumed the reaction didn’t work. It just so happens that the first reaction worked perfectly, so he knew it could be done, but nothing seems to explain the variable results. This is just one example of how seemingly identically replicated experiments can give completely different results, and it would be really interesting to find out why.
We’ve all had fluke experiments that we’ve discounted as anomalous, but the issue of reproducibility is something that continues to pervade all scientific research, and we need to decide how we’re going to view it if we want to keep moving forward.