Ever since the 17th century, when modern scientific research was born, methods have been refined and improved. But for the most part, research works the same way now as it did then: observations are collected, then thoroughly analysed and interpreted. When other researchers can repeat the same experiment and obtain the same results, we speak of reproducibility and evidence – two important foundations of science.

But what role does the choice of analytical method play in research in ecology and evolutionary biology? A new study published in BMC Biology examined this question: 174 research teams each analysed the same two unpublished datasets – one on the relationship between the number of siblings and the growth of blue tits, and one on how grass cover and trees affect the growth of eucalyptus plants – and arrived at different results.
"The choice of analytical method turned out to play a major role. We also found that researchers did not agree on which methods were good or bad. It's a bit worrying that we researchers seem to have difficulty distinguishing between reliable and less reliable methods," says Jessica Abbott, professor of biology at Lund University, who participated in the study.
Some research results seem to be due to chance
For the blue tit dataset, the results were quite consistent. But there were also researchers who got completely opposite results depending on the statistical model they chose. For the plant dataset, the results were much more diverse. The findings highlight an important problem: if we are to trust research, we need to be able to scrutinise how results are arrived at and understand the basis on which researchers choose a particular method. It seems that some high-profile results are due, in whole or in part, to chance, because the same results are not obtained when the analysis is repeated.
"In this case, the focus was on how the datasets were analysed. The choice of statistical method is very much a matter of taste – there are rarely hard and fast rules. The study is important because it shows that the choice of methodology matters more than many people realise," says Abbott.
So what practical use can the new results have? Abbott hopes they will encourage researchers to try several different analytical methods before drawing conclusions. If different methods yield the same outcome, you can be more confident in the results.
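To see how two defensible analyses of the same data can disagree, consider a minimal simulation. The data and models below are hypothetical illustrations, not the study's actual datasets or methods: growth is made to depend negatively on brood size, but brood size is correlated with an unmeasured "territory quality" that also boosts growth, so a regression that omits the confounder points in the opposite direction from one that includes it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data loosely inspired by the blue tit example.
quality = rng.normal(size=n)             # unobserved territory quality (confounder)
siblings = quality + rng.normal(size=n)  # brood size, correlated with quality
growth = -0.3 * siblings + 1.0 * quality + rng.normal(scale=0.5, size=n)

def ols(columns, y):
    """Ordinary least squares; returns coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Analysis 1: growth ~ siblings (confounder omitted)
naive = ols([siblings], growth)[1]
# Analysis 2: growth ~ siblings + quality (confounder included)
adjusted = ols([siblings, quality], growth)[1]

print(f"naive slope:    {naive:+.2f}")     # positive: more siblings, more growth?
print(f"adjusted slope: {adjusted:+.2f}")  # negative: the true simulated effect
```

Both regressions are "reasonable" on their face, yet they support opposite conclusions – which is exactly why running and reporting several analyses, as the study suggests, is a useful robustness check.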
"It's about the credibility of science. If it is not possible to repeat the same study from the beginning, then researchers should at least try several methods of analysis before concluding that a result is certain. This has been discouraged by many scientists because there may be a risk of presenting only the analysis that gives the 'best' result. But if some results are less reproducible, this needs to be taken into account, for example before policy decisions are made," says Jessica Abbott.