Criteria for funding and promotion lead to bad science


Scientists are trained to assess theories carefully by designing good experiments and building on existing knowledge. But there is growing concern that too many research findings may in fact be false. New research published on 10 November in the open-access journal PLOS Biology by psychologists at the universities of Bristol and Exeter suggests that this may happen because of the criteria used in funding science and promoting scientists, which, they say, place too much weight on novel, eye-catching findings.

Some scientists are becoming concerned that published results are inaccurate: a recent attempt by 270 scientists to reproduce the findings reported in 100 psychology studies (the Reproducibility Project: Psychology) found that only about 40 per cent could be reproduced.

This latest study shows that we shouldn't be surprised by this, because researchers are incentivised to work in certain ways if they want to further their careers, such as running a large number of small studies rather than a smaller number of larger, more definitive ones. But while this might be good for their careers, it won't necessarily be good for science.

Professor Marcus Munafò and Dr Andrew Higginson, researchers in psychology at the universities of Bristol and Exeter, concluded that scientists aiming to progress should carry out lots of small, exploratory studies because this is more likely to lead to surprising results. The most prestigious journals publish only highly novel findings, and scientists often win grants and get promotions if they manage to publish just one paper in these journals, which means that these small (but unreliable) studies may be disproportionately rewarded in the current system.

The authors used a mathematical model to predict how an optimal researcher trying to maximise the impact of their publications should spend their research time and effort. Scientific researchers have to decide what proportion of their time to invest in looking for exciting new results rather than confirming previous findings. They must also decide how much resource to invest in each experiment.

The model shows that the best thing for career progression is to carry out lots of small exploratory studies and no confirmatory ones. Even though each small experiment is less likely to detect a real effect when one exists, running many of them is likely to produce some false positives, which unfortunately are often published too.
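To make that trade-off concrete, here is a minimal sketch in Python. It is not the authors' model: it assumes a fixed participant budget, a made-up effect size and base rate of true effects, and simply counts how many "significant" results, real or spurious, come out of many small studies versus a few large ones.

```python
# Illustrative sketch only: a toy simulation of the trade-off described above,
# not the model from Higginson & Munafo (2016). The effect size, base rate,
# budget and sample sizes are assumptions chosen purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_studies(n_per_group, budget=6000, true_effect=0.3, base_rate=0.1, alpha=0.05):
    """Spend a fixed participant budget on studies of a given size and
    count how many 'significant' findings are real vs false positives."""
    n_studies = budget // (2 * n_per_group)
    hits, false_pos = 0, 0
    for _ in range(n_studies):
        effect_is_real = rng.random() < base_rate   # only some hypotheses are true
        d = true_effect if effect_is_real else 0.0
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(d, 1, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            hits += effect_is_real
            false_pos += not effect_is_real
    return n_studies, hits, false_pos

for n in (20, 150):  # many small studies vs a few large ones
    studies, hits, fp = run_studies(n)
    total = hits + fp
    print(f"n={n:>3}: {studies:>3} studies, {total} significant results, "
          f"{fp} of them false positives")
```

With these assumed numbers, the many-small-studies strategy typically produces more significant-looking results overall, but a much larger share of them are false positives, which is the pattern the researchers argue the current incentive structure rewards.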

Dr Higginson said: "This is an important issue because so much money is wasted doing research from which the results can't be trusted; a significant finding might be just as likely to be a false positive as actually be measuring a real phenomenon."

This wouldn't happen if a scientist's full body of publications, rather than one or two high-profile ones, mattered to their career, nor if novel findings weren't prized so much more highly than work that confirms previous findings, say the researchers.

So is there any way to overcome this problem of bad scientific practice? There could be immediate solutions, as Professor Munafò explained: "Journal editors and reviewers could be much stricter about good statistical procedures, such as insisting on large sample sizes and tougher statistical criteria for deciding whether an effect has been found."
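As a rough illustration of what "insisting on large sample sizes and tougher statistical criteria" implies in practice, the sketch below uses a standard power calculation; it is not taken from the study, and the effect sizes and thresholds are assumptions chosen for the example.

```python
# Rough illustration (not from the paper): how the required sample size grows
# when reviewers demand 90% power and a stricter significance threshold.
# The effect sizes (Cohen's d) and alpha levels below are assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.005):          # conventional vs stricter criterion
    for effect_size in (0.5, 0.3):   # medium and smallish effects
        n = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=0.9)
        print(f"alpha={alpha:<6} d={effect_size}: "
              f"about {int(round(n))} participants per group")
```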

There are already some encouraging signs: for example, a number of journals are introducing reporting checklists which require authors to state, among other things, how they decided on the sample size they used. Funders are also making similar changes to grant application procedures.

"The best thing for scientific progress would be a mixture of medium-sized exploratory studies with large confirmatory studies," said Dr Higginson. "Our work suggests that researchers would be more likely to do this if agencies and promotion committees rewarded asking important questions and good methodology, rather than surprising findings and exciting interpretations."

More information: Higginson AD, Munafò MR (2016) Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions. PLoS Biol 14(11): e2000995. DOI: 10.1371/journal.pbio.2000995

Journal information: PLoS Biology

