Papers that cannot be replicated are cited 153 more times because their findings are interesting
University of California - San Diego
Papers in leading psychology, economics and general science journals that fail to replicate, and are therefore less likely to be true, are often among the most cited papers in academic research, according to a new study by the University of California San Diego's Rady School of Management.
Published in Science Advances, the paper explores the ongoing "replication crisis," in which researchers have discovered that many findings in the social sciences and medicine don't hold up when other researchers try to repeat the experiments.
The paper reveals that findings from studies that cannot be verified when the experiments are repeated have a bigger influence over time. The unreliable research tends to be cited as if the results were true long after the failure to replicate is published.
"We also know that experts can predict well which papers will be replicated," write the authors Marta Serra-Garcia, assistant professor of economics and strategy at the Rady School and Uri Gneezy, professor of behavioral economics also at the Rady School. "Given this prediction, we ask 'why are non-replicable papers accepted for publication in the first place?'"
Their possible answer is that the review teams of academic journals face a trade-off: when the results are more "interesting," they apply lower standards regarding their reproducibility.
The link between interesting findings and non-replicable research can also explain why such work is cited at a much higher rate: the authors found that papers that failed to replicate accumulate, on average, 153 more citations than papers that replicated successfully.
"Interesting
or appealing findings are also covered more by media or shared on platforms
like Twitter, generating a lot of attention, but that does not make them
true," Gneezy said.
Serra-Garcia and Gneezy analyzed data from three influential replication projects, which tried to systematically replicate the findings in top psychology, economics and general science journals (Nature and Science). In psychology, only 39 percent of the 100 experiments replicated successfully. In economics, 61 percent of the 18 studies replicated, as did 62 percent of the 21 studies published in Nature/Science.
With the findings from these three replication projects, the authors used Google Scholar to test whether papers that failed to replicate are cited significantly more often than those that replicated successfully, both before and after the replication projects were published. The gap was largest for papers published in Nature/Science: non-replicable papers accumulated, on average, 300 more citations than replicable ones.
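To make the comparison concrete, here is a minimal sketch, with entirely made-up numbers, of the kind of group comparison the authors describe: average citation counts for papers that replicated versus those that did not. The records and values below are hypothetical illustrations, not the study's actual data or code.

```python
# Illustrative only: compare mean citation counts between papers that
# replicated and papers that did not. All numbers are hypothetical.
from statistics import mean

# Each record: (paper_id, replicated successfully?, total citations)
papers = [
    ("A", True, 120), ("B", False, 410),
    ("C", True, 95),  ("D", False, 305),
    ("E", False, 280), ("F", True, 150),
]

replicated = [cites for _, ok, cites in papers if ok]
failed = [cites for _, ok, cites in papers if not ok]

gap = mean(failed) - mean(replicated)
print(f"Mean citations, replicated: {mean(replicated):.0f}")
print(f"Mean citations, failed:     {mean(failed):.0f}")
print(f"Citation gap (failed - replicated): {gap:.0f}")
```

In the study, a positive gap of this kind persisted even after controlling for characteristics of the papers, as described next.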
When the authors took into account several characteristics of the replicated studies, such as the number of authors, the share of male authors, the details of the experiment (location, language and online implementation) and the field in which the paper was published, the relationship between replicability and citations was unchanged.
They also show that the impact of such citations grows over time. Yearly citation counts reveal a pronounced gap between papers that replicated and those that did not: on average, papers that failed to replicate accumulate 16 more citations per year. This gap remains even after the replication projects were published.
"Remarkably,
only 12 percent of post-replication citations of non-replicable findings
acknowledge the replication failure," the authors write.
An inaccurate paper published in a prestigious journal can have repercussions for decades. For example, the study Andrew Wakefield published in The Lancet in 1998 turned tens of thousands of parents around the world against the measles, mumps and rubella vaccine because of an implied link between vaccination and autism. The incorrect findings were retracted by The Lancet 12 years later, but the claim that autism is linked to vaccines persists.
The authors added that journals may feel pressure to publish interesting findings, and so do academics. For example, most academic institutions use citation counts as an important metric when deciding whether to promote a faculty member.
This may be the source of the "replication crisis," first recognized in the early 2010s.
"We
hope our research encourages readers to be cautious if they read something that
is interesting and appealing," Serra-Garcia said. "Whenever
researchers cite work that is more interesting or has been cited a lot, we hope
they will check if replication data is available and what those findings
suggest."
Gneezy added, "We care about the field and producing quality research, and we want it to be true."