Psychology papers that fail to replicate are less likely to be cited, according to a new study.
The paper’s authors say their findings suggest a possible upside of the purported replication crisis: that scholars might be self-correcting their fields.
The study, published in the Proceedings of the National Academy of Sciences, analyses a sample of 228 psychology articles that were published between 1978 and 2021 and later failed to replicate.
In contrast to previous studies, which found mixed results on the relationship between replication failures and citations, the new analysis found that failure to replicate predicted a decline in future citations, and that this effect grew over time.
The paper says that scholars have long been worried about a replication crisis in certain fields, reflecting concerns about the reliability and validity of scientific findings.
But the findings suggest a possible upside, lead author Cory Clark, a behavioural scientist at the University of Pennsylvania, told Times Higher Education.
“Depending on the quality of a replication effort, failed replications should cast some doubt on original papers and subsequently reduce scholars’ reliance on original papers in their own theorising. Our findings suggest this might be happening and that scholars might be formulating their own hypotheses on more reliable – or, at least, less unreliable – research,” she said.
“By discovering which findings and methodologies are unreliable, scholars can pursue research questions and discover truth more efficiently.”
Over a 14-year post-publication period, the authors estimated that the publication of a failed replication was associated with an average citation decline of 14 per cent for the original article.
The findings suggest that scholars notice effects that fail to replicate and subsequently reduce their reliance on non-replicable findings in their own work.
“These findings suggest that the publication of failed replications may contribute to a self-correcting science by decreasing scholars’ reliance on unreplicable original findings,” the authors write.
Co-author Paul Connor, a postdoctoral scholar in Pennsylvania’s department of psychology, said that after papers failed to replicate, their yearly citations declined gradually relative to papers that had not failed to replicate.
“Extrapolating these different trajectories into the future, our models predict that the difference in citations associated with failing to replicate will continue to become larger and larger over time,” he said.
He added that the most plausible explanation is that information about failed replications spreads, and is acted on, only gradually throughout the research community.
The researchers said that future research should test whether citation declines were steeper for papers that “catastrophically failed” to replicate compared with papers with only moderate or ambiguous evidence of failed replication.