However, the use of fake peer reviewers has increased tenfold over the past decade. There has also been an eightfold rise in publications linked to so-called “paper mills”, which are businesses that provide fake papers for a fee.
Chart (The Conversation/Datawrapper; source: Retraction Watch)

In the past decade, there have been more than 39,000 retractions, and the annual number of retractions is growing by around 23% each year.
Nearly half the retractions were due to issues related to the authenticity of the data. For example, in August the United States Office of Research Integrity found that Richard Eckert, a senior biochemist at the University of Maryland, Baltimore, faked data in 13 published papers. Four of these papers have been corrected, one has been retracted and the remainder are still awaiting action.
Plagiarism was the second most common reason research papers were retracted, accounting for 16% of retractions.
Fake peer review was another reason why research papers were retracted.
Typically, when a publication is submitted to a journal, it undergoes peer review by experts in the same field. These experts provide feedback to improve the quality of the work.
In 2022, up to 2% of all publications were from paper mills.
Genuine mistakes in the scientific process accounted for only around 6% of all retractions in the last decade.
More pressure, more mistakes
One reason for the surge in retractions over the last decade may be that we are getting better at detecting suspicious data.
Digital publishing has made it easier to detect potential fabrication, and more scientists are making a brave stand against these dubious practices. No doubt, the current number of retractions is an underestimate of a much larger pool.
But the intensification of the “publish or perish” culture within universities also plays a major role.
Nearly all academic staff are required to meet specific publication quotas for performance evaluations, while institutions themselves use publication output to boost their standing: high publication counts and citations lift a university’s position in global rankings, attracting more students and generating income from teaching.
The prevailing reward system in academia often prioritises publication quantity over quality. When promotions, funding and recognition are tied to the number of papers published, scientists may feel pressured to cut corners, rush experiments, or even fabricate data to meet these metrics.
Changing the model
Initiatives such as the San Francisco Declaration on Research Assessment are pushing for change. The declaration advocates evaluating research on its quality and societal impact rather than on journal-based metrics such as impact factors or citation counts.
A shift in journal policies to prioritise the sharing of all experimental data would also enhance scientific integrity, allowing researchers to replicate experiments and verify others’ results.
Image: old issues of Science lined up on a library shelf. Academics face increasing pressure to publish journal articles to advance their careers. Protasov AN/Shutterstock
Also, universities, research institutions and funding agencies need to improve their due diligence and hold those responsible for misconduct accountable.
Including a simple question such as, “Have you ever had or been involved in a retracted paper?” on grant applications or academic promotions would improve the integrity of research by deterring unethical behaviour. Dishonest answers could be easily detected, thanks to the availability of online tools and databases such as Retraction Watch.