Last week, Domingo Docampo, Professor of Signal Theory at the University of Vigo and an old friend of the EC3 group, visited us. As he usually does when he comes to Granada, he gave a magnificent research seminar, co-organized by U-CHASS and #YoSigo. This time, he explored the complicated world of science, focusing on citation cartels and their relationship with scientific evaluation practices. He titled his presentation «Citation Cartels and Scientific Evaluation: Chronicle of a Dangerous Discrepancy», and showed us, through the example of Mathematics (his own field of research), how the value of citations has deteriorated in recent times. In his opinion, assessing research quality requires a new way of understanding and measuring its most widely used indicator: citations.
Drawing on his expertise in Mathematics, Docampo demonstrated that, in this field, the researchers with the most prestigious awards are no longer the most cited. He established a clear relationship between what he called «citation cartels»—groups of researchers who agree to indiscriminately inflate citations among themselves—and the editorial practices of questionable journals that accept their manuscripts for publication despite countless unjustified references. He also expressed deep concern about how these trends could affect an entire new generation of scientists.
Docampo’s talk was as interesting as it was worrying, given the evidence he presented, and provocative, judging by the heated debate that followed. One conclusion many of us reached is that fraudulent publishing and citation practices begin with the impossible standards imposed by evaluation systems. In a context where advancing in a scientific career requires publishing constantly in Q1 journals and becoming ever more highly cited, the temptation to distort one’s output becomes a survival strategy for many. Recognizing the system’s flaws and exposing unethical cases are important steps in the right direction, but at a time when Artificial Intelligence is clumsily bursting into paper writing, devising concrete solutions is more urgent than ever. Unfortunately, no one seems to have the answer, national evaluation agencies and major publishers included.
Docampo advocated for a new approach to scientific evaluation, one in which the responsibility for making the necessary changes does not fall solely on, for example, the Spanish ANECA (National Agency for Quality Assessment and Accreditation), but also on the teachers and researchers who make such decisions in their own spheres of action. How can we urge younger researchers to be patient and strive for quality when their CVs do not grow in publications as quickly as the system demands? How can we explain to them that science takes time, that thinking and reflecting carefully is not only advisable but crucial to their development, while we simultaneously judge them by the number of citations they have received? Given such contradictions, and the manipulation of quantitative indicators with little or no consequence, the key would seem to lie in qualitative methods, although these can be very costly in time and human resources. Could the solution be found in combining both approaches, aided by Artificial Intelligence? We are undoubtedly at a breaking point in scientific evaluation, which is increasingly powerless against fraudulent practices and must take the lead in concrete, creative decision-making to promote the science we want and, consequently, the world we want.