Are journal citation indices the right measure of research evaluation? What are the problems and alternatives? 

A

Journal citation indices are used to assess the quality of researchers and the importance of their work, but they have been criticized on several grounds: they fall into statistical traps, fail to reflect the distinct characteristics of each discipline, and skew evaluation toward a small number of popular journals. To address these problems, the scientific community is searching for new metrics.

 

The scientific community uses the journal citation index to evaluate the quality of researchers and the importance of their papers. This practice, however, has come under increasing criticism in recent years. In May 2013, more than a hundred science and technology researchers gathered in San Francisco to issue the San Francisco Declaration on Research Assessment, and tens of thousands of researchers have since joined the movement. They point out fatal flaws in the journal citation index-based methodology. In this article, we take a closer look at the criticisms of journal citation indexes and argue that they are not the right way to evaluate scientific research.

The journal citation index is a number that quantifies the impact of a journal. It was originally designed for librarians, who needed to gauge the relative importance of candidate journals in order to decide which ones to subscribe to. The calculation is simple. Suppose a journal called Science and Technology published 20 articles in the last two years, and those articles have been cited a total of 200 times; its citation index is then 200/20, or 10. A citation index of 10 means that articles published in Science and Technology over the last two years have been cited 10 times on average. In this way, the journal citation index is a quantitative measure of a journal’s importance.
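To make the arithmetic concrete, here is a minimal sketch in Python. The journal name and the citation figures are the hypothetical ones from the example above, not real data.

```python
# Minimal sketch of the journal citation index calculation,
# using the hypothetical "Science and Technology" figures above.
citations_last_two_years = 200  # total citations received by recent articles
articles_last_two_years = 20    # articles published in the last two years

citation_index = citations_last_two_years / articles_last_two_years
print(citation_index)  # 10.0 -- recent articles averaged 10 citations each
```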

The problem arises when this journal-level number is used to assess the importance of individual articles. In practice, a journal’s citation index is treated as the impact score of every article it publishes. For example, if there are two journals, Science and Technology with a citation index of 10 and Monthly Engineering with a citation index of 90, every article in Science and Technology is scored as 10 and every article in Monthly Engineering as 90. The scoring naturally extends to the authors of those papers: Researcher A, who published in Science and Technology, receives a score of 10, while Researcher B, who published in Monthly Engineering, receives a score of 90. Regardless of the quality of the researchers or the originality of the papers, an 80-point gap opens up between A and B.
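As a toy illustration of this scoring rule, the sketch below assigns every paper, and by extension its author, the index of its journal. The names and scores are the hypothetical ones from the running example.

```python
# Hypothetical journal indices from the running example.
journal_index = {"Science and Technology": 10, "Monthly Engineering": 90}

# Under index-based evaluation, every paper simply inherits
# the citation index of the journal that published it.
papers = [
    {"author": "A", "journal": "Science and Technology"},
    {"author": "B", "journal": "Monthly Engineering"},
]
for paper in papers:
    score = journal_index[paper["journal"]]
    print(paper["author"], score)  # A 10, B 90 -- an 80-point gap
```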

The first problem with journal citation indexes is the “statistical trap”. A journal citation index is an average, and articles published in the same journal can have very different citation counts; the impact of an individual paper is not simply proportional to the citation index of the journal that published it. For example, A’s article appears in a journal with a citation index of 10, but it may actually have been cited dozens or hundreds of times; the average is dragged down to 10 because the other articles in Science and Technology are rarely cited. Conversely, B’s article may have been cited only once or twice, yet it benefits from the fact that the other articles in Monthly Engineering are cited so heavily that the journal’s index jumps to 90. In this situation it is pointless to compare studies A and B by journal citation index; it makes far more sense to compare the citation counts of the two papers themselves, which would show that A has much the higher impact. According to the San Francisco Declaration, fewer than 25% of the articles in a journal account for 90% of all its citations, so some articles inevitably lose out from averaging while others gain.
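A small numerical sketch, with entirely made-up citation counts, shows how the trap arises: a couple of heavily cited papers can drag a journal’s average far away from what a typical article in it actually receives.

```python
from statistics import mean, median

# Made-up citation counts for 20 articles in one journal:
# two blockbuster papers and many rarely cited ones.
citations = [850, 620, 30, 12, 8, 5, 4, 3, 3, 2,
             2, 2, 2, 1, 1, 1, 1, 1, 1, 1]

print(mean(citations))    # 77.5 -- the journal-level average looks high
print(median(citations))  # 2.0  -- but a typical article is cited twice
```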

Second, evaluation based on journal citation indexes does not reflect the distinct characteristics of each discipline. In medicine and biology, for example, once a theory is proposed, numerous experiments are conducted to validate it; because the clinical trials connected to a single paper are repeated in subsequent studies, the citation indexes of biology and medicine journals run higher than those in other natural science and engineering fields. In pure mathematics, on the other hand, a single paper is usually complete in itself, so math papers are cited relatively few times and the citation indexes of pure math journals are low. Citation counts also stay low in highly specialized fields simply because there are few researchers to do the citing, whereas a journal in a broad, active field with many researchers will accumulate a relatively high number of citations. The problem is that index-based evaluation takes none of these fundamental differences in the nature of each discipline into account.

The third problem is that the journal citation index is biased toward a small number of popular journals. Researchers about to publish a paper naturally want their work to appear in the world’s most prestigious journals, such as Cell, Nature, and Science, which have the highest citation indexes. This skew toward a few journals can lead researchers to believe that getting published matters more than the research itself, and the culture of recognizing only papers that appear in top journals has become a global phenomenon. If this “prestige mentality”, which devalues the diligent research process and rewards only tangible results, continues, it can distort the very essence of science. In addition, some journals encourage “self-citation”, the citation of articles published in their own pages, in order to inflate their own index. In these ways, journal citation indexes create unreasonable and unethical competition.

Of course, journal citation indexes are not without their merits. One reason they are so popular is that they provide a quick and easy way to evaluate researchers. The editors of each journal act as “expert evaluators”, rapidly sorting the important and noteworthy research out of the flood of research output. In today’s rapidly changing and expanding scientific community, this is a benefit that cannot be ignored.

However, as we have seen, this convenience comes with serious problems. The first is the statistical trap: the journal citation index can diverge widely from the number of citations an individual article actually receives. The second is that citation counts do not reflect the characteristics of individual disciplines; in some fields, citation counts have little to do with impact. The last is that the methodology hands a monopoly to a few top journals, which is destructive to the scientific community.

To overcome these problems and put scientific development on a sounder footing, the scientific community is currently searching for new evaluation criteria. The simplest proposal is to count the citations of an individual researcher’s papers directly, an alternative that compensates for the statistical trap. Another is to apply a correction index that reflects the specifics of each discipline: using the average citation count of the top 20% of journals in a given field, together with the index of the journal that published the paper, as a correction factor allows scores to be normalized across disciplines. A more radical alternative is to strengthen peer review and develop qualitative metrics rather than quantitative ones such as journal indexes or citation counts. What the scientific community needs now is self-reflection and communication in order to devise reasonable evaluation methods.
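One plausible reading of that correction index, sketched below with made-up figures, is to express each journal’s citation index as a fraction of the average index of the top 20% of journals in its field. The field names, numbers, and helper functions are illustrative assumptions, not an established formula.

```python
from statistics import mean

# Made-up journal citation indices for two fields with very
# different citation cultures.
field_indices = {
    "biology": [40, 35, 30, 12, 8, 5, 4, 3, 2, 1],
    "pure_math": [4.0, 3.5, 3.0, 1.2, 0.8, 0.5, 0.4, 0.3, 0.2, 0.1],
}

def field_baseline(indices, top_fraction=0.2):
    """Average citation index of the top 20% of journals in a field."""
    top_n = max(1, int(len(indices) * top_fraction))
    return mean(sorted(indices, reverse=True)[:top_n])

def normalized_index(journal_index, indices):
    """Express a journal's index relative to its field's baseline."""
    return journal_index / field_baseline(indices)

# A mid-tier journal in each field lands on a comparable score,
# even though the raw indices differ by a factor of ten.
print(normalized_index(12, field_indices["biology"]))     # 0.32
print(normalized_index(1.2, field_indices["pure_math"]))  # ~0.32
```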

 

About the author

Blogger

I'm a blog writer. I like to write things that touch people's hearts. I want everyone who visits my blog to find happiness through my writing.
