New Canadian research shows that the highest-impact papers are increasingly being published outside the highest-impact journals.  Photo credit: rodrigovco via stock.xchng

“Congratulations on the new paper! By the way, what’s the impact factor of that journal?” Scientists get this question more often than they would care to mention. Despite numerous critiques since it was first developed in the 1960s, today the impact factor remains the gold standard for judging the reputation of a given scientific journal, and is often used in funding decisions, in some cases even to calculate scientists’ salaries. But according to new research from Université de Montréal, information technology is slowly rendering the impact factor irrelevant. Increasingly, the highest-impact papers are being published outside the highest-impact journals.

In its simplest form, the three-year impact factor is really just a ratio: take the number of papers published by a given journal in the two years previous to the current year, and call that B. Then, take the number of times in the current year that those same papers were cited by others and call that A. The ratio of A/B is (supposedly) a measure of the average number of citations a paper can expect to get by being published in that particular journal. In 2011, the impact factor of Nature was 36.28 and that for Science was 31.20, but one should not necessarily use these as benchmarks. Many perfectly respectable journals have an impact factor below 10, while some specialized journals have managed to crank theirs as high as 50 or more.
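The ratio described above is simple enough to sketch in a few lines of Python. The journal and its counts below are invented for illustration, not taken from the study:

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """Impact factor as described above: citations received this year (A)
    by papers published in the previous two years, divided by the number
    of those papers (B)."""
    return citations_this_year / papers_prev_two_years

# Hypothetical journal: 400 papers published over the previous two years,
# cited 1,200 times during the current year.
print(impact_factor(1200, 400))  # 3.0
```

The single number hides everything about how those 1,200 citations are distributed among the 400 papers, which is exactly the problem Larivière raises next.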

“One problem with the impact factor is that the distribution of citations is skewed,” says Vincent Larivière, assistant professor in the School of Library and Information Science at Université de Montréal and author of the latest study. “You might have 80 per cent of citations received by 20 per cent of the papers; most papers published in a journal won’t even receive the average.” In other words, the impact factor doesn’t actually measure what it’s supposed to. Other problems include the fact that some disciplines publish or cite more frequently than others: in engineering an impact factor of 3 might be pretty good, whereas in medicine it’s not so good. The same is true of impact factors across time. “Around 1900, it was common to have 4 or 5 references per paper,” says Larivière. “Today the average is 40.” This drives up impact factors across the board.
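Larivière’s 80/20 point is easy to see with toy numbers. In the sketch below (invented citation counts, not data from the study), a couple of highly cited “hits” pull the journal’s average well above what a typical paper actually receives:

```python
# Invented citation counts for 10 papers in a hypothetical journal:
# two heavily cited papers and eight that are rarely cited.
citations = [120, 60, 5, 4, 3, 3, 2, 2, 1, 0]

mean = sum(citations) / len(citations)  # the "impact factor" view
below_average = sum(1 for c in citations if c < mean)

print(mean)           # 20.0
print(below_average)  # 8 -- most papers fall below the journal average
```

Here two papers collect 90 per cent of the citations, and 8 of the 10 papers sit below the average that the impact factor advertises.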

Larivière is certainly not the first to criticize the impact factor, and alternatives have been suggested. My favourite is this one by humourist Adam Ruben, suggesting a system based on “whether your grandparents think they’ve heard of the journal.” But Larivière was interested in another effect: whereas in the past researchers subscribed directly to key journals in their field, today they use online databases like Web of Science, Scopus and Google Scholar to search the literature for relevant keywords. “Basically when you search for scientific information, your unit is no longer the journal, but the paper,” says Larivière. If the content is good and the keywords are well chosen, modern scientists can expect their paper to be found by a wide audience even without being published in a well-known journal.

In a paper published in the Journal of the American Society for Information Science and Technology, Larivière and his colleagues analysed a database of over 28 million papers and 819 million citations from 1902 to 2009. They tracked the correlation between the impact factor (the predicted number of citations per paper) and the actual number of citations per paper for each year. Interestingly, this correlation was never particularly strong: the r2 value ranged from 0.10 at the turn of the last century to a peak of about 0.30 in 1990 (an r2 of 1 would indicate a perfect correlation). About this time, online databases were first introduced, and the correlation between impact factor and actual citations began to decline; today it’s back where it was in the early 1980s. Another way to look at it was to examine how many of the top 10% most cited papers appeared in the top 10% most cited journals. Between 1990 and 2009 this figure dropped from 5.25 to 4.5 per cent.
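The correlation measure the study tracks can be sketched with a Pearson r computed from (journal impact factor, actual citations) pairs and then squared. The pairs below are invented to mimic the pattern Larivière describes, where a paper’s fate often diverges from its journal’s reputation:

```python
# Invented (journal impact factor, citations actually received) pairs.
pairs = [(30.0, 5), (30.0, 90), (8.0, 7), (8.0, 12),
         (3.0, 1), (3.0, 40), (1.5, 0), (1.5, 3)]

n = len(pairs)
xs = [p[0] for p in pairs]
ys = [p[1] for p in pairs]
mx, my = sum(xs) / n, sum(ys) / n

# Pearson correlation: covariance over the product of standard deviations.
cov = sum((x - mx) * (y - my) for x, y in pairs)
var_x = sum((x - mx) ** 2 for x in xs)
var_y = sum((y - my) ** 2 for y in ys)
r = cov / (var_x * var_y) ** 0.5

print(r ** 2)  # roughly 0.3 for this toy data -- a weak predictor
```

An r² near 0.3, like the study’s 1990 peak, means the journal’s impact factor explains only about a third of the variation in how often individual papers are cited.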

Despite its shortcomings and waning influence, the impact factor is still the simplest way to predict the future significance of a given paper. As a result, people will likely continue to watch Nature and Science more closely than other journals. But for those scientists who can’t get published there (i.e. most of them) Larivière’s findings offer some comfort. “Science and Nature have a broad audience, but the space that they give you for publication is smaller, and you won’t be able to go into details in the methods or the details in the implications of your results,” he says. “Ultimately, it’s a personal thing: the best journal for your paper is the journal where you will reach the best audience, not necessarily the biggest.”
