Should anyone be worried about the number of scientific research papers that are never cited?

In "The science that's never been cited", Richard Van Noorden looks for Nature at the myth that a large fraction of scientific research goes uncited. Compiling a number of accounts from researchers who have looked into citation rates, the piece concludes that around 10% of scientific research papers in journals tracked by Web of Science go uncited, a bit more than this if self-citations are excluded. That number varies greatly by field, with uncitedness much higher in engineering disciplines.

None of these numbers comes close to the urban myth about citations, which holds that half of all papers go uncited. But really, I've heard that said about humanities papers, not scientific papers, and Van Noorden's article acknowledges that research in the humanities tends to be more independent, with a higher fraction of work that goes uncited by other workers. The Nature piece traces the urban myth to Science:

The idea that the literature is awash with uncited research goes back to a pair of articles in Science, one in 1990 and another in 1991. The 1990 report noted that 55% of articles published between 1981 and 1985 hadn't been cited in the 5 years after their publication. But those analyses are misleading, mainly because the publications they counted included documents such as letters, corrections, meeting abstracts and other editorial material, which wouldn't usually get cited. If these are removed, leaving only research papers and review articles, rates of uncitedness plummet. Extending the cut-off past five years reduces the rates even more.

Should we even worry about citations? After all, basic research is its own reward. The Nature article at some points reads like a support session for scientists who aren't feeling the citation love, as it goes through one reason after another why research may be valuable even if no one ever cites it.

Most of the reasons discussed in the article are totally legitimate, and should be part of any conversation about the value of lines of research that don’t provoke lots of additional work on exactly the same model.

Still other articles might remain uncited because they close off unproductive avenues of research, says Niklaas Buurma, a chemist at Cardiff University, UK. In 2003, Buurma and colleagues published a paper about ‘the isochoric controversy’ — an argument about whether it would be useful to stop a solvent from contracting or expanding during a reaction, as usually occurs when temperatures change. In theory, this technically challenging experiment might offer insight into how solvents influence chemical reaction rates. But Buurma’s tests showed that chemists don’t learn new information from this type of experiment. “We set out to show that something was not worth doing — and we showed it,” he says. “I am quite proud of this as a fully uncitable paper,” he adds.

A good amount of research in human evolution may fall into this category.

Case-control studies on excavation practices, for example, are very rare. It would be tremendously valuable to know whether some common variations in practice make a difference to data recovery, or whether innovations might result in better data. Failed experiments that nonetheless reinforce the value of existing practice are valuable knowledge, but probably shouldn’t need to be cited again and again.

All in all, it’s wrong to assume that “landmark findings” are what give rise to lots of citations. The most cited papers are those that establish new experimental (or computational) methods, and those that provide datasets useful for other researchers. Those are very good things, but not the only things!