
This article continues my series of three articles on how to defend yourself from bad scientific communication perpetrated by non-scientific newspapers. The first post detailed the scientific article and the mechanism of citations. In this post I will detail the peer reviewing process, and two numbers, the Impact Factor and the h-Index, that give a very rough estimate of the authoritativeness of journals and scientists. I want to stress this point, and I will stress it even more: using these values gives a very rough evaluation, not an absolute verdict. It is just a potential signal that, together with other evidence, may give an insight into the trustworthiness of a scientific finding.

Peer reviewing

Bringing an article from a draft to a polished publication for a scientific journal requires going through a process called "peer reviewing". Peer reviewing means that your submission is scrutinized by other experts (known as referees) before it is accepted for publication. This process aims to satisfy the following needs:

  • filter out articles that are not appropriate for the journal (e.g. an article on lab synthesis submitted to a journal for computational sciences)
  • check whether the article satisfies the basic requirements of rigorous scientific investigation: that it is reproducible, verifiable, and makes proportionate claims
  • check whether mistakes were made in the procedure, such as incorrect collection of samples (e.g. checking the water quality of a lake by collecting samples from only one point of the lake, far from the source of pollution), incorrect analysis of data (e.g. not enough samples, like testing a drug on only one person), introduction of errors (e.g. using an unreliable analysis kit), and many others.
  • request additional proof for a claim to be considered valid, typically because the claim made by the authors is too general for the amount of data available.
  • point out similar techniques, or missing citations.
  • ask the authors general questions about some aspects of the paper.

Peer review is generally done anonymously: when you submit an article, the journal editor chooses two or three referees considered suitable to give a sensible opinion on your claims. The editor collects their feedback and forwards it to you as anonymous reports. Your name, on the other hand, is generally known to the referees, although this may not always be the case.

Claiming that something has been peer reviewed is not necessarily a guarantee of certified scientific quality. Suppose I decide to start a journal, give it a sciencey-sounding name, and have my mom (who is not a scientist) do the peer review. Although this would technically be a peer reviewed journal, what is published in it will not necessarily be authoritative. Thus, claiming something is published in a "peer reviewed journal" says little about its scientific value and correctness: it just says that someone else took some kind of look at the article before publication. Sounds far-fetched? It happened (Link 1, Link 2).

Even with recognized journals, the peer reviewing process can vary from very strict to lax, depending on editorial policies and the choice of referees. Different journals have different levels of strictness: some journals want just the "cream of the crop" of science, being ruthless about what gets on their pages, and selecting not only for scientific excellence but also for interdisciplinary impact. Journals such as Nature and Science belong to this category. Other journals may focus on a very narrow scientific field, with editors delegating to a pool of highly reputed, very tough referees who expect a certain level of importance in your claims and tear your paper to shreds before accepting it. Finally, you may have journals accepting articles with low impact on the discipline, or even with experimental or methodological errors.

Needless to say, the process of peer reviewing is not perfect. There are many objections to peer review as a process, and we won't debate them here, but at the moment it's the best compromise for the task. The aim is basic filtering and methodological quality, not necessarily certifying the correctness of the claim or the reported experimental values. The referee does not try to reproduce the experiment: he/she just checks whether the results could be reproduced from the given information, and whether the paper makes scientific sense and provides new information to the discipline. The scientific community will then evaluate the claim, comparing it to other methods, and eventually uncovering a new insight, a methodological error, or intentionally fraudulent activity. The latter is taken very seriously by the scientific community. I've seen Ph.D. titles revoked and department heads resign over fraudulent scientific activity performed by others under their direct management.

Measuring (rough) journal authority: the Impact Factor

The authoritativeness of a journal comes from the reputation it has accumulated inside and outside the community it writes for; this community is made of people who are both readers and authors at the same time. One rough way to measure the reputation of a journal is the Impact Factor (IF). Before explaining how it works, remember that:

  • it is not a perfect method
  • it is not the only one
  • it acts "by proxy", meaning that it measures something else which is generally assumed to be correlated with reputation, but this correlation is not guaranteed

This post is about giving rough tools, and the Impact Factor is an appropriate tool to get at least a rough idea of a journal's reputation.

In part 1 of this series, we discussed citations. An article that is cited a lot by other papers had some impact on the scientific community, which reacted by investigating more; an article that is cited rarely was received with a "meh" and people moved on. A scientific journal publishes tens of articles in every issue, and each of these articles will eventually be cited by others (or not) in the near future. The Impact Factor of a journal can be understood as the average number of citations an article in that journal collects, averaged over the previous two years.

For example, suppose a journal publishes 300 articles in 2006 and 200 articles in 2007, for a total of 500 articles in this two-year period. In 2008, you count the total number of citations these 500 articles collected from other articles in the same or other journals. Suppose this number is 1500. Then the impact factor of that journal in 2008 is 1500/500 = 3.0. This value is the average number of citations collected by a single article in that journal.
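
To make the arithmetic explicit, here is a minimal sketch in Python; the article and citation counts are the hypothetical numbers from the example above, not real data.

```python
def impact_factor(citations_received, articles_published):
    """Impact Factor for a given year: citations received that year
    by the articles a journal published in the previous two years,
    divided by the number of those articles."""
    return citations_received / articles_published

# Hypothetical journal: 300 articles in 2006 + 200 in 2007,
# which together collected 1500 citations during 2008.
articles_2006_2007 = 300 + 200
citations_2008 = 1500
print(impact_factor(citations_2008, articles_2006_2007))  # 3.0
```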

How do you increase the Impact Factor? Publish very few articles which get cited a lot. How do you decrease it? Publish a lot of poorly cited articles. Peer reviewing strictness can influence the impact factor, as can an editorial shift towards review articles (which are cited more). It is also important to note that impact factors cannot be compared across disciplines: the highest impact factors you may find in theoretical chemistry journals are lower than typical impact factors found in biology journals.

To sum up, the Impact Factor measures a network of citations: it gives a measure of the interest the scientific community has in the average paper published in that journal, not necessarily how good the average article is, although it may be claimed that these two concepts are somehow correlated.

Where do you find Impact Factors? Generally on the journal website. Alternatively, you can query the ISI Web Of Knowledge database, which requires a subscription. You may therefore need an academic friend or a visit to your university library.

Measuring (rough) scientist authority: the h-Index

As the Impact Factor measures (with caveats) the importance of a journal, the h-Index measures (with caveats) the importance of a scientist. A scientist's career is about producing papers, either by himself in the first years of his career, or through others, such as Ph.D. students, postdocs, and collaborators. Clearly, as a scientist becomes more experienced and more involved in scientific progress, he accumulates more articles, and more citations from other colleagues working in the field. The h-Index addresses both these factors at the same time.

The h-Index is a number, and I will explain its meaning with an example: I have an h-Index of 7 (see citation metrics), not high, but in line with friends who did more research than I did. This value of 7 means that I have 7 publications with at least 7 citations each. I actually have 15 publications, but the remaining 8 have fewer than 7 citations. More generally, an h-Index of N means that the researcher has N papers with at least N citations each.

The h-Index is far from perfect, but its point is to measure the cumulative productivity and visibility of a researcher in his field. Let's look at two limiting cases to understand why (a short sketch of the computation follows the list):

  • A researcher publishes 100 papers in his career, but only one of his articles receives citations: 2 of them. All his remaining articles are not cited. His h-Index is therefore 1: he has one article with at least one citation (two, in fact). His h-Index is not 2, because he does not have two articles with at least 2 citations each.
  • A young Ph.D. student publishes one disruptive paper collecting 200 citations. His h-Index is 1, because he has one paper with at least one citation. As in the previous case, we see how the stress is on both productivity and impact at the same time.
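
As a minimal sketch, assuming we have the list of citation counts for each of a researcher's papers (the counts below are the hypothetical ones from the two limiting cases above), the h-Index can be computed like this:

```python
def h_index(citations_per_paper):
    """h-Index: the largest N such that the researcher has
    N papers with at least N citations each."""
    counts = sorted(citations_per_paper, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # there are still `rank` papers with >= `rank` citations
        else:
            break
    return h

# The two limiting cases above:
print(h_index([2] + [0] * 99))   # 1 -> 100 papers, only one of them cited
print(h_index([200]))            # 1 -> a single, highly cited paper
print(h_index([5, 5, 5, 5, 5]))  # 5 -> five papers with 5 citations each
```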

There are many objections to be made to the h-Index. The beautiful and very thorough post "Who Is Today's Einstein? An Exercise In Ranking Scientists" by Johannes Koelman explains in great detail what the problems of ranking scientists are, and why the h-Index is flawed. It compares, in particular, a very young Einstein-like genius (such as the one above, 1 paper with 200 citations, h-Index = 1) who loses his chance of continuing his scientific career, to Mr. Mediocre, a guy with 5 papers having 5 citations each (h-Index = 5). I repeat: the h-Index gives a measure of the productivity and impact of a scientist, which may represent his authoritativeness, especially if he has been in the field for a long time. Note the additional point that the h-Index cannot be compared across disciplines, because it depends on the number of citations, which in turn depends on the size of the scientific community in your field.

Summing up

In this post, I described two rough metrics to evaluate journals (the Impact Factor) and researchers (the h-Index). These two metrics are far from perfect, but they may give a signal about the authoritativeness of a scientific claim, by checking how the community responds to the general level of a given journal or researcher.