This article continues my series of three articles on how to defend yourself from bad scientific communication perpetrated by non-scientific newspapers. The first post detailed the scientific article and the mechanism of citations. In this post I will detail the peer review process and two numbers, the Impact Factor and the h-Index, which provide a very rough estimate of the authoritativeness of journals and scientists. I want to stress this point, and I will stress it again: using these values gives a very rough evaluation, not an absolute verdict. It is just a potential signal that, together with other evidence, may give an insight into the trustworthiness of a scientific finding.
Bringing an article from a draft to a polished publication for a scientific journal requires going through a process called "peer reviewing". Peer reviewing means that your submission is scrutinized by other experts (known as referees) before it is accepted for publication. This process aims to satisfy the following needs:
- filter out articles that are not appropriate for the journal (e.g. an article on lab synthesis submitted to a journal of computational sciences)
- check whether the article satisfies the basic requirements of rigorous scientific investigation: reproducibility, verifiability, and claims proportionate to the evidence
- check whether mistakes were made in the procedure, such as incorrect collection of samples (e.g. checking the water quality of a lake by collecting samples from only one point, far from the source of pollution), incorrect analysis of data (e.g. too few samples, like testing a drug on only one person), introduction of errors (e.g. using an unreliable analysis kit), and many others.
- request additional proof for a claim to be considered valid, typically because the claim made by the authors is too general for the amount of data available.
- point out similar techniques, or missing citations.
- ask the authors general questions about some aspects of the paper.
Peer review is generally done anonymously: when you submit an article, the journal editor chooses two or three referees considered suitable to give a sensible opinion on your claims. The editor collects their feedback and forwards it to you as anonymous reports. Your name, on the other hand, is generally known to the referees, although this may not always be the case.
Claiming that something has been peer reviewed is not necessarily a guarantee of certified scientific quality. Suppose I decide to start a journal, give it a sciencey-sounding name, and have my mom (who is not a scientist) do the peer review. Although this would technically be a peer-reviewed journal, what is published in it will not necessarily be authoritative. Thus, claiming something was published in a "peer-reviewed journal" says little about its scientific value and correctness: it just says that someone else took some kind of look at the article before publication. Sounds far-fetched? It happened (Link 1, Link 2).
Even among recognized journals, the peer review process can vary from very strict to lax, depending on editorial policies and the choice of referees. Some journals want just the "cream of the crop" of science, being ruthless about what gets onto their pages and selecting not only for scientific excellence but also for interdisciplinary impact. Journals such as Nature and Science belong to this category. Other journals may focus on a very narrow scientific field, with editors delegating to a pool of highly reputed, very tough referees who demand a certain level of importance in your claims and tear your paper to shreds before accepting it. Finally, there are journals that accept articles with low impact on the discipline, or even with experimental or methodological errors.
Needless to say, the peer review process is not perfect. There are many objections to peer review as a process, and we won't debate them here, but at the moment it's the best compromise for the task. The aim is basic filtering and methodological quality, not necessarily the correctness of the claim and the obtained experimental values. The referee does not try to reproduce the experiment: they just check whether the results could be reproduced from the given information, and whether the paper makes scientific sense and provides new information to the discipline. The scientific community will then evaluate the claim, comparing it to other methods, and eventually uncovering a new insight, a methodological error, or intentionally fraudulent activity. The latter is taken very seriously by the scientific community: I've seen Ph.D. titles revoked and department heads resign over fraudulent scientific activity performed by others under their direct management.
In this post, I described two rough metrics to evaluate journals (the Impact Factor) and researchers (the h-Index). These two metrics are far from perfect, but they may give a signal about the authoritativeness of a scientific claim, by checking how the community responds to the general level of a given journal or researcher.
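For readers curious about how these two numbers are actually computed, here is a minimal sketch in Python using their standard definitions: a researcher has h-index h if h of their papers have received at least h citations each, and a journal's Impact Factor for a given year is the number of citations received that year by its articles from the previous two years, divided by the number of citable items it published in those two years. The citation counts below are made up for illustration.

```python
def h_index(citations):
    """Largest h such that the researcher has h papers
    with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

def impact_factor(citations_received, citable_items):
    """Citations in year Y to articles published in years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_received / citable_items

# Hypothetical researcher: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))   # → 4 (four papers have at least 4 citations)

# Hypothetical journal: 3000 citations to 1000 citable items.
print(impact_factor(3000, 1000))   # → 3.0
```

Note how the h-Index rewards sustained output rather than a single highly cited paper: one paper with 1000 citations still gives an h-index of only 1, which is exactly why the metric is rough and should be read together with other evidence.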