Introduction
The media are reporting that a study published on February 18 in JAMA Internal Medicine shows that ivermectin does not work to prevent severe disease or death among COVID-19 patients, but that’s not what the study shows.
Here are some example media headlines about the study:
- “Ivermectin doesn’t prevent severe disease from Covid-19, new study finds” — CNN
- “Ivermectin ineffective in preventing severe COVID-19-study” — Reuters
- “Ivermectin Does Not Stop Progression to Severe COVID: Randomized Trial” — Medscape
- “Ivermectin Flops Again for COVID, This Time in High-Risk Adults” — Medpage Today
You get the drift. As Igor Chudov notes in this helpful post about the study, CNN reporter Ana Cabrera has spread the following provable misinformation on Twitter: “NEW: Ivermectin doesn’t prevent severe disease from Covid-19 any more effectively than a placebo, according to a new study published in JAMA Internal Medicine.”
The study’s actual findings, however, starkly contrast with the media’s reporting.
First, it was not a placebo-controlled study. Evidently, Cabrera didn’t bother to read even the abstract before rushing to cite the study as though proof of ivermectin’s ineffectiveness. The control group in the study received “standard of care” treatment, whereas the experimental group received standard of care plus ivermectin. (And, no, the standard of care was not a placebo.)
Second, the study’s findings in fact show that patients treated with ivermectin did have a relatively lower risk of severe disease and death, but the risk reduction didn’t reach statistical significance.
As news consumers, it’s important that we be able to understand study findings, including so we don’t become duped by misinformation about “the science” that incessantly emanates from the mainstream media. So, let’s take a deeper look.
Some Background Knowledge to Understand the Study’s Findings
In the newly published study, the hypothesis being tested is that ivermectin is effective for preventing severe COVID-19 as defined by the measured outcomes. The “null” hypothesis is that ivermectin has zero effect, that it makes no difference. There is also the hypothetical possibility that ivermectin increases the risk of severe disease.
To be able to see how the media are misreporting the study’s findings, a basic understanding is required of terms like “statistical significance”, “confidence intervals”, and “p-values”.
As explained by data scientist Raymond Willey, a p-value represents “the probability of finding observed evidence when the null hypothesis is true. The lower the p-value, the greater the significance of the evidence. The significance (or confidence level) is calculated as one minus the p-value.” If you multiply that confidence level by 100, you can express the confidence level as a percentage.
The p-values, which represent the outcome of hypothesis tests, are related to the “confidence intervals”, which indicate the precision of the measurement. Narrower confidence intervals indicate a more precise estimate, whereas wide confidence intervals indicate imprecise measurement and hence a lower level of confidence in the findings.
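To make that relationship concrete, here is a rough Python sketch of how a reported relative risk and its 95% confidence interval translate into an approximate p-value, assuming a normal approximation on the log scale. (The function and the approximation are illustrative; this is not the exact test the study's authors used.)

```python
import math

def approx_p_from_ci(rr, lo, hi):
    """Back out an approximate two-sided p-value from a relative risk
    and its 95% CI, assuming normality on the log scale."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width -> standard error
    z = abs(math.log(rr)) / se                       # test statistic vs. null RR = 1
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Primary outcome reported in the study: RR 1.25, 95% CI 0.87-1.80
p = approx_p_from_ci(1.25, 0.87, 1.80)
print(round(p, 2))  # roughly 0.23, close to the reported P = .25
```

The small gap between this approximation and the reported P = .25 simply reflects that the authors used an exact test on the raw counts rather than this normal approximation.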
Typically, 95% confidence intervals (CI) and a p-value threshold of .05 are used to determine whether a finding reaches “statistical significance”. However, as noted in this paper discussing proper interpretation of confidence intervals, “Although 95% CI are commonly used in many applications, the choice of whether to use a 90% or 95% CI is somewhat arbitrary, and depends on the level of ‘confidence’ that the investigator wishes to convey in his or her estimate.”
As noted in this paper on the problem of frequent misinterpretation of data, “Misinterpretation and abuse of statistical tests has been decried for decades, yet remains so rampant that some scientific journals discourage use of ‘statistical significance’ (classifying results as ‘significant’ or not based on a P value).”
Furthermore, “in most scientific settings, the arbitrary classification of results into ‘significant’ and ‘non-significant’ is unnecessary for and often damaging to valid interpretation of data”, whereas “estimation of the size of effects and the uncertainty surrounding our estimates will be far more important for scientific inference and sound judgment than any such classification.”
Keep that information in mind as you examine the study’s findings for yourself. Most importantly, understand that just because this ivermectin study failed to find a statistically significant association between ivermectin and lower risk of severe disease does not mean that such an association does not exist.
It could simply be that the study was underpowered, meaning that the researchers did not have enough subjects in the study for the observed relative risk reduction to have reached the level of confidence that the authors predetermined for achieving statistical significance.
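To illustrate what “underpowered” means in practice, here is a back-of-the-envelope sample-size sketch using the standard normal-approximation formula for comparing two proportions. The event rates plugged in are the 28-day death rates the study reported (1.2% with ivermectin vs. 4.0% with standard of care alone); the function itself is illustrative, not part of the study.

```python
import math

def n_per_group(p1, p2):
    """Approximate sample size per arm to detect a difference between two
    proportions, at two-sided alpha = .05 with 80% power (normal approximation)."""
    z_alpha = 1.96   # two-sided alpha = .05
    z_beta = 0.8416  # power = .80
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# Death rates reported in the study: 1.2% (ivermectin) vs 4.0% (control)
n = n_per_group(0.012, 0.040)
print(n)  # roughly 500 patients per arm would be needed
```

The study enrolled only about 245 patients per arm, roughly half of what this crude calculation suggests would be needed to reliably detect a mortality difference of that size.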
The Ivermectin Study’s Findings
To understand the findings I’m about to show you, you’ll also need to know a few acronyms. “RR” stands for “relative risk”. “CI”, again, stands for “confidence interval”. In this case, a confidence interval that includes the value of 1 indicates statistical non-significance. If the entire interval fell below 1, it would indicate a “significant” relative risk reduction, whereas if the entire interval fell above 1, it would indicate a significantly increased risk.
Curiously, for their primary analysis, the authors defined “severe disease” as having required supplemental oxygen. According to this analysis, more patients in the ivermectin group progressed to “severe disease”. Here are the reported findings from the abstract:
Among 490 patients included in the primary analysis (mean [SD] age, 62.5 [8.7] years; 267 women [54.5%]), 52 of 241 patients (21.6%) in the ivermectin group and 43 of 249 patients (17.3%) in the control group progressed to severe disease (relative risk [RR], 1.25; 95% CI, 0.87-1.80; P = .25).
So, as you can see, 21.6% of patients in the ivermectin group required oxygen versus 17.3% in the control group. The relative risk was 1.25. A relative risk of 1 would indicate no difference in risk, whereas this reported value indicates that those treated with ivermectin had a 25% greater risk of requiring oxygen.
However, you can also see that that result was not statistically significant, with confidence intervals containing a value of 1 and a p-value of .25. Essentially, the confidence intervals mean that we can be 95% confident that the true relative risk falls somewhere between 0.87 and 1.80. If you subtract the p-value of .25 from 1 and multiply by 100, you can understand the confidence level in terms of a percentage: we can have a confidence level of 75% that ivermectin treatment was associated with a 25% greater risk of requiring oxygen.
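You can verify the abstract’s numbers yourself from the raw counts. This Python sketch recomputes the relative risk and an approximate 95% confidence interval on the log scale (a standard textbook approximation; the authors’ exact method may differ slightly):

```python
import math

# Counts from the study's primary analysis
a, n1 = 52, 241   # ivermectin: progressed to "severe disease" / total
b, n2 = 43, 249   # control:    progressed to "severe disease" / total

rr = (a / n1) / (b / n2)                 # relative risk
se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se)  # 95% CI lower bound
hi = math.exp(math.log(rr) + 1.96 * se)  # 95% CI upper bound

print(round(rr, 2), round(lo, 2), round(hi, 2))  # prints: 1.25 0.87 1.8
```

The result matches the abstract: RR 1.25 with a 95% CI of 0.87 to 1.80, an interval that straddles 1.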
Notice that this is a fairly imprecise estimate and means that, while weighing more heavily in favor of the hypothesis that ivermectin increases the risk, this finding remains consistent with the hypotheses that ivermectin has zero effect or reduces the risk.
We therefore cannot conclude, as the media are doing, that this finding shows that ivermectin does not work (or that it does more harm than good).
Importantly, the authors also did a secondary analysis that was arguably far more meaningful: they defined “severe disease” as requiring mechanical ventilation, being admitted to the intensive care unit (ICU), or dying.
For this analysis, the authors report, “For all prespecified secondary outcomes, there were no significant differences between groups.” However, look at the non-significant differences in risk that they found:
Mechanical ventilation occurred in 4 (1.7%) vs 10 (4.0%) (RR, 0.41; 95% CI, 0.13-1.30; P = .17), intensive care unit admission in 6 (2.4%) vs 8 (3.2%) (RR, 0.78; 95% CI, 0.27-2.20; P = .79), and 28-day in-hospital death in 3 (1.2%) vs 10 (4.0%) (RR, 0.31; 95% CI, 0.09-1.11; P = .09).
The relative risk for mechanical ventilation was 0.41, corresponding to a 59% lower risk of mechanical ventilation for patients in the ivermectin group. Likewise, use of ivermectin was associated with a 22% lower risk of ICU admission and a 69% lower risk of death.
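The same arithmetic applied to the secondary outcomes confirms these figures (again a sketch from the raw counts; a small discrepancy for ICU admission, 0.77 here versus the reported 0.78, presumably reflects the exact analysis the authors used):

```python
n1, n2 = 241, 249  # ivermectin and control group sizes

# Secondary outcomes: (events in ivermectin group, events in control group)
outcomes = {
    "mechanical ventilation": (4, 10),
    "ICU admission": (6, 8),
    "28-day in-hospital death": (3, 10),
}

for name, (a, b) in outcomes.items():
    rr = (a / n1) / (b / n2)        # relative risk
    reduction = (1 - rr) * 100      # relative risk reduction, in percent
    print(f"{name}: RR {rr:.2f} ({reduction:.0f}% lower risk with ivermectin)")
```

Every one of these point estimates favors ivermectin; what the study lacked was the statistical power to declare them significant.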
Here was the conclusion drawn by the study authors:
In this randomized clinical trial of high-risk patients with mild to moderate COVID-19, ivermectin treatment during early illness did not prevent progression to severe disease. The study findings do not support the use of ivermectin for patients with COVID-19.
However, that conclusion is not actually supported by their findings. By claiming that ivermectin “did not prevent progression to severe disease”, they are claiming to have falsified the hypothesis that ivermectin reduces the risk of severe disease, but they did not.
The authors are misrepresenting their own findings.
(Researchers drawing conclusions that are not actually supported by their own data is not uncommon in the scientific literature. Especially when there is a political agenda possibly being served, which is nearly always, it’s important to be skeptical of what you hear in the media about what “studies show”, including because the media will simply parrot conclusions without any kind of critical analysis of the actual findings.)
While the media are likewise reporting that this study falsifies the hypothesis that ivermectin is effective for reducing the risk of severe disease among COVID-19 patients, in fact, the data are consistent with the hypothesis that ivermectin works.
Reasonably Interpreting the Study’s Findings
Again, just because this study failed to find a statistically significant association between ivermectin use and a reduced risk of severe disease does not mean no such association exists. It could simply be that the study was statistically underpowered to detect the risk reduction.
After all, more than twice as many people not treated with ivermectin wound up on a mechanical ventilator: 4.0% in the control group versus only 1.7% in the ivermectin group, which corresponds to the 59% lower relative risk for this outcome among patients treated with ivermectin.
More importantly, look at the outcome of death. More than three times as many people not treated with ivermectin died in the hospital within 28 days: 4.0% in the control group versus only 1.2% in the ivermectin group, which corresponds to the 69% lower relative risk of dying with standard of care plus ivermectin compared with “standard of care” alone.
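Those “more than twice” and “more than three times” figures are easy to check from the reported event rates (a quick sketch):

```python
# Event rates from the study's secondary outcomes
vent_ivm, vent_ctrl = 4/241, 10/249    # mechanical ventilation
death_ivm, death_ctrl = 3/241, 10/249  # 28-day in-hospital death

# How many times higher the control group's risk was than the ivermectin group's
print(round(vent_ctrl / vent_ivm, 2))    # about 2.4x for ventilation
print(round(death_ctrl / death_ivm, 2))  # about 3.2x for death
```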
For this outcome, the p-value was .09 and so was considered non-significant based on the authors’ predetermined choice of .05 as the p-value cutoff for achieving statistical significance.
In essence, what the media headlines are claiming is that since we can only have a 91% rather than a 95% level of confidence that the 69% lower risk of dying observed among patients treated with ivermectin indicates a true association, therefore ivermectin doesn’t work.
That, of course, is a non sequitur fallacy. The conclusion does not follow from the premise. Again, the conclusion drawn by the authors and propagated by the media cannot be drawn from the study’s actual findings.
Notice also that the opposite findings from their primary and secondary analyses—an increased risk versus a decreased risk for “severe disease”, respectively—are not necessarily contradictory. There is a simple way to reconcile these varying results.
It might just be that oxygenation was skipped for those with more severe disease, who instead were put straight onto ventilators, admitted to ICU, and were otherwise more likely to die; whereas oxygenation may have been sufficient treatment for those with lower risk of progressing to severe disease as defined in the secondary analysis.
In other words, the increased odds of patients in the ivermectin group being treated with oxygen could itself simply be evidence that ivermectin was effective for preventing these patients from progressing to even more severe disease.
That would be consistent with the findings of other studies that ivermectin is effective for preventing severe disease and death, such as this large observational study from Brazil. The data overwhelmingly favor use of ivermectin according to a real-time meta-analysis of ivermectin studies accessible at https://c19ivermectin.com/. Here are images from that site summarizing the current data (as of February 19, 2022):

[Summary charts from c19ivermectin.com, as of February 19, 2022]
The appropriate conclusion from the new study in JAMA Internal Medicine would be to say that the findings indicate that ivermectin might be effective for preventing severe disease and death, but a larger randomized trial with greater statistical power would be required to confirm this.
The claim by the authors and the major news media that this study shows that ivermectin doesn’t work is false. Simply stated, CNN et al. are spreading anti-ivermectin misinformation.
A logical corollary is that, to avoid being duped by propaganda, we must all be savvy enough news consumers to understand how to read study abstracts for ourselves to glean whether what we’re being told “science says” is actually what the science says.