What do the vaccine efficacy numbers really mean?

This week, Johnson & Johnson began distributing millions of doses of its coronavirus vaccine in the United States after receiving an emergency use authorization from the Food and Drug Administration. Central to getting that green light was a trial Johnson & Johnson conducted to measure the vaccine’s efficacy.

Efficacy is a crucial concept in vaccine testing, but it is also a complicated one. If a vaccine has an efficacy of, say, 95%, that does not mean that 5% of people who receive it will get COVID-19. And just because one vaccine ends up with a higher efficacy estimate than another in trials does not necessarily mean it is superior. Here’s why.

For statisticians, efficacy is a measure of how much a vaccine reduces the risk of an outcome. Johnson & Johnson, for example, tracked how many people who received its vaccine nevertheless contracted COVID-19, and then compared that to how many people contracted COVID-19 after receiving a placebo.


That reduction in risk can be expressed as a percentage. Zero percent means that vaccinated people are just as much at risk as those who received the placebo. One hundred percent means the vaccine eliminated the risk entirely. In the United States portion of its trial, Johnson & Johnson found an efficacy of 72%.
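
To make that arithmetic concrete, here is a minimal sketch in Python of how such an efficacy figure is derived. The case counts below are invented for illustration, not the actual trial data; they are simply chosen so the result lands at 72%.

def efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    # Efficacy = 1 - (attack rate among the vaccinated / attack rate among placebo recipients)
    attack_rate_vaccine = cases_vaccine / n_vaccine
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccine / attack_rate_placebo

# Hypothetical counts: 28 cases among 10,000 vaccinated volunteers
# versus 100 cases among 10,000 placebo recipients.
print(f"{efficacy(28, 10_000, 100, 10_000):.0%}")  # prints 72%

If no vaccinated people got sick, the formula would return 100%; if they got sick at the same rate as the placebo group, it would return 0%.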

Efficacy depends on the details of a trial, such as where it was run. Johnson & Johnson ran its trial in three regions: the United States, Latin America and South Africa. The overall efficacy across all three was lower than in the United States alone. One reason seems to be that the South African portion of the trial took place after a new variant had spread across the country. Called B.1.351, the variant carries mutations that allow it to evade some of the antibodies produced by vaccination. The variant did not make the vaccine useless, however. Far from it: in South Africa, Johnson & Johnson’s efficacy was 64%.

Efficacy can also change depending on which outcome is measured. The Johnson & Johnson vaccine had an efficacy of 85% against severe cases of COVID-19, for example. That distinction matters, because it means the vaccine will prevent many hospitalizations and deaths.

When scientists say that a vaccine has an efficacy of, say, 72%, that figure is known as a point estimate. It is not a precise prediction for the general public, because a trial can only observe a limited number of people – in the case of the Johnson & Johnson trial, about 45,000 volunteers.

The uncertainty surrounding a point estimate can be small or large. Scientists represent this uncertainty by calculating a range of plausible values, which they call a confidence interval. One way to think about a 95% confidence interval is that we can be roughly 95% confident that the true efficacy lies somewhere within it. More formally, if scientists ran the same trial on 100 different random samples and calculated a confidence interval for each one, the true efficacy would fall inside about 95 of those intervals.

Confidence intervals are tight for trials in which many people get sick and there is a big difference between the results in the vaccinated and placebo groups. If few people get sick and the difference between the groups is small, confidence intervals can balloon.
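
As a rough illustration of why case counts matter, the sketch below computes a 95% confidence interval for efficacy using a standard textbook normal approximation on the log relative risk. The numbers are again invented, and trial statisticians use more sophisticated methods, but the pattern holds: with ten times as many cases, the interval around the same 72% point estimate shrinks considerably.

import math

def efficacy_ci(cases_vaccine, n_vaccine, cases_placebo, n_placebo, z=1.96):
    # Relative risk and its standard error on the log scale (Katz method).
    rr = (cases_vaccine / n_vaccine) / (cases_placebo / n_placebo)
    se = math.sqrt(1 / cases_vaccine - 1 / n_vaccine + 1 / cases_placebo - 1 / n_placebo)
    lower_rr = math.exp(math.log(rr) - z * se)
    upper_rr = math.exp(math.log(rr) + z * se)
    # Efficacy = 1 - relative risk, so the bounds swap places.
    return 1 - upper_rr, 1 - lower_rr

# Hypothetical trials, both with a 72% point estimate:
print(efficacy_ci(28, 10_000, 100, 10_000))       # roughly (0.57, 0.82): wide
print(efficacy_ci(280, 100_000, 1_000, 100_000))  # roughly (0.68, 0.75): tight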

Last year, the FDA set a benchmark for coronavirus vaccine trials: each manufacturer would need to demonstrate that its vaccine was at least 50% efficacious, and the lower bound of the confidence interval could not fall below 30%. A vaccine that meets this standard would offer the kind of protection found in flu vaccines – and therefore save many lives.

So far, three vaccines – manufactured by Pfizer and BioNTech, Moderna, and Johnson & Johnson – have been authorized in the United States after their trials showed that they exceeded the FDA’s threshold. AstraZeneca and Novavax, which have trials underway in the US, have published efficacy results from trials in other countries. Meanwhile, the makers of the Sputnik V vaccine have published results based on their trial in Russia.

For a number of reasons, it is not possible to make a precise comparison between these vaccines. One vaccine may have a higher point estimate than another, but their confidence intervals may overlap, which effectively makes their results indistinguishable.

To complicate matters further, the vaccines were tested on different groups of people at different stages of the pandemic. In addition, their efficacy was measured in different ways. Johnson & Johnson’s efficacy was measured 28 days after a single dose, for example, while Moderna’s was measured 14 days after a second dose.

What is clear is that all three vaccines authorized in the United States – made by Johnson & Johnson, Moderna and Pfizer and BioNTech – greatly reduce the risk of contracting COVID-19.

Furthermore, all three vaccines appear to be highly effective against the most serious outcomes, such as hospitalization and death. For example, no one who received the Johnson & Johnson vaccine needed to be hospitalized for COVID-19 from 28 days after the injection onward. Sixteen people who received the placebo did. That translates into 100% efficacy, with a confidence interval of 74.3% to 100%.

A clinical trial is just the beginning of research on any vaccine. Once a vaccine is in widespread use, researchers continue to monitor its performance. Instead of efficacy, these scientists measure effectiveness: how much the vaccine reduces the risk of disease in the real world, in millions of people rather than thousands. Early studies of the coronavirus vaccines’ effectiveness are confirming that they provide strong protection.

In the coming months, researchers will keep an eye on these data to see whether the vaccines become less effective – either because the immunity they produce wanes or because a new variant emerges. In either case, new vaccines will be created, and manufacturers will provide new measures of their efficacy.

This article was originally published in The New York Times.

© 2021 The New York Times Company