by Bill Snow, AVAC Senior Advisor
The challenge of HIV prevention is far from solved. Current options—condoms, counseling, medical male circumcision and oral PrEP—are all efficacious, but have practical limitations, and approximately 1.7 million new infections still occur every year. It’s clear that even highly protective interventions can fail to meet the needs of many. The answer is options that can meet diverse needs. A number of new approaches that could be longer lasting, safer or offer easier adherence are in advanced development. Any of these novel alternatives ultimately may prove efficacious and preferable for some people in real-world settings, but obtaining results that demonstrate efficacy has become complex given that one of the current options, oral PrEP, is nearly 100 percent effective for individuals when taken properly.
HIV prevention trials offer PrEP to participants as part of a package of care and prevention—known as the standard of prevention—in order to meet an ethical imperative to protect participant health first and foremost. How can trials offer PrEP, which can be nearly 100 percent effective if taken correctly, and still learn whether an unproven approach offers a measurable benefit?
To explore possible answers to this complex question, a large, multi-organizational workshop was held in Seattle in November 2018 to discuss the challenges of designing future HIV prevention trials in the presence of PrEP [1]. Aimed principally at statisticians and trial developers, the meeting focused largely on innovations in trial design to accommodate the need to assess, in a scientifically rigorous and ethically sound fashion, numerous additional or alternative approaches even as the standard of prevention continues to improve. Ten papers from the workshop were published in the journal Statistical Communications in Infectious Diseases in 2019 (see box at end).
The meeting presentations and resulting articles are technically written and largely independent of one another, making it difficult for non-professionals to read and compare them. What’s more, current thinking has advanced somewhat since then. This summary report is intended to give a unified overview of the multiple ideas proposed, so that non-specialists can stay informed and engaged in future trial design and review.
The meeting and publications point to the conclusion that there is no simple or single solution to replace the gold standard of trial design: two parallel arms, randomized and placebo-controlled, each offering the standard of prevention “in the background,” including PrEP for all. This gold standard has become logistically challenging, lengthy and expensive, so each trial, and the choices involved in evaluating each product, must be considered in context, taking into account the characteristics of each experimental approach. Awareness of the full range of available options, with their pros and cons, will give communities, sites and individuals the knowledge they need to provide crucial input into the design and usefulness of future trials.
Clinical research basics
Clinical trials are the only way to approach an accurate assessment of how well a new biomedical intervention will work. Any trial should pose a relevant question in such a way that the result will predict how well the intervention can be expected to improve individual and public health. A clinical trial, at its core, measures an experimental condition against a control condition, using blinding and randomization to remove bias toward one outcome or another and achieve objectivity.
Framing the question precisely is essential and not trivial. Once the question is posed as a hypothesis, a set of critical decisions must be made to prove or disprove that hypothesis predictably. Those decisions include: whom to enroll and exclude; the number of participants and length of time needed to assure a robust answer; and how trial participants will be engaged, informed and protected. These decisions are reflected in a strict protocol, followed closely, to document, contextualize and validate an outcome that will have relevance for future research and use.
New medical interventions must be evaluated in a manner that meets the ethical standards of institutional review boards (IRBs). An ethical trial design ensures timely and reliable insights to improve public health while carefully safeguarding the autonomy, rights and safety of study participants.
In addition to presentations on trial design, the 2018 workshop discussions highlighted other, broader dimensions of the challenge at hand, above and beyond robust statistical design:
- The need for trials that make good use of finite research resources.
- The importance of effective communication and community buy-in.
- The importance of informed choice among prevention options.
- The distinction between the efficacy of a product (best case scenario, e.g. when used as directed) and the effectiveness of the product in real-world circumstances. We know from previous HIV prevention trials that efficacy and effectiveness can be vastly different, e.g. daily PrEP for women.
As just explained, every trial aims to confirm or negate a hypothesis posed to answer that study’s primary question; the hypothesis shapes and informs the trial design and statistical plan.
If one doesn’t ask the right questions, complications will arise during the trial and afterward, with implications for licensure, rollout, and the ability to demonstrate that the product is proven to work in all populations that need it. A case in point is the DISCOVER trial, testing the antiviral Descovy (F/TAF) as a new PrEP agent. DISCOVER was designed to test the relative efficacy of daily oral F/TAF in men and trans women compared to daily oral TDF/FTC, leaving the question of efficacy in cisgender women unresolved and controversial. It’s an object lesson on the necessity of integrated planning for trial design, implementation and rollout, with a singular focus on prioritizing vulnerable populations—cisgender women, transgender individuals, adolescents, pregnant and breastfeeding women and drug users. Because data collection can be challenging in diverse populations, highly innovative—and possibly complex or controversial—trial designs must be explored. The FDA did not approve Descovy for cisgender women and is requiring a follow-up trial before licensure can be extended to them.
Discussed below are the essential elements of trial design: what the primary study question is, what populations to include and exclude, what level of proof can be obtained, and what post-trial contingencies to prepare for.
Experimental and control conditions
The form the new intervention takes, and the background prevention that can then be offered, are major determinants of appropriate designs. In particular, decisions must be made about whether placebos can be used to keep investigators and recipients double-blinded without unacceptable consequences, whether oral PrEP should be offered to all participants, or whether the experimental intervention would substitute for it.
Placebos are inert substances utilized to blind both participants and investigators when randomizing participants to an experimental or control arm of a study. In placebo-controlled designs, at-risk individuals are randomized to either placebo or an experimental intervention, and all participants are offered a background prevention package as ethically mandated. The alternative to blinded placebo controls would be an unblinded (open-label) trial, which sacrifices the benefits of blinding to achieve other benefits.
Instead of placebo, sometimes experimental HIV prevention products are compared to approved oral PrEP (called the “active comparator” or “active control” in trial design). This happens when the experimental product is determined to be an adequate alternative, usually because it is another form of ARV-based prevention. In that case, the experimental option must demonstrate some additional benefit when compared, such as potentially more desirable characteristics. Each approach, placebo vs active comparator, entails making a distinctly different set of comparisons.
With both approaches, the differential between the arms is all that can be determined directly. Statisticians can look for “superiority” (the experimental product is statistically better than the control) or “non-inferiority” (the experimental product is statistically at least as good as the control). Both designs measure efficacy based on the risk people face after they’ve been offered the background package of prevention.
Inclusion and exclusion criteria
Enrollment criteria set the conditions that define the outcome. Too-loose criteria may not allow for an accurate answer, and too-tight criteria will not be generalizable. The high efficacy of oral PrEP and ethical standards would require extraordinarily large trials to get statistically significant comparisons in placebo-controlled trials. Questions about the feasibility and cost of running trials at this scale mean variations on the traditional placebo-controlled trial design are being considered seriously.
One concept is to restrict enrollment to individuals who have a demonstrated “unmet need,” in the sense that they do not want to or cannot use existing effective HIV prevention tools. That approach is scientifically and ethically compelling, as it ensures that the individuals with the most potential to benefit from a new prevention method are enrolled. The ongoing MOSAICO trial of the Ad26 mosaic HIV vaccine is using this approach. This type of design could also include a “run-in period” that allows potential participants to evaluate existing prevention tools before enrollment. For example, participants have the opportunity to try oral PrEP for a set period of time; those who decline it are then randomized into the trial testing an experimental option.
Another variation is to add an additional parallel arm for observational purposes of those who choose not to enter the main trial or who are ineligible due to use of highly effective prevention methods. Those participants would be followed in a similar way to those enrolled in the main trial. Data collected from this “observational cohort” would allow the trial results to be interpreted across a broader population.
Population, size and duration
Clinical trial investigators rely on statisticians to establish the optimal size and duration of the trial. These are key elements of the study design. For example, when trial participants are offered PrEP, the trial design must factor in reduced incidence based on an anticipated rate of effective PrEP use. The lower the HIV incidence in the trial population, the larger and longer in duration the trial must be to answer the scientific question of interest. Past experience in similar populations may serve as a guide if available, but when a trial goes on for too long, background norms and the value of the outcome will change.
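The inverse relationship between background incidence and trial size can be sketched in rough terms. This is a simplified illustration, not any trial's actual statistical plan: it assumes an event-driven design that needs a fixed target number of endpoint events, with all numbers purely hypothetical.

```python
# Hedged sketch: how background HIV incidence drives trial size.
# Assumption (illustrative only): an event-driven trial must observe a
# target number of endpoint events, so the person-years of follow-up
# required scale inversely with incidence.

def person_years_needed(target_events: int, incidence_per_100py: float) -> float:
    """Person-years of follow-up needed to observe target_events endpoints,
    given an incidence expressed per 100 person-years."""
    return target_events / (incidence_per_100py / 100.0)

# Halving incidence doubles the follow-up needed for the same 100 events.
for incidence in (4.0, 2.0, 1.0):  # hypothetical rates per 100 person-years
    py = person_years_needed(100, incidence)
    print(f"incidence {incidence}/100PY -> ~{py:,.0f} person-years for 100 events")
```

The sketch shows why offering effective PrEP to all participants, which lowers incidence, forces trials to grow larger or run longer to answer the same question.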
Comparative trials
Regulators typically consider non-inferiority to another proven and effective product sufficient to grant a license, but establishing non-inferiority leaves several important questions unanswered. Designing a non-inferiority trial involves establishing an agreed-upon margin: differences between the experimental and proven product that fall within this margin show the new product is “just as good,” while an efficacy result that falls outside the margin defines an unacceptably worse outcome. A rigorous process must be used to set the margin so the trial delivers a meaningful result. Comparative differences may also be unacceptably worse in terms of safety, acceptability and eventual cost. Paradoxically, establishing non-inferiority of a new product may mean setting a higher bar for that product than for the existing product to which it’s compared. Demonstrating true superiority, however, requires a statistically significant difference, which may be an unrealistically high bar to achieve.
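The margin logic above can be expressed as a simple decision rule. This is a hedged sketch, not a regulatory standard: it assumes efficacy is compared via an incidence rate ratio (experimental over active control) and that the margin and confidence bounds shown are purely illustrative.

```python
# Hedged sketch of the non-inferiority decision rule described above.
# Assumption (illustrative): efficacy is compared via the rate ratio of
# experimental to active-control incidence; a ratio of 1.0 means the two
# arms performed identically, and the margin (e.g. 1.25) is pre-agreed.

def noninferiority_verdict(rr_upper_ci: float, margin: float) -> str:
    """Classify a result from the upper confidence bound of the rate ratio."""
    if rr_upper_ci <= 1.0:
        return "superior"       # whole CI favors the experimental product
    if rr_upper_ci <= margin:
        return "non-inferior"   # difference falls within the agreed margin
    return "inconclusive or inferior"  # upper bound exceeds the margin

print(noninferiority_verdict(0.90, 1.25))  # superior
print(noninferiority_verdict(1.15, 1.25))  # non-inferior
print(noninferiority_verdict(1.40, 1.25))  # inconclusive or inferior
```

The sketch makes the paradox in the text concrete: where the margin is drawn, a choice made before the trial, determines whether the same numerical result reads as “just as good” or as unacceptably worse.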
Both non-inferiority and superiority trials often fail to capture all relevant effects of an intervention. People need information on qualitative as well as quantitative measures to make informed choices and recommendations. Pharmaceutical companies have an incentive to sell “new and better” treatments, since standard treatments are often generic and much cheaper. But neither “different” nor “as good” is, by itself, useful for making decisions. What is needed in HIV prevention are not only more choices, but more informed choices with real-world public health value. Such choices become even more complex if trial results are unclear because of a confusing or controversial trial design.
In March 2019, the US FDA issued a guidance on developing drug products for HIV pre-exposure prophylaxis. It recommended that future PrEP trials in men could be non-inferiority trials. But PrEP trials in women are another story. Some trials have yielded inconsistent data on the efficacy of PrEP in women, with evidence that adherence was a significant issue. Questions also exist about how PrEP drugs are distributed in vaginal tissue. For these reasons, the FDA guidance recommends that future PrEP trials among women be designed for superiority.
External data or correlations in place of a comparative arm
One approach under investigation involves active-controlled trials relying on data obtained outside the trial to estimate what the level of HIV incidence would have been in the trial population had a placebo arm been included (so-called counterfactual placebo incidence). The external data might be population surveillance data gathered by public health entities, HIV incidence from recent trials in a similar population that did include placebo arms, or possible correlates of HIV exposure in the trial population, such as incidence of STIs that co-circulate with HIV.
If counterfactual placebo incidence could be estimated reliably, an experimental intervention could be compared to the estimate of “background incidence” and compared to an active control. Such approaches are speculative and controversial, to say the least.
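The arithmetic behind a counterfactual comparison is straightforward even though, as noted, the approach itself is speculative. This sketch uses purely hypothetical numbers to show how an externally estimated placebo incidence would be combined with observed trial incidence.

```python
# Hedged sketch: gauging efficacy against an externally estimated
# "counterfactual placebo incidence" in a trial with no placebo arm.
# All inputs are illustrative assumptions, not data from any real study,
# and the point estimate ignores the (large) uncertainty in the external
# incidence estimate that makes this approach controversial.

def estimated_efficacy(observed_incidence: float,
                       counterfactual_incidence: float) -> float:
    """Point estimate of efficacy: 1 - (observed / counterfactual)."""
    return 1.0 - observed_incidence / counterfactual_incidence

# e.g. 0.8 infections per 100 person-years observed in the trial, against
# an external estimate of 4.0 per 100 person-years without intervention
print(f"{estimated_efficacy(0.8, 4.0):.0%}")  # 80%
```

The fragility is in the denominator: if the external estimate of background incidence is off, the efficacy estimate moves with it, which is why such designs remain contentious.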
Trial design should anticipate how to apply the results in the real world
Anyone in medical practice or public health will also want to know a tested product’s relative benefit, usability, risks, and cost to individuals and society. National regulators and WHO are expected to establish guidelines for use, while doctors and users must sort out what people prefer and how side effects, ease of use and individual benefits compare.
The need for statistical and practical significance
To summarize, the goal of clinical research is not simply to provide those pursuing safe and effective interventions a choice, but rather an informed choice. While pursuing greater efficiency and speed, any acceptable design must still ensure that the trial provides sufficiently reliable and convincing insights about efficacy and safety, so health care providers and the public are properly informed and can make meaningful choices.
Trial designers need to maintain the long view and anticipate how the intervention would ultimately be used. That will depend on what other interventions become available during development and after. Understanding long-term implications for the use of a given product should also be incorporated into a proposed product profile, which, at a minimum, will lay out the following: users, conditions of use, safety, practicality, delivery and cost. All too often, an efficacy trial ends with no clarity whether to proceed with the intervention as is, improve it, or set it aside. A clear picture of efficacy, time to availability, cost, ease of use and acceptability inform a range of critical decisions and can make the difference between minimal progress or strong advances in HIV prevention.
As explained here, a wide variety of decisions must fit together to get clear answers to such complex questions, including good communication and involvement of all stakeholders in trial design and conduct. In this era of expanding prevention approaches, new trial design methods will be necessary despite their complexities and uncertainties.

Prevention Trials Glossary
Experimental and Control Arms
- Experimental arm, participants randomized to an experimental agent
- Control arm, participants randomized to a placebo or comparator
- Active control, the comparator is an active agent
- Crossover, participants switch arms at a predetermined point of the trial so every participant experiences the experimental and control conditions
Placebos and Blinding
- Placebo, an inert substance used for blinding purposes
- Single blind, participants are blinded, investigators are not
- Double blind, participants and investigators are blinded
- Open label, participants and investigators know which intervention they receive, usually because of difficulty in blinding, such as medication compared to surgery
- Double-dummy double-blind, use of two placebos when interventions differ, such as injections compared to pills; each group gets an active agent and a placebo
Randomization
- Randomized, participants are assigned to arms by chance
- Non-randomized, introduces the possibility of bias in selection, may be used after licensure or for historical studies
Endpoints
- Superior, the experimental intervention is shown to be statistically superior to the control
- Non-inferior, the experimental intervention is statistically no worse than the control, within predetermined limits; generally accepted for licensure
- Inferior, the experimental intervention is shown to be statistically inferior to the control
Prevention Provided to All
- Background prevention, prevention assistance is offered to all participants in the trial; may not include oral PrEP in the experimental arm if testing new ARV-based prevention that is expected to have equal or better efficacy
- Standard of prevention, the best prevention package available, including oral PrEP and any subsequently approved prevention method
- Prevention exclusion, randomization occurs after candidates have the opportunity to try and decline oral PrEP; they may choose to begin PrEP after the trial begins; benefits participants who need a new option
Statistical Communications in Infectious Diseases (SCID)
Special Issue, 2019.
If you wish to read the original papers, please be in touch at avac@avac.org.
Janes, Donnell & Nason, Designing the Next Generation of HIV Prevention Efficacy Trials: Synopsis of a 2018 Symposium
Donnell, Current and Future Challenges in Trial Design for Pre-Exposure Prophylaxis in HIV Prevention
Gilbert, Ongoing Vaccine and Monoclonal Antibody HIV Prevention Efficacy Trials and Considerations for Sequel Efficacy Trial Designs
Wittes, The Modern Randomized Clinical Trial: Is it Time to Sharpen a Blunt Instrument
Dunn & Glidden, The Connection between the Averted Infections Ratio and the Rate Ratio in Active-control Trials of Pre-exposure Prophylaxis Agents
Murray, Regulatory Perspectives for Streamlining HIV Prevention Trials
Glidden, Advancing Novel PrEP Products—Alternatives to Non-Inferiority
Fleming, deGruttola & Donnell, Designing & Conducting Trials to Reliably Evaluate HIV Prevention Interventions
Islas & Brown, Crossover and Repeated Randomization in Event Driven Trials for HIV Prevention: Addressing the Impact of Heterogeneity in Risk in the Trial Design
Follman, Tomorrow’s HIV Prevention Trials of Vaccines and Antibodies
Based on November 5, 2018 meeting on how to design future HIV prevention efficacy trials and subsequent discussions with organizers. Thanks to Holly Janes of FHCRC for input and assistance.