On how to test more drugs
Some personal takeaways from organizing a workshop focused on policies for making clinical trials cheaper and faster.
There comes a time in every biologist’s life when their tech friends will ask, with well-meaning but perceptibly pitiful curiosity, why medical progress is so frustratingly slow. Such moments were the inspiration behind a post I wrote in July this year titled “Why haven’t biologists cured cancer?”. I laid the blame there on “long feedback loops”: the time from hypothesis to result to iteration is much longer in biology than in software engineering. Thankfully, we are getting better at shortening some of these feedback loops: for example, via large-scale, multiplexed experiments, where many outcomes are measured in a single experiment. But we are not getting better at others, and many argue we are in fact getting worse: clinical trials, the final and essential step in validating any therapeutic, remain long, expensive and bureaucratic.
One thing led to another, and in early September my friend Willy Chertman, a biotech fellow at the Institute for Progress (IFP), invited me to co-organize a policy workshop on Clinical Trial Abundance, which we hosted in October with support from IFP and Renaissance Philanthropy. To this end we invited experts, including Sir Martin Landray, co-lead of the RECOVERY trial, which uncovered the effectiveness of existing drugs (e.g. dexamethasone) in record time and at relatively low cost.
We helped each expert write a memo in their specific area of interest, and these are now published on the IFP website. The accompanying manifesto, highlighting the broad spirit and aims of the endeavour, is a guest post in
’s newsletter. To accompany this, we have written up our personal takeaways and open questions about the underlying issues that prevent progress in this area.
More trials, not laxer standards
We think that in discussions around regulatory overreach, people often overindex on the FDA’s refusal to approve potentially beneficial drugs. While this might be true in some cases, we can also find examples where the opposite can be credibly argued1.
We think the main problem lies upstream of this, in the difficulty and cost of carrying out such trials in the first place. At the end of the day, we want to discover the best available therapies, and our chances of doing so improve with the number of shots on goal we take. Focusing on how to accomplish this is more important than approving more marginally effective drugs. There is also another upside to cheaper trials: answering “practical questions about drugs and devices that are already in use”, as Milos Miljković writes here.
Culture matters
One way of going about fixing inefficiencies related to clinical trials is proposing policies one by one. But in many ways doing *just this* feels insufficient. Throughout our explorations and talking to various people working in the industry, a broad, unifying theme emerged, which pervaded specific examples of misguided policies.
The issue was one of culture. Any policy proposal lands on a diffuse, decentralized, but ultimately powerful cultural prior, and its effectiveness will be bounded by the limits of that prior. The culture around clinical trials, and biotech more broadly, tends to be safetyist and paternalistic. In general, there is much more concern about the downsides of action than about possible upsides. Not to mention the downsides of inaction, which rarely seem to be considered at all. If the tech ethos of “move fast and break things” stands at one end of the spectrum, the biotech ethos sits at the other, with very little concern for the cost of slowness. Of course, some of this is justified: when dealing with human life, we should be slower and more cautious. But we think we are well past the point where this mentality has become counterproductive.
One concrete example of the impact of cultural attitudes pertains to participant payment. Clinical trial participation requires substantial time and effort from patients, yet participants are often undercompensated because IRBs (institutional review boards) worry about exerting “undue influence”. This worry, in turn, derives from the Common Rule, which states:
An investigator shall seek informed consent only under circumstances that provide the prospective subject or the legally authorized representative sufficient opportunity to discuss and consider whether or not to participate and that minimize the possibility of coercion or undue influence [emphasis added].
In theory, IRBs must work within the limits set by the Common Rule, but when the rule is vague, they often err on the side of overcompliance. Respected, mainstream bioethicists have argued against this overly cautious interpretation of “undue influence”. Yet IRBs continue to interpret the guidance in a maximalist way, often denying increases in payment. While such concerns might be warranted in exceptional circumstances, drugs already have to pass a safety review before being tested in humans, which makes it hard to see what the concrete downsides of increased compensation could be.
This approach is both patronizing and counterproductive. Patient recruitment and retention are two of the most significant bottlenecks in clinical trials, and adequate compensation could improve both retention and completion. During the workshop, patient advocate Allison Foss highlighted how insufficient payment particularly affects those with chronic conditions like myasthenia gravis, who face additional hurdles in transportation and logistics. Concerns about undue influence should be balanced against the actual well-being and financial security of participants. Rather than letting abstract ethical concerns dictate policy, we should recognize that fair compensation enables broader participation and ultimately serves both scientific progress and patient interests.
What’s even more absurd is that taking part in a clinical trial can be actively detrimental for participants: compensation receives no tax exemption, and participants can be removed from disability benefits, even though participation is a socially beneficial act. To address these issues we proposed policy changes, including modifications to the tax code. However, it is hard to imagine how any policy change alone could stop every overzealous IRB in the country.
Another failure mode we have observed is that a generally valid perception of regulatory harshness leads players to overshoot, even in cases where the rules are actually favourable to disruption. The result is an overall tendency to stick to the status quo.
One such example is the reluctance of contract research organizations (CROs) to implement cost-cutting measures like risk-based monitoring (RBM). Traditional clinical trial monitoring relies heavily on in-person site visits and comprehensive data verification, which are estimated to consume around 30% of trial budgets. RBM offers a smarter alternative, focusing resources on the high-risk areas that most affect trial quality and patient safety. Studies show RBM can cut monitoring costs by more than half while maintaining trial integrity.
Yet despite FDA endorsement in 2013 and proven cost benefits, RBM adoption has been tepid. By 2019, while 53% of trials incorporated some RBM elements, few embraced its core components – only 10% implemented centralized monitoring and 15% reduced their data verification requirements.
During the COVID-19 pandemic, CROs swiftly pivoted to remote monitoring practices when on-site visits became impossible. While remote monitoring is not the same as centralized monitoring (a component of RBM), this adaptability suggests that broader RBM adoption is feasible given the right incentives. Still, the shift is far from complete: only 43% of studies starting in 2021 implemented centralized monitoring, and 27% implemented both centralized monitoring and reduced source data verification and review (SDV/SDR). In surveys of the CRO industry trying to get to the bottom of this issue, fear of regulatory risk is one of the most frequently cited reasons for not implementing these measures, something confirmed to us in 1:1 conversations with various actors in the space.
And it’s not just CROs that refuse to innovate. Sponsors (e.g. big pharma) are reluctant to take on any risk (e.g. the FDA rejecting their trial design), given the high cost of potential failure, and prefer the well-trodden path regardless of the extra cost. I talked to Meri Beckwith, who runs Lindus Health, a start-up focused on running faster and more efficient clinical trials:
We see a surprising reluctance from pharma to accept more innovative trial designs and methods (like centralized remote monitoring, risk based monitoring and decentralized/hybrid trial designs), even when these are specifically encouraged in FDA guidance, and demonstrably lead to lower costs and higher quality data. This stems from the massive amounts of inertia in these huge organisations, limited competitive pressure to change.
This echoes what we have heard from others who have wished to stay anonymous and a remark made in the now famous post which explains the incentives pervading the industry:
Because the cost of failure is so high, well-capitalized companies will always favor established methods, even if they are slow and inefficient, as long as they present a reasonable chance of getting the job done eventually
To overcome such reluctance, we did propose policy changes: more specifically, that the FDA should not only signal acceptance of RBM but actively incentivize and encourage its adoption. However, there are probably many other areas in which various actors (CROs, pharma companies) err too far on the side of caution when such caution may not even be necessary. It’s also worth noting that the FDA guidance encouraging these methods was published 11 years ago, in 2013; if changes are this slow to be adopted even when the FDA endorses them, that points to a bigger issue.
Opacity
The world of clinical trials is frustratingly opaque. In conversation, even academics who specialize in understanding clinical trial costs bemoan the lack of transparency: there are few examples of what an average CRO-sponsor contract looks like, few metrics on how much different components of a trial contribute to costs and delays, and so on.
We believe that more information in these areas is an unalloyed good. There are other interesting questions where it is still unclear to us what the right balance should be. One such question is to what extent the FDA should offer more guidance early in the trial design process and be clearer about which designs it would not accept.
In this regard, someone who oversaw the organization of clinical trials for a big pharma company brought up the example of Lykos Therapeutics. Despite achieving its primary endpoint, Lykos – a pioneer in psychedelic therapy for mental health – saw the FDA reject its lead compound due to trial design flaws. The aftermath was brutal: 75% of staff were laid off, and millions were spent on a Phase III trial that might have been salvaged with earlier feedback. Since then, Lykos has hired pharma industry veteran and former Johnson & Johnson executive Dr David Hough as a senior medical advisor, tasked with overseeing the clinical development programme and FDA engagement for the resubmission of its lead compound, midomafetamine.
Overall, this could have a negative impact on the industry as a whole, by favouring incumbents. In this regard opacity and the culture of safetyism might go hand in hand, as sponsors will always err on the side of caution and avoid the risk of implementing innovative approaches.
It also makes the disruption of the CRO industry harder, because they will end up competing not so much on efficiency, but on experience with niche situations. This is again nicely explained in the post I mentioned before and might offer some clues into why tech-based solutions to making trials cheaper have made disappointingly little impact so far:
The massive amount of undocumented specialist knowledge that you need to efficiently run a clinical development program strongly favors incumbents, preventing new market entrants from easily competing on the basis of cost or competence – e.g., it doesn’t matter if your firm has 170 IQ engineers if they simply don’t know all the One Weird Tricks about how to get around the FDA’s catch-22s.
However, a counterargument to this is that trials are always going to have unexpected situations. It is virtually impossible for anyone to predict all the ways in which a trial could go wrong or foresee all the possible issues with design. As such, it is good that the FDA has the ability to reject designs at any point in the process and pre-specifying all the conditions for failure would be next to impossible. There is probably a tight balance to be walked here and something we will think more about.
In any case, increasing regulators’ capacity and ability to respond to requests seems like an unambiguously positive thing. The problem of short-staffing is highlighted by Matt Gline, CEO of Roivant Sciences, in a podcast where he describes how:
Before COVID19, it used to be that you request a meeting with the FDA, you got a meeting with the FDA. Now, in many cases you request a meeting with the FDA and you get a written response which is just a different process. And this is because they are short-staffed.
He also highlights how during COVID19, the opposite was true: due to a sense of urgency, requests were dealt with very swiftly, suggesting that change is possible with more capacity. Our stance is that each drug is an urgent matter!
On vibe shifts
Saying that there is a cultural problem sounds handwave-y and unsatisfying. How is that even actionable? I would have maybe agreed two years ago, but having lived through a massive vibe shift that seems to have happened overnight, I do now believe vibe shifts are possible.
At the moment there is a general fuzzy consensus that medical innovation is good. Nobody would say, if asked directly, “No, I don’t want more cancer drugs.” But messaging around the actual steps needed to accelerate it has long been contrary to progress, such that whenever there is a trade-off between accelerating innovation and literally anything else (e.g. a perceived risk of undue influence), the other concern wins. In practice, society is pro medical innovation only to the extent that such innovation is carried out by disinterested, non-profit entities, with absolutely zero privacy-related risk and under maximum paternalism. Which in practice translates to not very pro medical innovation at all.
This is related to Peter Kolchinsky’s point about the price of branded drugs. While branded drugs make up just 8% of U.S. healthcare spending, they face disproportionate criticism in the media and political sphere as a symbol of healthcare’s cost problems. Yet this 8% likely represents one of the most efficient uses of healthcare dollars. Unlike hospital stays, procedures, or other medical costs that never become cheaper, drug spending has a unique characteristic: it is ultimately self-liquidating. When patents expire, drugs go generic, creating permanent access to affordable versions that can benefit patients for decades to come:
Consider what that 8% does. Firstly, it pays for the expensive novel branded medicines that are out there today treating cancer and autoimmune disease and depression and countless other conditions. The high prices of today’s branded drugs generate revenues and profits for the biopharmaceutical industry, which signal to investors and innovators that there could be significant reward for them if they bring new treatments and cures in the future. And as today’s branded drugs steadily go generic, typically about 14 years after coming to market, that 8% we spend shifts towards paying for newer drugs, generating profits for their inventors and their investors. The end result is the expansion of our generic armamentarium, which continues to work for all of us. It’s not “out with the old and in with the new.” It’s the new plus the old.
The root problem in this is the same one we have seen when it comes to a lack of urgency related to making clinical trials faster/more efficient: it seems that in practice we are not good enough at conveying the real cost of not developing new drugs, so other concerns always win out. That’s where the vibe shift needs to happen.
And it’s important that the vibe shift happens in this particular way. Many of those calling for more transparency into, e.g., clinical trial costs also share the belief that such cost estimates should be used to advocate for centrally negotiating lower branded drug prices. As explained in another post, using estimates of R&D costs for individual drugs to justify price negotiations is misguided, since pharma companies need to recoup the huge R&D costs of failed programs. In an ideal world, discovering drugs would be easier and cheaper, which would itself drive prices down. But drug development also needs to generate sufficient profit for companies to be incentivized to invest in R&D.
For example, in a recent paper on the topic, economists Ariel Pakes & Kate Ho attempt to estimate the impact of price negotiation policies and conclude:
Our calculations indicate that currently proposed U.S. policies to reduce pharmaceutical prices, though particularly beneficial for low-income and elderly populations, could dramatically reduce firms’ investment in highly welfare-improving R&D. The U.S. subsidizes the worldwide pharmaceutical market. One reason is U.S. prices are higher than elsewhere.
People are eager to help for the social good
To end on a more positive note, we were impressed by the eagerness of people from all across the industry to share their time and experience for little direct personal benefit. This work would not have been possible without those willing to share their experiences and thoughts, both anonymously and publicly. We thank everyone who contributed their time and expertise!
Conclusion
There is a further distinction based on timescales. Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).
(Dario Amodei)
We believe that this is an immensely complicated problem that is not going to be solved easily or by one method alone. In the end, it’s probably going to take policy change, cultural change and technological disruption to overcome. In an ideal world, this would create a virtuous cycle, where the policy changes would enable the disruptors to win by altering the competitive landscape they are faced with.
However complicated this might be, it’s an issue worth tackling. As Dario Amodei explains above, in an era of accelerated technological advancements, barriers to progress, especially in the “world of atoms”, might come from systems designed by humans.