I have been thinking about how to make clinical development more efficient for a while now, at least since September last year, when I helped co-organize the Clinical Trial Abundance Initiative. But the process since then has been torturous. One might ask: “Why? Why is it so torturous to think about how clinical development can be made more efficient?”
The simple answer is that everything surrounding this topic lives in the world of whispers. You talk to a practitioner about which regulations are too burdensome and they tell you x and y. You then ask: “OK, could you provide more examples of this? Ideally, would you be willing to give an official quote, so I do not sound like a lunatic who proposes ideas that can kill people? You know, if a professor in this field says the risk-benefit profile of regulation X is suboptimal, people are more willing to trust them than they would a random graduate student.” And then… silence.
Nobody actually wants to do that: to put their name behind an idea that might have (gasp!) trade-offs. Doing so would mean getting their name associated with said idea and risking having their research slowed down even further by whatever administrative office is already trying to squash it under overly burdensome compliance requirements. Their colleagues might look down on them for arguing against patient safety: they would be committing a breach of medical ethics, which must always be concerned with potential downside and never with potential upside.
You then suggest that maybe they can provide you with a useful alternative, in the form of an official source where you can read more: “Has *anyone* at all argued about this specific regulatory process, written about this, quantified its risk-benefit profile?” And it usually turns out that no, nobody has. The next step is asking for direct proof, in the form of receipts. This is also very hard to get: those operating in the world of medicine are, again, very reluctant to share anything that can be traced back to them. After endless reassurances that no, you are not going to reveal their identity, you will, if you are lucky, get some concrete evidence about why regulation X is bad and how. Maybe a chain of emails about a particularly annoying IRB. Again, I do not blame these friends. In fact, I am very grateful to the few who are willing to share even this much, as they are very rare.
To give a concrete example, a friend of mine who is an academic immuno-oncologist wrote to me that while some of the proposed policies are great, to truly make clinical development faster and more efficient we need more radical ideas. This is something I had felt too, so naturally I asked him what those more radical ideas were.
He immediately pointed out that in his field of cell therapy, small, bespoke Phase I trials are encumbered by onerous requirements that every reagent meet full GMP standards. While such standards make sense for larger, industrial-scale trials, they are not necessary for the kind of academic trials he runs: the final product has to be carefully tested in a bespoke fashion anyway, something that is not feasible when manufacturing at large scale.
These requirements apparently reduce the number of cell therapies that can be tested in small groups of patients by a factor of around ten, while yielding minimal, if not zero, gains in patient safety. We could be testing ten times as many “cures that work in mice” as we currently are. That is a big deal. And we could be doing it in people who are suffering from terminal cancer and who would gladly accept a therapy that carries 0.0001% more risk versus nothing. But we don’t.
And since Phase I trials are the funnel to everything else downstream, this means we are severely limiting our capacity to test potentially transformative therapies at a critical step. The point of Phase I is to see, at least in principle, whether an idea works in humans and then, if it does, to scale it up with full GMP in larger trials. But if we only test a tenth of these ideas, we are simply giving ourselves fewer chances.
This is infuriating to me!
I will quote my friend exactly:
“As an example, testing a new CAR T cell therapy approach in cancer patients requires $2-3 million between procurement of GMP grade DNA ($100,000+ for GMP, $5,000 for research grade with extra testing) and viral vector ($1,000,000+ for GMP, <$50,000 for research grade), and required engineering runs, meaning multiple millions of dollars are needed to test a new idea in 5 patients. The initial proof of principle that transferring T cells into patients could treat disease (in this case a viral infection in immune compromised patients) was done on a standard NIH grant due to less stringent regulation at the time of cell therapies.
With higher regulatory barriers, this proof of principle would never have happened. The expense of early phase clinical trials means that in the absence of venture capital support, most academic oncologists working in cellular therapy are effectively forced to work in less relevant mouse models, as those are the only experiments that are feasible with realistic funding from NIH, DOD, or foundation sources. While foundation and NIH sources of funding exist for trials, they are not nearly on the scale that is necessary. As a scientist developing cell therapies and a doctor who treats patients with these therapies, I believe that 2nd and 3rd generation T cell therapies will transform cancer treatment. There are 100s of ideas for how to make these therapies work better from mouse and in vitro models, but these models are poorly predictive of efficacy in patients, and the path forward is to facilitate the rapid testing of multiple approaches in patients, to get more shots on goal. The patients themselves very much want this also.”
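To make the scale of that cost gap concrete, here is a back-of-the-envelope comparison using only the reagent figures my friend quotes (the totals are illustrative; a real trial budget also includes engineering runs and much else):

```python
# Back-of-the-envelope reagent cost comparison for a small CAR-T Phase I
# trial, using the figures quoted above. Illustrative only: real budgets
# include engineering runs and other costs beyond these two line items.
gmp = {
    "dna": 100_000,             # GMP-grade DNA, quoted as $100,000+
    "viral_vector": 1_000_000,  # GMP-grade viral vector, quoted as $1,000,000+
}
research = {
    "dna": 5_000,               # research grade, with extra bespoke testing
    "viral_vector": 50_000,     # research grade, quoted as <$50,000
}

gmp_total = sum(gmp.values())
research_total = sum(research.values())
print(f"GMP reagents:      ${gmp_total:,}")       # $1,100,000
print(f"Research reagents: ${research_total:,}")  # $55,000
print(f"Cost ratio:        {gmp_total / research_total:.0f}x")  # 20x
```

Even on these two line items alone, the GMP requirement multiplies reagent costs roughly twentyfold, which is how a five-patient proof-of-principle trial ends up needing millions of dollars.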
I am very grateful to my friend, as this is a lot of evidence. But it is still not enough to write a policy memo, or to convince anyone who does not already agree that this should be done.
He also explained that while the FDA allows for some flexibility regarding GMP requirements for Phase I trials, in practice someone (?) at his university ends up imposing much stricter rules because… they can. I posted about this on LinkedIn and was connected to an ex-FDA person, who confirmed that yes, FDA regulations are actually more flexible than what my friend is forced to implement (albeit still not flexible enough, according to my friend). When I asked why the rules are implemented in the most maximalist, safetyist way possible by local compliance officers at universities, the ex-FDA person shrugged. I then talked to a manufacturing consultant, who told me that some manufacturing consultants (but not her) impose overly strict rules that go beyond what the FDA itself requires. “Can you suggest what the FDA could do to change this? Can you tell me how you would change this?”, I then asked. And this is the point at which I got ghosted.
In a desperate move, I said: “F*** this, we just need to release some full regulatory filings”. Since regulatory filings are not currently open to the public, this could at least create a common, shared language, and we could begin to quantify how these trade-offs and costs stack up against each other, how FDA requirements are implemented in practice, and so on, instead of discussing everything behind closed doors. This is how the “Biotech’s lost archive” piece, about releasing full CTDs, was born. But there is a broader point to be made here.
Small-c cancel culture slows down innovation
“This quest may be attempted by the weak with as much hope as the strong. Yet such is oft the course of deeds that move the wheels of the world: small hands do them because they must, while the eyes of the great are elsewhere.”
(J.R.R. Tolkien, *The Lord of the Rings*)
Tolkien’s choice of hobbits as the creatures who end up destroying the Ring wasn’t accidental. He wanted the agents of history to be the least obvious ones. They are chosen because they are willing to do a quiet, unglamorous job for the right reasons, even though their place is at home, chilling out with their families, not in Mordor. Sam and Frodo are not supposed to be Heroes; Aragorn is. But it is the hobbits who ultimately save the world.
Over the last year I have come to appreciate “hobbitian courage”. Much has been made in the last couple of years of cancel culture, a real phenomenon whereby people lose their jobs for expressing an unpopular opinion about some widely accepted orthodoxy. When people talk about cancel culture, however, they usually mean people taking unpopular stances in “big”, visible debates. I do not wish to diminish the courage these people must exhibit in order to do what they do. However, the returns are also outsized: many of those who criticized some crazy idea in recent years ended up as popular media figures.
But a more prevalent force, and the one I have been banging my head against while trying to figure out how to propose ideas for speeding up drug development, is a more mundane form of cancel culture: let’s call it small-c cancel culture. It’s not the public, spectacular shaming we see on social media. Rather, it’s the steady, private pressure never to risk disapproval, never to put your name to an idea that might be messy or controversial. The quieter, hobbitian form of courage that clinical development reform (or any other hard systems problem) requires is humble: a researcher agreeing to let you cite them, an administrator willing to deviate from an inherited checklist, a policymaker ready to question a default. Small-c cancel culture starves exactly those acts, not through a single blow but through a thousand unspoken signals that say don’t stick your neck out.
I floated a similar idea in an essay from last year titled “Intellectual courage as the scarcest resource”. There, I distinguished between two types of courage: low-potential-upside courage versus high-potential-upside courage. Telling a random biofoid who might leak your name why full GMP is bad, and giving her receipts for it, is low-upside, hobbitian courage. But it’s much needed.
“It seems to me that there are two types of courage: high potential upside courage and low potential upside courage. Going to war or taking part in any form of physical confrontation, at least for the nobility, was a high potential upside endeavour: one would be showered in glory if they succeeded. Yes, there was a lot of risk involved, but also, much to gain. And what was to be gained was tangible and widely reinforced by society: status, something we all crave. It was not some subjective thing dependent on deep conviction. I think we still have that type of courage in our society. It’s rare, indeed, but not impossible to find. And while it does not manifest itself in people going to war, it shows up in those who take risks to pursue ideas with low probability of success but huge upside in terms of status. If you follow me, you probably recognize that I am talking about the world of start-up founders. Quitting your cushy job to pursue the life of a founder requires a lot of courage, with a lot of comfort to lose, but also with the promise of very tangible rewards for success. Our start-up founders are the warriors and kings of old.
What we do lack though is courage that has low potential upside: that is, not much to be gained from it. And I am increasingly convinced this type of courage has always been even more rare than the high potential upside type. It requires one to not only have high risk tolerance, but also ignore one’s self-interest — two traits that are already rare to begin with, and even rarer together. It requires one to be a sort of Joan of Arc of intellectual life.
Being intellectually honest about a topic your academic (or journalistic, or any other type of intellectual) colleagues disagree with falls into the low potential upside courage bracket. There is stuff to be lost but relatively little to be gained. What’s worse, you will most likely not even be awarded the dignity of being openly cancelled: most likely, your career will become a bit shittier with each open disagreement you have, a dreary slog you cannot even wear as a badge of honour. You’ll become that which most ambitious people fear the most: a no-name. And you won’t even be able to tell where this comes from, to point to a culprit. If you are even a tiny bit ambitious the calculus is clear: shut up and agree. You need to be a bit mad to do it.”
I worked in acute mental health settings for forty years, and one of the major issues was measuring patient compliance when patients were out on a conditional discharge or outpatient commitment. Weekly bloodwork is expensive, and even more so for medications that not every lab tests for: blood has to be sent out to specialised labs. Someone suggested that simply putting a harmless dye in the medications could be monitored easily and inexpensively. The researchers from the Geisel School of Medicine at Dartmouth shook their heads: each new compounding would have to be tested separately to be declared safe enough. Adding the dye made it a whole new ballgame.
I hope they were wrong and there were and are ways around this, but I fear they were all too accurate.
I thought this was cool: https://www.research.chop.edu/cornerstone-blog/in-landmark-study-chop-penn-team-treats-newborn-with-base-editing-therapy
And I thought about how it was only possible specifically because it was an N=1 condition where the patient was going to be totally fucked if they didn’t get the experimental treatment, the kind of situation where a Hail Mary can get through the regulatory process. Were this something that affected more people, paradoxically it wouldn’t be attempted, due to risk.