IRBs are fundamentally broken and need to be eliminated.
Lots of aspects of medical ethics make normal people squirm -- even triage makes us feel uncomfortable -- but we also don't like letting people come to harm because of our unjustified discomfort. IRBs let us avoid that uncomfortable guilt by excusing our inaction -- it's what the authority said was morally necessary.
But don't we need them to prevent research misconduct and horrible things like the Tuskegee study? No, because there was never any reason to think IRBs would guard against that shit. The worst abuses didn't happen in the dark; they happened because the same class of people who would have sat on an IRB didn't see a problem with them. In 50 years we'll be looking back with the same moral horror at all the suffering we allowed to continue as a result of IRBs.
Sure, it's important to have more than one person look at a study. Have an *informal* group of profs/docs or employees in other departments do that kind of thing, but there is no reason to think the formal IRB process is more morally sound.
The best-case scenario is that the incentives for IRBs are to minimize either the risk of lawsuits (for the for-profit ones) or the risk of bad PR (for hospitals and universities). The worst case is that their incentives are to provide cover for powerful people at those institutions, letting them avoid pressure to face uncomfortable choices. So why would you think they are a superior way to vet human studies?
Interesting article.
I know this comment is a little off-topic, but Randomized Controlled Trials can and should be expanded way beyond the medical drugs field. RCTs can make an even bigger difference in social welfare programs, education, and law enforcement.
I believe that we should radically ramp up the use of RCTs, so we can identify programs that actually work instead of spending trillions of dollars per year only to find out later that the programs either do not work or do not do so in a cost-effective manner.
I go into more detail here:
https://frompovertytoprogress.substack.com/p/the-case-for-randomized-trials-in
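To make the proposal concrete, here's what the core analysis of such a trial looks like: randomize participants into program and control groups, compare mean outcomes, and check the difference against chance with a permutation test. This is a minimal sketch; the program and all outcome numbers below are invented purely for illustration.

```python
import random

def evaluate_rct(treated, control, n_permutations=10_000, seed=0):
    """Difference in mean outcomes (treated - control) plus a two-sided
    permutation p-value for the null hypothesis of no program effect."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    n_t = len(treated)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # re-randomize group labels under the null
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / (len(pooled) - n_t)
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_permutations

# Invented outcomes (say, earnings gains in $1000s) for a hypothetical
# job-training pilot with randomized admission.
treated = [3.1, 2.4, 4.0, 1.8, 3.5, 2.9, 3.8, 2.2]
control = [1.9, 2.1, 1.4, 2.6, 1.7, 2.0, 1.5, 2.3]
effect, p_value = evaluate_rct(treated, control)
```

The same few lines apply whether the "treatment" is a drug, a curriculum, or a policing tactic; the expensive part is the randomized rollout, not the statistics.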
100% agree, but people often strongly oppose the idea. It's the same shit as with trials in healthcare. Apparently it's perfectly moral to pick whatever policy you like on a whim without telling anyone, but God forbid you actually design a system that can collect useful data to improve things without informed consent.
Apparently randomization via a well-designed study is dangerous in ways that randomization by which office you visit, what year it is, or which agent you see isn't.
Yes, there is some resistance, but I think the main reason is that this is not perceived as something government does. I think it is more bureaucratic inertia than any real misgivings.
If more people started talking about the benefits of doing it, I think we could get some traction.
I agree there is low-hanging fruit, but RCTs are harder outside medicine. We actually end up with quite a bit of data in education, and it just turns out that interventions never scale: in medicine the masses get the same drug as the test subjects, but in schools the teachers who have to roll out the national program never look like the test contingent.
And as a note of caution, consider the fact that companies still refuse to use internal prediction markets even in the obvious cases -- for instance, estimating project completion dates. Every big software or hardware company could easily gather much more accurate information on how long it will take to finish things, and they consistently choose not to.
Presumably the reason is that people hate to be corrected especially by lower status/ranking individuals. And RCTs raise many of the same threats -- maybe worse.
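For anyone curious what the mechanics would even look like: internal prediction markets are typically run with an automated market maker such as Hanson's logarithmic market scoring rule (LMSR), which always quotes a price and so works even with few traders. A minimal sketch for a yes/no question -- the question, liquidity parameter, and trade size here are all illustrative:

```python
import math

class LMSRMarket:
    """Minimal LMSR market maker for a binary question, e.g.
    "will the project ship by the deadline?".

    b controls liquidity: larger b means prices move more slowly
    per share traded.
    """
    def __init__(self, b=100.0):
        self.b = b
        self.q_yes = 0.0  # outstanding YES shares
        self.q_no = 0.0   # outstanding NO shares

    def _cost(self):
        return self.b * math.log(math.exp(self.q_yes / self.b)
                                 + math.exp(self.q_no / self.b))

    def price_yes(self):
        """Current implied probability of YES."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Buy YES shares; returns the cost charged to the trader."""
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

market = LMSRMarket(b=100.0)
p0 = market.price_yes()    # 0.5 before any trades
cost = market.buy_yes(50)  # an engineer bets the deadline will be met
p1 = market.price_yes()    # implied probability rises above 0.5
```

A trader pays roughly the area under the price curve, so each trade moves the quoted probability incrementally; the market maker's worst-case loss on a binary question is bounded by b·ln(2), which is what makes the subsidy predictable for the company.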
I mean, if you're a bigwig at an executive agency, yes, you want to make the country better, but you also want to advance your career. What if you run an RCT and it says your party's ideas won't work (or that the other side's will)? You could have just implemented the programs your side likes, gotten praised, and landed a cushy job -- now people hate you either way. Or what if it just shows that what you've been crusading for won't succeed?
Ultimately as long as it's considered acceptable NOT to do RCTs there are lots of incentives not to do them.
But yes, if we start small maybe we can cultivate a culture that doesn't see them as optional.
> However, a counterargument to this is that trials are always going to have unexpected situations. It is virtually impossible for anyone to predict all the ways in which a trial could go wrong or foresee all the possible issues with design. As such, it is good that the FDA has the ability to reject designs at any point in the process and pre-specifying all the conditions for failure would be next to impossible.
It would be good for an organization that understands cost-benefit analysis and acts accordingly to have that power. But it does not seem like the FDA is such an organization.
A few thoughts:
1) How important is it really for the big pharma companies to innovate themselves, rather than letting smaller startups undertake those risks and buying up the ones which succeed? After all, the existence of bankruptcy often favors doing risky things in small companies that can go poof if things go bad.
2) It seems like it should be easier to frontload the parts of trials most likely to result in failure.
Start ups also need to carry out trials.
Absolutely, that comment was just about the narrow question of big pharma being conservative. I saved my screed against IRBs for another comment.
I mean, I agree with most of what you said; I just think it's inherent in the IRB model. The conservatism, and the reluctance to say that anything which makes people uncomfortable should be done, is a design feature of the system, not an error. I just don't see how you fix it without addressing those fundamental incentives, so I was just quibbling at the edges.
Descriptively, the current "buy up the ones which succeed" model means big pharma buying up startup drugs after their Phase 2 trials (which investigate and suggest efficacy) and before their Phase 3 trials (large-scale efficacy and safety). At this point the drugs are generally more than 50% likely to succeed, yet still usually hundreds of millions of dollars and many years away from reaching a commercial patient.
A world where "buy the ones that succeed" meant after FDA approval and so the startup risk-takers could run things all the way through the clinic would be great for the reasons you suggest, but it's not the world we have now.
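The economics of that timing can be seen with a back-of-the-envelope expected-value calculation. All figures below are invented for illustration, not real transaction data:

```python
def acquisition_value(p_success, approved_value, remaining_cost):
    """Expected value (same units as the inputs) of buying a drug asset:
    success probability times the value if approved, minus the trial
    costs the acquirer must still pay to get there."""
    return p_success * approved_value - remaining_cost

# Illustrative figures in $ millions: a drug worth 2,000 if approved,
# a 55% chance its Phase 3 succeeds, and 400 of Phase 3 costs remaining.
pre_phase3 = acquisition_value(p_success=0.55, approved_value=2_000,
                               remaining_cost=400)
# Buying after approval means no remaining trial risk or cost.
post_approval = acquisition_value(p_success=1.0, approved_value=2_000,
                                  remaining_cost=0)
```

With these made-up numbers the pre-Phase-3 asset is worth about $700M in expectation versus $2B after approval, which is why the purchase (and the transfer of Phase 3 risk to big pharma) happens at Phase 2 rather than after the finish line.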
Thanks that told me something I didn't know.
Would Recovery have even worked without the NHS? I wonder how insurmountable this is in the US due to the fragmented healthcare system.
Could these improvements be implemented in China? It appears to me that the root of the problem is political: the personal incentives for any one individual push toward avoiding any kind of visible failure. But China has some unique characteristics; the government can fund people to take risks, and if it wants, it can keep failures out of the news. They are also facing the prospect of an economic depression and a lack of career opportunities for young people. It would cost some money to train more people to do biological and clinical research, but it would be more productive than letting young people lie flat.
China, too, refused to conduct challenge trials of the covid vaccines, which surprised me a bit.
Their leadership was very (is very?) risk averse. I think understandably so. Doing challenge trials would have required infecting people, with the possibility that people in the trials would inadvertently spread it around.