Over the last few days, I have seen floated on X the idea of introducing some form of affirmative action for conservatives in academia, with the aim of achieving 50/50 representation of Democrat/Republican beliefs. While this seems wrong and also unlikely to be achievable (academics skew heavily towards voting Democrat, and I think it’s a temperamental thing), a better and probably more impactful initiative would be to guarantee some form of protection against career damage for academics who are willing to testify about the inefficiencies introduced by stringent regulations. This would also serve, at least tangentially, as affirmative action of sorts for libertarian-leaning academics, if not outright conservatives.
This is a joke and it is not. I explained in a previous post that we lack “hobbitian courage”: a massive issue in trying to propose anything related to clinical trial reform is that you need deregulation. But respectable academics do not want to put their names behind official criticisms of current regulations, especially if such criticisms can be construed as not “maximizing safety”. This is because such academics can incur career damage in a diffuse way, with little potential upside (you do not become a media celebrity by criticizing your local institutional review board, or IRB). It thus becomes very hard to argue for reforms to policymakers in a coherent and credible way that does not make you sound like a lunatic.
This idea was spurred by an email I received from a Professor (I can see his name and check his affiliation, but of course, I will keep it anonymous here), in response to my previous post, in which he suggested I look at IRBs. Of course I had looked at IRBs — and I am preparing some ideas on that front. But again, it is very hard to find any study that demonstrates how onerous and annoying IRBs can be for academics, and that no, they are not the barrier preventing doctors who would otherwise perform dangerous experiments on children from doing so.
An institutional review board (IRB) is a committee at an institution that applies research ethics by reviewing the methods proposed for research involving human subjects, to ensure that the projects are ethical. Now, everyone wants to be ethical and of course, being seen as arguing against ethics is BAD, so saying anything that might be construed as criticizing ethics-enforcing mechanisms like IRBs will make you look like a BAD person.
I will copy the relevant part of his email below:
I agree with your main concerns regarding the tremendous inefficiencies of clinical trials. Should it really cost a billion dollars plus to bring a drug to market?
While at the hospitals, I served on IRBs; that is why I am writing to you today. Your concern with 'efficiency,' i.e., optimizing the risk/reward ratio could also be directed at IRBs. From my perspective, very few patients are 'saved' by the hundreds of thousands (plus?) cumulative hours spent on IRB activity. Those hours and resources, if spent on research, would help many more people.
When I discuss this with others, I am almost always directed to the Tuskegee and Gelsinger cases. Those happened a half and a quarter century ago, respectively. Since then, societal attitudes have evolved to make abuses even less likely.
I do not think “the concept of an IRB” is always a bad idea: for example, it’s probably good to have someone double check that, in an interventional trial, the trial design is ethical and the control arm includes standard of care as opposed to placebo, if a standard of care exists. But of course, IRBs have evolved into monstrosities that go far beyond that, eating away at the time of both those who carry out research and those who sit on the review boards themselves.
And I think the cultural point is very good. For those who are not familiar with the Tuskegee study: it was a U.S. Public Health Service experiment, carried out between 1932 and 1972, in which researchers followed hundreds of Black men with syphilis in Macon County, Alabama, without telling them their diagnosis or offering effective treatment, even after penicillin became the standard cure. It is now regarded as one of the most notorious examples of medical ethics violations, highlighting racism and lack of informed consent in research and is often given as an example whenever someone complains about how onerous IRBs have become.
But things have changed in many ways between 1972, when the study ended, and now. For example, Justice Ruth Bader Ginsburg often recalled being summoned to her Harvard Dean’s office and asked why she was occupying a place that could have gone to a man. This kind of behaviour would be unthinkable in an elite university today. But this is not because an IRB stands behind all male Professors whenever they interact with a female student, making sure that the student has given informed consent to being talked to and that the male Professor has signed 10000 forms acknowledging that he should not tell a female student her place is in the kitchen and not in a university. No, the reasons are much simpler: culture has massively changed. It would be unthinkable for most Professors to do that, because people are subject to mechanisms of behavioural influence outside pure bureaucracy. Besides that, there are other mechanisms ensuring that if someone said this, they would generally suffer professional consequences. In the same way, the Tuskegee study would not happen today because it would be pretty much unthinkable: the doctors would go to prison, and so on. It’s not an IRB that stops anyone from doing Tuskegee II. Now, as I mentioned before, I think when it comes to interventional studies in particular, having a second pair of eyes to check that everything is correct, as mistakes can happen even if no harm is intended, is not a bad idea.
Perhaps not coincidentally, the best explainer of how IRBs have metastasized into something far beyond what they were intended to do comes from a then-anonymous writer: Scott Alexander. His “My IRB nightmare” piece is the perfect justification that should be included at the start of any policy memo, but of course, it looks weird to justify a policy change using evidence from an internet anon. To compensate for the lack of “credibility” of such a source, you have to scramble for “economic studies” and “official surveys” that basically capture, in a much worse way, what Scott wrote in his post. Perhaps the solution is to do affirmative action for people coming forward under their real names….