Over the last few days, the idea of some form of affirmative action for conservatives in academia has been floating around on X. Some even support achieving 50/50 representation of Democrat/Republican beliefs. Leaving aside whether this is desirable, it is unlikely to be achievable, at least at a 50/50 ratio: academics are extremely skewed towards voting Democrat, and I think this reflects deep temperamental differences, which are not going to change any time soon.
However, this made me think of an actually semi-realistic proposal: a whistleblower protection program against career damage for academics who are willing to testify about inefficiencies introduced by stringent regulations. This would also serve indirectly, at least tangentially, as a political affirmative action of sorts for libertarian-leaning academics, if not outright conservatives.
This is a throwaway idea and I am not sure how implementable it is (which is why I am writing it on my blog), but, as I explained in a previous post, academia and the sciences in general suffer from a bureaucracy problem that decreases efficiency. What we lack is “hobbitian courage”, or everyday courage: not extreme acts of political rebellion, but simply individuals willing to put their names behind specific complaints and proposals, so that these are made legible to policymakers. There is basically no incentive to do so at the moment, because academics risk diffuse career damage with little potential upside: you do not become a media celebrity by criticizing your local institutional review board, or IRB, but your life can be made substantially harder by your local compliance officers. In fact, I would say this was literally the biggest barrier when I tried to propose ideas for clinical trial reform.
This idea was spurred by an email I received from a Professor1, in response to my previous post, suggesting I look at IRBs. An institutional review board (IRB) is a committee at an institution that applies research ethics by reviewing the methods proposed for research involving human subjects, to ensure that projects are ethical. Now, everyone wants to be ethical, and of course being seen as arguing against ethics is BAD, so saying anything that might be construed as criticism of ethics-enforcing mechanisms like IRBs will make you look like a BAD person.
I will copy the relevant part of his email below:
I agree with your main concerns regarding the tremendous inefficiencies of clinical trials. Should it really cost a billion dollars plus to bring a drug to market?
While at the hospitals, I served on IRBs; that is why I am writing to you today. Your concern with 'efficiency,' i.e., optimizing the risk/reward ratio could also be directed at IRBs. From my perspective, very few patients are 'saved' by the hundreds of thousands (plus?) cumulative hours spent on IRB activity. Those hours and resources, if spent on research, would help many more people.
When I discuss this with others, I am almost always directed to the Tuskegee and Gelsinger cases. Those happened a half and a quarter century ago, respectively. Since then, societal attitudes have evolved to make abuses even less likely.
I wish established academics could say this stuff openly, as it would carry more weight than it does coming from someone relatively junior, because he is fundamentally right.
I do not think “the concept of an IRB” is always a bad idea: for example, it is probably good to have someone double-check that, in an interventional trial, the design is ethical and the control arm receives standard of care rather than placebo, if a standard of care exists. But of course, IRBs have evolved into monstrosities that go far beyond that, eating away at the time of both those who carry out research and those who sit on review boards. Incidentally, this particular topic, unlike many others, actually benefits from some academics having articulated the problem: for example, Prof Simon Whitney in From Oversight to Overkill: Inside the Broken System That Blocks Medical Breakthroughs—And How We Can Fix It.
And I think the cultural point is very good. For those not familiar with the Tuskegee study: it was a U.S. Public Health Service experiment, carried out between 1932 and 1972, in which researchers followed hundreds of Black men with syphilis in Macon County, Alabama, without telling them their diagnosis or offering effective treatment, even after penicillin became the standard cure. It is now regarded as one of the most notorious examples of medical ethics violations, highlighting racism and the lack of informed consent in research, and it is often brought up whenever someone complains about how onerous IRBs have become.
But things have changed in many ways since 1972, when the study ended. For example, Justice Ruth Bader Ginsburg often recalled being summoned to her Harvard Dean’s office and asked why she was occupying a place that could have gone to a man. This kind of behaviour would be unthinkable at an elite university today. But that is not because an IRB stands behind every male Professor whenever he interacts with a female student, making sure the student has given informed consent to being talked to and that the Professor has signed 10,000 forms acknowledging that he should not tell a female student her place is in the kitchen and not in a university. No, the reason is much simpler: culture has massively changed. It would be unthinkable for most Professors to behave that way, because people are subject to mechanisms of behavioural influence outside pure bureaucracy. Beyond that, other mechanisms ensure that anyone who said such a thing would generally suffer professional consequences. In the same way, the Tuskegee study would not happen today because it would be pretty much unthinkable, the doctors would go to prison, and so on. It is not an IRB that stops anyone from running Tuskegee II. Now, as I mentioned before, I think that when it comes to interventional studies in particular, having a second pair of eyes check that everything is correct is not a bad idea, since mistakes can happen even when no harm is intended.
Even though IRBs represent one of the few such areas that academics have publicly criticized, it is still telling that, perhaps not coincidentally, what I consider the most visceral and compelling explainer of how IRBs have metastasized into something far beyond what they were intended to do comes from a writer who was, at the time, anonymous: Scott Alexander. His “My IRB nightmare” piece is the perfect justification to include at the start of any policy memo, but of course it looks weird to propose a policy change using evidence from an internet anon with an n of 1 (Scott has since been *doxxed*, but still…). To compensate for the lack of “credibility”2 of such a source, you have to scramble for “economic studies” and “official surveys” that capture, in a much worse way, what Scott wrote in his post. Perhaps the solution is to do affirmative action for people coming forward under their real names….

1 I can see his name and check his affiliation, but of course, I will keep it anonymous here.

2 By this I mean “official” credibility: Scott is very well respected by a large number of very smart/successful/good people.


I recommend a couple of books:
The Censor's Hand (arguing that IRBs do more harm than good): https://www.amazon.com/Censors-Hand-Misregulation-Human-Subject-Bioethics/dp/0262028913
Ethical Imperialism: https://www.amazon.com/Ethical-Imperialism-Institutional-Sciences-1965-2009/dp/0801894905
Scott Alexander has a depressing but hilarious story of his attempt to deal with the IRB: https://slatestarcodex.com/2017/08/29/my-irb-nightmare/