11 Comments
Peter Tillman

Keep up the good work. Frustrating, I know.

Mac Black

In behavioral health, the problem of cohort discovery can shift to the extreme case: implementation challenges that are entirely informatic/bureaucratic in nature. Let's say I want to study whether peer counseling and contingency management (micropayments for self-administration of home drug tests on a secure interface) can reduce substance-related hospitalizations. There is essentially zero intervention risk, no unknowns. But an IRB would still govern the trial, and the friction of this process is a key determinant of the accessibility and cost of the information yield from the trial. What if the cohort of all potential research participants could be recruited ambiently, and the protections of the IRB could be replaced with a universal outcome-measure dataset negotiated structurally for everyone? All the individual participants would do is choose whether to enroll and share their data (the risk profile, the outcome data). What if the transaction cost of research could be optimized toward zero, while structurally protecting the right to consent and exit?

Peter Gerdes

Fundamentally, IRBs are the problem. Using other forms of IRBs -- ones which are implicitly incentivized to make it easier to get approval -- is essentially a workaround of the whole idea of IRBs, so if we are willing to do that, it's time to rethink the whole idea.

The very structure of IRBs themselves is problematic in several ways.

1) It treats the very fact that you are deriving useful information from an intervention as itself an ethical concern.

My wife was once talking to a bunch of social scientists about a philosophy paper that involved asking other philosophers how they thought about some issue in their discipline (surveys about how philosophers think about certain questions are commonly published in philosophy), and they were horrified at the idea that one might use information from talking to colleagues for a paper without going through an IRB.

Apparently, if a doctor just decides randomly what drug you get, that's fine; but if you pair up those doctors so it happens in a systematic way that lets you learn, then nope.

2) IRBs are fundamentally biased toward considering the ethical harms from conducting the study and not the ethical harms from not doing the study. If you asked an IRB to evaluate the impact of requiring IRBs it would never be approved. Also see 1.

3) IRBs treat the subjects in a trial as fundamentally different than those not in it. They essentially internalize the deeply wrong moral idea that if you touch something you are morally responsible for it.

---

And no one has ever produced a shred of evidence that IRBs are particularly effective at stopping the kind of unethical research that motivated the system. The horrible experiments like Tuskegee weren't done by one or two lone sociopaths; quite a few people were involved, because it was a societal blind spot. Why assume that IRBs would be good at catching these things?

We don't need better IRBs we need to burn the system to the ground.

Andrew Buck

The IRB system was created with the intent of addressing an egregious but rare problem: medical professionals being deceptive or abusive toward their patients. But in attempting to address this problem, it has created a new one, impeding every single medical professional genuinely trying to help patients.

The key to any well-functioning system is feedback-based change. Without oxygen sensors to provide feedback about combustion efficiency, car engines function quite poorly, and the same concept is equally true for organizational processes and regulations. Without a mechanism for determining whether regulations are achieving positive effects, regulations are doomed to damage society.

Getting rid of IRBs and instead passing a simple law requiring informed consent for all human studies, with criminal penalties for researchers who fail to comply, would provide protection for patients without being an impediment to progress and treatment for informed people who want it.

Chris Buck

This is an important point. In most areas, the law establishes penalties for the commission of defined crimes. The Common Rule is more like Minority Report - where IRBs are essentially chartered to police poorly defined pre-crimes. The fact that we don't have precogs to consult means the check-and-balance feedback systems you're calling for ought to be seen as paramount. Instead, we have a situation where physicians and scientists have allowed themselves to be cowed into reflexively kissing the ring of papal IRB authority - no matter how nonsensical their unilateral edicts may be. We can resolve to stop doing that.

Chris Buck

I appreciate and wholeheartedly support your efforts to develop improved policies, but I've also begun entertaining the hope that we could make some progress simply by implementing the policies we already have in ways that are more faithful to lawmakers' original intent.

The Common Rule, which implements current IRB systems, grew out of the response to The Tuskegee Study of Untreated Syphilis in the Negro Male. The term "untreated" was at the center of the scandal - which could be succinctly summarized as an institutional decision to withhold penicillin from syphilis patients. In other words, a stated objective of the Common Rule is to prevent institutional scientists from secretively withholding medicine from people.

The bitter irony is that secretive institutional medical withholding is exactly what happened in The Study of Untreated Mastocytoma in Rosie the Dog, The Study of Untreated Osteosarcoma In Sid, and The Study of Untreated Throat Cancer in Jake. We've gotten so far upside down from the original intent of the Common Rule that modern IRBs aren't routinely preventing the central abuse of the Tuskegee Study, they're routinely replicating the abuse.

Lawmakers didn't mean for things to be this way. The statutory language builds on the 1978 Belmont Report's clear exclusion of individual healthcare activities from the narrow legal definition of "Human Subjects Research." Even if individual healthcare activities include elements that might colloquially be thought of as "research-like," the law states that they fall outside the IRB's purview. The idea that IRBs should be interfering with Rosie, Sid, and Jake's individual self-care activities is a grotesque misreading of the law.

The basic trouble is that most research scientists are too scared to directly read the law for themselves, and even when we do we're often too scared to give the IRB the middle finger it lawfully deserves when it comes to managing personal healthcare activities. Getting over these fears could help move the ball forward.

https://cbuck.substack.com/p/the-ethics-of-self-experimentation

Chris Buck

Another place we could simply resolve to implement the existing law exactly as it was originally intended:

https://www.ecfr.gov/current/title-45/part-46/section-46.116#p-46.116(a)(6)

"No informed consent may include any exculpatory language through which the subject or the legally authorized representative is made to waive or appear to waive any of the subject's legal rights, or releases or appears to release the investigator, the sponsor, the institution, or its agents from liability for negligence."

TheAnswerIsAWall

You touched on a big issue with all of this, i.e., the US plaintiffs bar.

Rake

The reason this happens is that many jobs depend on this layer of bureaucracy existing. What we see currently is the desired outcome, and it couldn't have turned out any other way. It's a feature, not a bug. There are many jobs that are not only pointless but actively harmful, and those are the ones we really need to focus on eliminating as a society, no matter how painful it will be. I suspect we can find a way to be compassionate toward the massive number of people who would lose their jobs if we focused on eliminating harmful roles like this.

Dave Friedman

Two things might force the FDA to change: (1) national security concerns (China starts developing drugs that cure various diseases that plague the United States); (2) fiscal pressures (healthier lifespans, if not longer and healthier lives, reduce the expense associated with caring for the old and infirm). There might be other forcing functions that will effect change at the FDA, but these two seem to be obvious candidates.

Brian Moore

Also, theoretically, Epic (the EMR) has a module (Cosmos) that is designed to facilitate research, containing anonymized medical records of... well, nearly everyone. It could be set up to enable exactly what we need here, but there are legal/regulatory barriers that would need to be removed.

https://cosmos.epic.com/about/