Randomized Controlled Trials Are The SSRIs of Trial Design
A Frontier Psychiatrists Guest Post by Ben Spielberg.
The Frontier Psychiatrists, ironically, features writers who are not literal psychiatrists with some regularity. This has to do with the fact that the name of the newsletter is based on a song by The Avalanches, which my wife decided we should use as the name of our Clubhouse show before we changed it to New Frontiers (before that died a slow death from ignoring its creators). Newsletters, on the other hand, are all about their creators. Like any publication trying to do something daily, it's a lot easier if you don't have to write it all yourself. The original Clubhouse show that Carlene and I started also featured people who were not, strictly speaking, psychiatrists; in many cases, they weren't even mental health professionals. We were talking about mental health, but we included perspectives from patients, other professionals, computer scientists, NFT creators, and disability advocates, and even the founder of 8chan, Fred Brennan, joined us.

Today's column is a contribution, with a little bit of light editing, from my friend and reader Ben Spielberg. Ben and I see a lot of things similarly, but these are his opinions, and I'm only editing for the purpose of bringing the writing up to my rigorous standards. That's also kind of a joke, because I have a lot of typos and terrible grammar. Thank you for reading, and if you're a reader and have something you wanna write, don't hesitate to reach out; my standards are not unreasonable! I think of the readership as a community, and the newsletter itself is more than just Owen. I'm kind of the custodian of the energy that the community of readers and authors brings together, and I hope it lives on well past my tenure as an editor!
Without further ado, here's a spicy take on randomized controlled trials as not the be-all and end-all of science.
Anyone who has taken a Psychology 101 course has likely heard that double-blinded, randomized controlled trials (RCTs) are the "gold standard of trial design." It is treated as dogma, rarely questioned, and regarded with the same certainty as the laws of physics. However, randomized controlled trials are not without their flaws. I fear that we have become so obsessed with controlling experiments that the results have become less transferable to real-world clinical populations. RCTs often miss individual variability; they are so tightly controlled that they become nearly impossible to replicate in real-world conditions, and while they may reveal "significance," they often do so despite infinitesimal effect sizes. Moreover, controlling for an ever-increasing placebo effect is Sisyphean at best and a fool's errand at worst.
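To make the "significance despite infinitesimal effect sizes" point concrete, here's a toy simulation (mine, not Ben's; the effect size and sample size are invented for illustration). With enough participants, even a benefit of 5% of a standard deviation sails past the p < .05 bar:

```python
import math
import random
import statistics

random.seed(0)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical trial: treatment shifts outcomes by just 5% of a
# standard deviation (Cohen's d = 0.05), a clinically negligible effect.
n = 100_000  # per arm -- huge trials can make trivial effects "significant"
control = [random.gauss(0.0, 1.0) for _ in range(n)]
treatment = [random.gauss(0.05, 1.0) for _ in range(n)]

t = welch_t(treatment, control)
print(f"t = {t:.2f}")  # comfortably past the 1.96 cutoff for p < .05
```

The "significance" here is a statement about sample size, not about whether any individual patient would notice a difference.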
"Do not believe on the strength of traditions, even if they have been held in honour for many generations and in many places; do not believe anything because many people speak it; do not believe on the strength of sages of old times."
—The Buddha (or something)
The first modern RCT took place in 1948 under the direction of Sir Austin Bradford Hill, an English epidemiologist. He studied the impact of streptomycin, an antibiotic, on tuberculosis. Participants were randomized into a "treatment group," which consisted of streptomycin + bed rest, and a "control group," which consisted of only bed rest. Bradford Hill borrowed the study design from a colleague by the name of Ronald Fisher, who employed a similar experimental design in his trials...for plants. The trial was heralded as a huge success. Streptomycin became an evidence-based treatment for tuberculosis in a milestone clinical trial!
Except that, as it turns out, a number of participants' symptoms had worsened after they developed streptomycin resistance. Ironically, Bradford Hill is best known for his research on unearthing the link between tobacco and lung cancer... which was a series of case studies and case-control studies and not a randomized controlled trial.
An Issue of Variability
From a population health perspective, RCTs are very helpful. They tell us broadly whether an intervention might do something if thousands upon thousands of people adhere to that intervention. They can tell us if one intervention outperforms another on average, and we can do fancy post-hoc analyses to dive deeper into the intervention's effect size and spread, though we do so at the risk of losing statistical power. We can undoubtedly tease out specific criteria in an attempt to generalize the findings to the rest of the population--or to specific sub-groups in a larger population. But we have to be careful, because we don't want the placebo effect to creep in. Large, blinded RCTs are very important studies that are also borderline useless for a suffering patient in the real world.
Placebo: The Boogey Man of Trials?
The placebo effect--that is to say, a trial participant's expectation that the trial will have a positive effect--must be controlled for, or else participants can experience that positive effect without the intervention itself. In a randomized controlled trial, we don't want the placebo effect to creep in. If people think they will get better, they end up getting better, but we only want to see if the intervention makes people better. We don't want the expectation of getting better to actually make us better, even though that expectation is present most of the time when you see a clinical professional.
The placebo effect is sneaky. We must control for the placebo effect! The placebo effect is getting stronger over time. The placebo effect watches you when you sleep. We used to only need one control condition for an adequate comparison. Now, we need multiple control conditions in a high-quality, appropriately powered, well-designed study. Someday, there might be so many placebo conditions we forget what we are researching in the first place. Perhaps instead of outwitting the placebo effect, we should approach trial expectations differently.
Let's consider neurofeedback. Neurofeedback is a little-known treatment that uses a brain-computer interface to reflect brain activity in real time, which can then be manipulated by the person connected to the device. There have been small studies and case studies using neurofeedback to treat most neuropsychiatric conditions, including ADHD, sleep disorders, depression, and epilepsy. Neurofeedback is somewhat controversial for many reasons, one being that many high-quality studies have had difficulty separating active conditions from control conditions. The funny part is that neurofeedback was discovered in epileptic cats, who are pawsitively resilient to the placebo effect. The first case study using neurofeedback on a human was performed on a participant with severe treatment-resistant epilepsy, who was able to enjoy three full seizure-free months for the first time since childhood after a series of very rudimentary neurofeedback sessions. Maybe it was just the placebo effect. Or maybe we just need to change the way we study psychiatric treatments. PRISM neurofeedback, for example, has a pretty large effect size without a large RCT to date.1
One way the PRISM researchers addressed this was by using within-subject comparisons, so that individuals were treated as their own controls.
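A rough sketch of why that helps (my toy numbers, not the PRISM study's): when stable person-to-person differences dwarf the treatment effect, comparing each person's pre- and post-treatment scores cancels out their personal baseline, leaving far less noise to fight through.

```python
import random
import statistics

random.seed(1)

# Hypothetical symptom scores: each participant has a stable personal
# baseline (large individual differences) plus session-to-session noise.
n = 200
effect = 0.3  # assumed treatment benefit, in arbitrary symptom units

baselines = [random.gauss(0.0, 2.0) for _ in range(n)]
pre = [b + random.gauss(0.0, 0.5) for b in baselines]
post = [b - effect + random.gauss(0.0, 0.5) for b in baselines]  # symptoms drop

# Between-subject view: scores are dominated by who you are, not the treatment.
between_sd = statistics.stdev(pre)

# Within-subject view: each person serves as their own control, so the
# personal baseline subtracts out; only noise and the effect remain.
diffs = [p - q for p, q in zip(pre, post)]
within_sd = statistics.stdev(diffs)

print(f"spread across people: {between_sd:.2f}")
print(f"spread of within-person change: {within_sd:.2f}")
```

With these made-up parameters, the within-person spread is a fraction of the between-person spread, which is exactly why a small, hard-to-blind treatment effect is easier to detect when individuals are their own controls.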
Editor’s note: I podcasted about PRISM here:
Context is Key
An intervention does not exist in a vacuum. Rather, the success of an intervention is the result of the combination of the intervention and the context in which it's delivered. We see this in psychedelics, which help kickstart a surge of neuroplastic mechanisms in the brain, but only within the proper context. Likewise, it is well known that all oral antidepressant medications seem to "work better" when psychotherapy is performed alongside them, providing an important contextual interaction. If the interaction is more important than the intervention itself, and we attempt to reduce the interaction as much as possible, what are we even measuring? In animal models, the effects of stimulants are moderated by the surrounding environment: the same dose is more stimulating in a stimulating environment than in a non-stimulating one.
RCTs aren’t going away any time soon—they’re still an important tool. But like SSRIs, they can’t be the only solution we rely on as the gold standard. It’s time to embrace individual differences and rethink trial design with expectation and context in mind.
Ben Spielberg is the Founder and Chief Executive Officer of Bespoke Treatment, a comprehensive mental health facility with offices in Los Angeles, CA, and Las Vegas, NV. He is also a PhD Candidate in Cognitive Neuroscience at Maastricht University.
Thanks, Ben, for sharing this with us. I tend to agree. It's hard to get past the dogma, especially given the remarkable success of the RCT as a tool, but it's not the right tool for every problem!
For more on clinical trials in psychiatric medications, look at my book Inessential Pharmacology (amazon link).
For pieces by other TFP contributors, follow:
Courtny Hopen BSN, HNB-BC, CRRN, and many others!

1 Fruchter, E., Goldenthal, N., Adler, L. A., Gross, R., Harel, E. V., Deutsch, L., Nacasch, N., Grinapol, S., Amital, D., Voigt, J. D., & Marmar, C. R. (2024). Amygdala-derived-EEG-fMRI-pattern neurofeedback for the treatment of chronic post-traumatic stress disorder. A prospective, multicenter, multinational study evaluating clinical efficacy. Psychiatry Research, 333, 115711. https://doi.org/10.1016/j.psychres.2023.115711
We offer PRISM treatment at Fermata, my clinic in New York, and Ben offers it at Bespoke as well.