Whence, Placebo? Controlled Trials Beyond Double Blind RCTs
A meditation on trial designs and their discontents.
It is Easter Sunday for those who celebrate. I grew up with a theoretical Christian faith tradition, with parents who were excommunicated Catholics. Today, we remember the sacrifice of bunnies, who were forced to hide eggs by Santa. Or something. In all seriousness, this is a holy day for those who believe those things. For others, it is a day that is Sunday plus egg hunts.
This is a health-themed newsletter, and one of the remarkable tools of science, in which we have put our faith in trying to eliminate the role of faith as the healing element of treatments, is the placebo-controlled trial.
Humans have faith at baseline. That is why we can't simply give a treatment and observe whether it works: expectation itself can heal. The power of faith is remarkable, so we use inactive compounds (placebo) in drug studies and inactive procedures (sham) in device studies.
One of the sticky wickets in psychiatry is the changing placebo response over time. There has even been a meta-analysis of the placebo effect over time.1
The power of placebo? It's been increasing, according to expert raters. The top box is the effect-size trend in antidepressant trials as rated by professional raters, and the "potency" creep is actually steeper in the placebo group than in the active treatment groups. The same trend has not been found in self-report. It's almost as if the patients themselves have not been fooled by placebo, in their own experience…but the experts have found something different.
This trend is from 10,000 feet, but it suggests, to your author, that we are getting more biased in the wrong direction when it comes to the "objective" standard of expert ratings.
Keep in mind, to demonstrate a treatment is effective in a clinical trial, the intervention has to beat placebo. And if placebo response is getting more and more robust (according to expert raters), that becomes harder.
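To make that concrete, here is a toy sample-size sketch. All response rates below are hypothetical, and the formula is the standard normal approximation for comparing two proportions at 80% power and two-sided alpha of 0.05; the point is just that as the placebo response rate creeps toward the drug's, the number of patients required per arm balloons.

```python
import math

def n_per_arm(p_drug, p_placebo, z_alpha=1.96, z_beta=0.84):
    """Approximate patients per arm to detect a difference between two
    response rates (normal approximation; 80% power, two-sided alpha=0.05)."""
    variance = p_drug * (1 - p_drug) + p_placebo * (1 - p_placebo)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_drug - p_placebo) ** 2)

# Hypothetical: a drug with a fixed 50% response rate, tested against
# placebo arms of steadily rising "potency."
for p_placebo in (0.30, 0.35, 0.40, 0.45):
    print(f"placebo response {p_placebo:.0%} -> {n_per_arm(0.50, p_placebo)} patients/arm")
```

A 30% placebo response calls for roughly 91 patients per arm against a 50% drug; at 45% placebo response, the same comparison needs over 1,500 per arm. Rising placebo response makes trials dramatically more expensive before it makes drugs look ineffective.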
Why don’t we just use these powerful placebos?! Well, the New England Journal of Medicine evaluated this approach (placebo compared to nothing at all)2, and found:
As compared with no treatment, placebo had no significant effect on binary outcomes, regardless of whether these outcomes were subjective or objective.
There was a small effect on continuous variables, and in pain indications:
For the trials with continuous outcomes, placebo had a beneficial effect, but the effect decreased with increasing sample size, indicating a possible bias related to the effects of small trials. The pooled standardized mean difference was significant for the trials with subjective outcomes but not for those with objective outcomes. In 27 trials involving the treatment of pain, placebo had a beneficial effect, as indicated by a reduction in the intensity of pain of 6.5 mm on a 100-mm visual-analogue scale.
Placebo is super-powerful…but only in clinical trials. Perhaps the clinical trial itself is the potent intervention? Hope that you will get the fancy new thing, faith that this will work…that is powerful.
Not all conditions have a meaningful placebo response. I've previously written about the "parachute" effect—the "response rate" of placebo parachutes in the "falling from a plane" group would be close to zero. "Brakes for Runaway Trains" also don't need a sham control. Those would be unethical studies.
Similarly, some trials are harder to blind than others. Psychedelic medicines have potent effects, and those are hard to blind without an active agent. One way we evaluate a trial's blinding is to simply ask participants which group they thought they were in; if they can guess better than chance, the trial was not successfully blinded.
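That guess-rate check can be sketched as an exact binomial test against chance. The trial size and guess count below are invented for illustration; the question is simply how likely this many correct guesses would be if the blind had truly held (a 50/50 coin flip per participant).

```python
import math

def p_guess_at_least(k, n):
    """P(X >= k) for X ~ Binomial(n, 0.5): the chance that at least k of n
    participants correctly guess their arm if the blind held perfectly."""
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Hypothetical trial: 80 participants are asked which arm they were in,
# and 55 guess correctly.
p = p_guess_at_least(55, 80)
print(f"chance of >= 55/80 correct guesses under an intact blind: {p:.4g}")
```

With numbers like these, correct guessing far exceeds chance, which is the signal that the blind was functionally broken. (Real trials often use more elaborate measures, such as the Bang blinding index, but the logic is the same.)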
Nolan Williams, M.D. speaks eloquently about this effect in his recent interview with Tim Ferriss, and I suggest a listen. In the SNT accelerated TMS trial, for example, they went to great lengths to preserve the blinding—the sham coil created a zapping sensation on the forehead of the person getting the treatment that hurt a little, just like the "real thing."
But Can’t You Just Tell, Sometimes?
In psychedelic medicine trials, this gets harder, as evidenced by a recent assessment of the MAPS studies of MDMA in PTSD. Along with other relevant concerns, the blind was routinely broken among participants, and blinding of the therapists was not evaluated.
It gets harder and harder to run a placebo-controlled trial with medicines that have obvious and robust immediate effects on sensory experience! That doesn't mean it's impossible, however, to evaluate complex interventions with potent non-specific effects.
I Am Evaluating the Man In The Mirror.
John Kane, M.D., for years has been arguing that the Placebo-Controlled RCT is a good design—and he should know as the investigator who ran the clozapine approval trials3—but not the only possible way to evaluate treatments reliably.
His argument, which I learned from him during my time at Hillside, is that in some conditions we want to use alternate designs, like a Mirror Image study, to capture change. Some problems are more like falling out of a plane—hard to blind, but not impossible to evaluate with a control.
To illustrate this approach—which uses each individual as their "own control," comparing the same people before and after an intervention—let's use the impossible-to-blind intervention of supported housing:
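As a sketch of the arithmetic behind a mirror-image analysis, consider entirely made-up inpatient-day data (not from any real study): each person's twelve months before entering supported housing are compared with their own twelve months after, and the within-person differences drive the test.

```python
import math

# Hypothetical mirror-image data: inpatient days for the same ten people
# in the 12 months before vs. the 12 months after entering supported housing.
before = [42, 18, 60, 25, 33, 51, 12, 40, 29, 36]
after  = [10,  5, 22,  8, 15, 20,  3, 14,  9, 12]

diffs = [b - a for b, a in zip(before, after)]   # each person is their own control
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))               # paired t statistic, df = n - 1

print(f"mean reduction: {mean_d:.1f} inpatient days/person, paired t = {t:.2f}")
```

The design's weakness is visible in the code: there is no randomization and no concurrent control, so secular trends (a person might have improved anyway) can masquerade as treatment effect. Its strength is that within-person comparison removes between-person variability, and no blinding is required.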