Today’s The Frontier Psychiatrists is lazy on my part, and blatantly commercial—but that is on brand for the content. I’m flying back from the Clinical TMS Society in London. I got to see the lecture hall in which Michael Faraday spoke!
Michael Faraday’s work birthed electromagnetic induction, the basis of TMS treatment of depression and OCD. Mr. Al Lewis evaluated the data behind that treatment and found it credible and worth paying for in health plans. The technology led to a Health Value Award win for Acacia Clinics, with its mental health benefit. Employers? Call me. That having been said, other mental health “solutions” have gone to some absurd lengths to misleadingly document their superiority, and Al takes them to task hilariously in his recent column, which I republish here with permission.
Aon channels Britney Spears in Lyra report
An open (and also sent, received, read and unobjected to) letter to Aon’s chief actuary, Ron Ozminkowski.
Dear Mr. Ozminkowski,
It seems that there are always some rookie mistakes in your analyses. Either that, or you are simply “showing savings” because your clients are oxymoronically paying you as “independent actuaries” specifically to show savings. I will assume that your mistakes are just rookie mistakes, rather than deliberate misstatements. Yet as I recall, you never fixed your Accolade analysis after it was pointed out that your own assumptions, when correctly analyzed by someone whose IQ possesses that critical third digit, inexorably led to the opposite conclusion: Accolade loses money.
Perhaps that bug is a feature for your clients, and indeed your job description is to “show savings.” Mine is the opposite: to demonstrate integrity.
If I am wrong and you are genuinely trying to do the right thing, I would be happy to fly out there and teach you people how to do arithmetic, because, in the immortal words of the great philosopher Britney Spears, oops, you did it again. This time for Lyra. With all the money they paid you, it seems like they should be able to expect a correct analysis. They might be very disappointed in you.
On the off-chance that you’d like to see what a real study design looks like in mental health, Acacia Clinics would be a good one to review. Here is the Validation Institute report and here is the science underpinning it.
If you are quite certain your arithmetic is correct despite all indications to the contrary, I would invite you to bet. I say that Acacia Clinics’ study design and analysis are correct and your study design and analysis are wrong. You say the opposite. Here are the rules for the bet. If you won’t bet, you are of course conceding that Acacia’s analysis is correct and your analysis, to use a technical biostatistical term, sucks.
I have already found five rookie mistakes, and I’ve only read the first five pages.
First, take a look-see at the screenshot below. I was having a lot of trouble figuring out how the red dots showing something you’ve dubbed the “efficiency ratio” (a term which apparently has no meaning in health services research, as far as Google is concerned, while ChatGPT thinks it means something else altogether, but what do they know?) were related to the differences in the size of the bars. Then I realized you accidentally started the y-axis at $4,000 instead of $0, a rookie mistake that inadvertently makes the alleged savings look about three times larger than they are.
Meaning your so-called “efficiency ratio” is the value in the blue bar as a percentage of the gray bar, not the height of the blue bar as a percentage of the gray bar. Call me a traditionalist, but in my humble opinion those two ratios should be the same. (Note: apologies for the blurry screenshot. That’s how it’s reproducing.)
I did notice that later on, pretty much the same data in Figure 1 was reproduced as Figure 2, but this time you started the y-axis at $1000. So you’re definitely getting warmer!
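To make the axis-truncation point concrete, here is a quick sketch in Python. The dollar figures are made up for illustration (the values behind Aon’s bars aren’t published in a usable form); only the $4,000 axis floor comes from the report:

```python
# Hypothetical illustration of how a truncated y-axis distorts bar comparisons.
# The two spending values are invented; the $4,000 axis floor is from the report.
participant = 5000.0      # blue bar: participant spend (hypothetical)
non_participant = 6500.0  # gray bar: non-participant spend (hypothetical)
axis_floor = 4000.0       # where Aon's y-axis actually starts

# Savings implied by the values themselves
true_savings = 1 - participant / non_participant            # ~23%

# Savings implied by the drawn bar heights, measured from the axis floor
apparent_savings = 1 - (participant - axis_floor) / (non_participant - axis_floor)  # 60%

print(f"true savings: {true_savings:.0%}")
print(f"apparent savings on the chart: {apparent_savings:.0%}")
print(f"distortion factor: {apparent_savings / true_savings:.1f}x")
```

With these made-up numbers the eye reads a 60% gap where the data show about 23%, a roughly 2.6x exaggeration; the exact factor depends on the real bar values, but any nonzero axis floor inflates the visual gap the same way.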
Happy New Years!
Second, you may want to check your calendar, because it is now 2024. Your analysis ends at 2021. You’ve had two years, five full months, and a Leap Day to update it, and yet you cut it off in 2021. A cynic might conclude that you picked that end date because the alleged benefit you are claiming regresses further to the mean in 2022 and 2023.
Looks like you threw up in front of Dean Wormer
Third, speaking of regressing to the mean, the reason a cynic could infer that conclusion is that your so-called “efficiency ratio” was already regressing to the mean. Let’s assume, for now, the unassumable: that your “matched controls” are a legitimate study design. (If they were, the FDA would allow them.) Between 2018 and 2021, according to your own numbers on that chart, participant costs rose 31% while non-participant costs rose 22%. And yet somehow that statistic appears nowhere in your report, once again a rookie mistake.
Are you having connectivity issues?
Fourth, there are two types of outcomes researchers in our industry. Those who think “matched controls” are a valid study design for this kind of analysis, and those who have a connection to the internet. If you can’t afford my seminal book, try this article on the Validation Institute website which proves – using fifth-grade arithmetic – why that methodology doesn’t work. Period.
Perhaps the giveaway that “matched controls” don’t work in this case is that the savings started on the first day of the baseline year. An employee has one phone call (yes, that was the cutoff point to get into the study group, though some people had many more) with one of Lyra’s “220,000 high-quality providers,” and their medical spending drops precipitously. I’d also love to know what Lyra’s Secret Sauce is that lets them retain 220,000 providers, all of whom are “high-quality.”
The following things change immediately as a result of that call, even though they are not part of the conversation and require a real doctor or, in the case of ER visits, a great deal of luck:
Non-mental health emergency visits plunge by 30%
Generic drug scripts plunge by 30%
By 2021, even expensive specialty meds fall by more than 20%
You might want to retain a smart person to explain the difference between correlation and causation. Alternatively, perhaps you are concerned that this meteor almost hit the visitors center?
A mystery wrapped inside a riddle wrapped inside a seven-figure consulting fee
Fifth, consider that Aon has data for:
medical claims
diagnoses
professional mental health spending
inpatient mental health spending
outpatient mental health spending
spending on non-mental health
ED and inpatient visits
And consider that:
They did this study for Lyra
The study is called “Lyra Cost Efficiency [sic*] Results”
The “Workforce Mental Health Program information was provided by Lyra Health”
Yet somehow – despite having the aforementioned two years, five months and a leap day to prepare this study – they claim to have absolutely no idea how much Lyra’s services cost:
We suspect it is a lot, perhaps enough that mental health professional fees with Lyra for participants exceed mental health spending by non-participants. That is because, in addition to paying its “evidence-based therapists” (Lyra’s term), plus sales, marketing, overhead, and profit, Lyra needs to pay off the benefits consultants too, to “partner with” them:
Finally, where’s the guarantee of credibility? Does Aon not stand behind its work? I guess that’s a wise move on your part, because if you did, I’d be rich. By contrast, Acacia Clinics was validated by the Validation Institute (VI). They do stand behind their work, so the VI’s findings on Acacia Clinics’ outcomes are backed by a $100,000 Credibility Guarantee. That, of course, is in addition to my own guarantee.
Did Mr. Ozminkowski just damage Lyra’s reputation…and Aon’s own?
The irony here is that Lyra is considered (or was considered, until this report) a perfectly legit vendor that is providing a valuable service of connecting employees to mental health professionals who match their needs. That is especially useful these days, when mental health benefits are very skinny and mental health providers are hard to come by. The “ROI” is employee appreciation, and possibly higher productivity. Not magical reductions in medical spending completely unrelated to the issues they are calling about.
Paying off consultants (who coincidentally also send them business) to pretend otherwise could damage that reputation. A rookie mistake on their part.
Further, there are some really smart, really honest consultants at Aon. But just as one dirty McDonald’s would sully them all, the organization as a whole suffers when one consultant goes rogue.
*It’s either “efficiency,” meaning the cost vs. the benefit, or “cost-effectiveness.” “Cost efficiency” is redundant. They really shouldn’t need me to tell them that – or, for that matter, anything else in their report.
Thanks for the Yucks, Al.
For employers, perhaps a bit more rigor now will go over better in subsequent discovery when the endless ERISA lawsuits land on your desk, too.