Is "Observational AI" the Next Big Thing?
Generative AI is flashy, but its ability to observe may be more important
The Frontier Psychiatrists is a daily health-themed newsletter. Physician-author Owen Scott Muir writes it, and when he’s not writing this, he’s cranking away on AI-guided health-related research.
Sam Altman’s abrupt ousting from OpenAI was jarring. It is easy to forget that AI is capable of things other than disruptive board disputes, and that companies and scientists were working with AI as a tool long before ChatGPT launched. It’s been a major part of my life for years!
My journey started because I was lucky to know one person: Dan Karlin, M.D. I remember what may be the most important lunch of my life (arepas! and a delicious gazpacho!), in which we had a conversation about psychotherapy, assessment, and the potential of AI to bring real measurement to the field of psychiatry. Dan is a bit of a philosopher-king to me. He convinced me that objective truth might exist, but that we’d have to get better at measurement, which meant AI.
OpenAI, its founders, and its ChatGPT product are flashy and impressive. The ability of computers to pay attention, as opposed to generating, is less “disruptive.” Boring people (like yours truly) noticed the same thing fictional villains like Sauron did: Generative Platform Solutions (I’m looking at you, “The One Ring”) are but one side of the equation. Giant Lidless Eyes That Never Stop Looking are the other. I will call this “Observational AI.”
I identify as an imperfectionist when it comes to generating things like this newsletter. Owen, the author, is the Large Language Model around here. I don’t use AI to write because, frankly, it is not a good writer (yet). I use Grammarly to edit! And that is my point. I consider Grammarly primarily Observational AI. It’s noticing my grammar mistakes, always, even when I’m not. It has some generated suggestions. Those are rarely crucial. It’s the missed comma that will get you!
Humans are bad (or at least I am a specifically bad-at-things human) at two subdomains:
1. “Getting started” on generating something.
This is the problem ChatGPT addresses. The second?
2. Paying endless attention.
Humans have limited attention resources. More attention more of the time has benefits. It means more pattern recognition can be brought to any task. I think this second use case is the most important for healthcare, where lapses in attention are both endlessly human and the source of many failures in the quest to alleviate suffering.
When I say, “This is a part of my life,” well, some days I am more convinced of that than others. I have four examples from a single day!
Yesterday, the following tasks filled my day or crossed my desk. First, a paper I co-authored was published:
It is about safety monitoring: we have elaborate Risk Evaluation and Mitigation Strategies (REMS) when deploying risky medicines. In psychiatry, this started with clozapine, and this paper describes the use of AI-guided, device-based monitoring in psychedelic medicine! The paper is called Evaluating Passive Physiological Data Collection During Spravato Treatment. I'll include the abstract to save you time:
The Abstract: Spravato and other drugs with consciousness-altering effects show significant promise for treating various mental health disorders. However, the effects of these treatments necessitate a substantial degree of patient monitoring, which can be burdensome to healthcare providers and may make these treatments less accessible for prospective patients. Continuous passive monitoring via digital devices may be useful in reducing this burden. This proof-of-concept study tested the MindMed Session Monitoring System (MSMS™), a continuous passive monitoring system used during treatment sessions involving pharmaceutical products with consciousness-altering effects. Participants completed 129 Spravato sessions with MSMS at an outpatient psychiatry clinic specializing in Spravato treatment. Results indicated high rates of data quality and self-reported usability among participants and healthcare providers (HCPs). These findings demonstrate the potential for systems such as MSMS to be used in consciousness-altering treatment sessions to assist with patient monitoring.
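To make the “continuous passive monitoring” idea concrete, here is a deliberately simplified sketch. This is my own illustration, not the MSMS implementation: keep a rolling baseline over recent heart-rate samples and flag any reading far outside it, so a clinician’s attention is requested only when something looks unusual. The function name, window size, and threshold are all hypothetical.

```python
from collections import deque
from statistics import fmean, pstdev

# Hypothetical sketch of continuous passive monitoring (not the MSMS system):
# maintain a rolling baseline of recent heart-rate readings and flag values
# that deviate far from it, so human attention is summoned only when needed.
def monitor(samples, window=30, z_threshold=3.0):
    baseline = deque(maxlen=window)
    flags = []
    for t, hr in enumerate(samples):
        if len(baseline) == window:
            mu, sd = fmean(baseline), pstdev(baseline)
            if sd > 0 and abs(hr - mu) / sd > z_threshold:
                flags.append((t, hr))  # out-of-range reading: alert a clinician
        baseline.append(hr)
    return flags

# Usage: mild normal variation around 70 bpm, with one spike at t = 50.
readings = [70 + (i % 5) for i in range(100)]
readings[50] = 120
print(monitor(readings))  # flags the single anomalous reading
```

The point is not the arithmetic but the division of labor: the software never stops watching, and the human is interrupted only for the flagged moments.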
I was thrilled to see this paper published! However, I was at the office all day because I was enrolling research subjects in a study. Yes, you guessed it, that study involves…more Observational AI!
SAINT Neuromodulation, in its Open-Label Dose Optimization Study: SAINT is an FDA Breakthrough-designated treatment. Its randomized controlled trial sample size was small because it was deemed unethical to continue withholding the active treatment from study participants, given its vastly superior outcomes (79% remission of previously treatment-resistant depression). Safety reviews at a study midpoint are a standard part of biomedical research to protect human subjects. Once a randomized trial concludes, the next step is to continue providing more people with the unblinded “open-label” treatment! Answering the question “Is this better than sham/placebo?” isn’t the only important question in science.
This study is being run at several sites, including mine, called Fermata, here in New York City, and I’ve been administering treatment to a study subject today. AI takes a role in this protocol by using an algorithm to determine the target for precision neuromodulation in the brain. I’m thrilled to work with colleagues at Acacia Clinics — in Sunnyvale, California— for another site in the same multi-center trial!
Observational AI reads brain scans in SAINT (tm) to determine the right brain stimulation target. I have to say, it feels like the future.
Those treatments are 50 minutes apart, so between sessions, I was doing some “expert rating” as part of my role in another Observational AI study!
In another trial in which I am an investigator, my team’s NIH-funded work at iRxReminder is evaluating our AI algorithm for assessing tardive dyskinesia in patients exposed to antipsychotic medication. This TDetect trial is run in partnership with Videra Health.
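As a toy illustration of how software might quantify movement (again, my own sketch, not TDetect’s method): track a landmark across video frames and score the mean frame-to-frame displacement. A still hand scores near zero; a dyskinetic oscillation scores high.

```python
from statistics import fmean

# Crude hypothetical "movement index" (not the TDetect algorithm): given
# per-frame (x, y) positions of a tracked landmark such as a hand, score
# involuntary movement as the mean frame-to-frame displacement in pixels.
def movement_index(track):
    steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    return fmean(steps)

still = [(100, 100)] * 10                                   # barely moves
dyskinetic = [(100 + (i % 2) * 8, 100) for i in range(10)]  # oscillates 8 px

print(movement_index(still), movement_index(dyskinetic))
```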
And last month, another collaboration with colleagues at MindMed hit the academic journals! Quantifying the Effects and Process of Psychotherapy at Scale presents pilot data toward building Observational AI in biomedical science to diagnose psychiatric conditions.
That was Saturday, and now I have to get back to reviewing data on the role of Observational AI in the NightWare device and the PRISM System, both of which are FDA Breakthrough-designated medical devices addressing PTSD.
Observational AI isn’t a “Polluter.”
One issue with Generative AI is that its output, when fed back in as training data, degrades model performance over time. Because it is so good at cranking out endless content, it risks a data pollution effect: the “toxic waste” of language data. Observational AI doesn’t share this risk profile, because it addresses lapses in attention rather than creating new content. And the ability to observe, measure, and understand at scale is where my money comes down, at least in biomedical science, for the immediate future. But then again, I’m biased! I’m a human, and from what I’ve heard, we are susceptible to that problem ourselves.
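You can watch this pollution loop happen in miniature. In the toy simulation below (my own illustration, not a claim about any particular model), we fit a Gaussian to real data, then repeatedly refit it to synthetic samples drawn from the previous fit. With small samples, estimation error compounds across generations and the learned spread collapses: the model forgets the tails of the real data.

```python
import random
import statistics

# Toy model-collapse sketch: each "generation" trains only on the previous
# generation's synthetic output. Finite-sample error compounds, and the
# learned standard deviation collapses toward zero over many generations.
random.seed(0)
real = [random.gauss(0, 1) for _ in range(20)]   # "human" data: mean 0, sd 1
mu, sigma = statistics.fmean(real), statistics.stdev(real)

for generation in range(200):
    synthetic = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)

print(f"learned sd after 200 generations: {sigma:.6f} (started near 1.0)")
```

Observational AI sidesteps this loop entirely: it consumes signals from the world rather than recycling its own output.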
Join us, live! We will host an in-person pre-JPM Healthcare event in San Francisco on January 7th, 2024!
Rapid-Acting Mental Health Treatment, 2024 (website)…with an associated Eventbrite Link (tickets).