The Frontier Psychiatrists

Curb Your Enthusiasm for Large Language Models

What Larry David could have taught AI developers

Owen Scott Muir, M.D., DFAACAP
Aug 29, 2023
The underperformance of large language models was predictable. One issue with large language models is that what we say on the Internet has an audience of other humans. Or at least, it used to have an audience of other humans.

The language was generated by one group of humans and intended for other humans. There is an assumption built into our pre-ChatGPT communication: human minds will be interpreting what we write.

When humans communicate with other humans, we have two methods at our disposal. The first is the explicit words we use. The second is our implicit models of the minds of our readers, and their presumed reactions to those words.

We're almost always calculating these two things at once. Humans think about whatever was said and whatever was meant. Some allowance for the plausibility of error is built into our assumed understanding of others; error is a risk understood by both the writer and the reader. This uncertainty forms the substrate of our communication. Humans get that other humans might not get it.

That is not what I meant!

Sometimes we use this for humor, sometimes as a scam, and sometimes we fret about whether we will be misunderstood. But the intentions behind the writing are crucially important to our understanding of any words. The language alone is insufficient.

This post is for paid subscribers
