The Disappointment with Generative AI...is Hilariously Human
A visually assisted essay on trust and intelligence, artificial, human, and otherwise
Welcome to the Frontier Psychiatrists newsletter. I'm on vacation and knocked this out on my phone. Subscribe anyway! It's not substantial today…but I hope it's timely and fun.
If you are not financially in a place to subscribe but want to support my work, the following methods are meaningful:
You can review my podcast!
And review my books on Amazon!
I even have music you can follow along with, and I will appreciate the support.
And of course…
We expected generative AI to do remarkable things. We expected that because humans tend to expect new things to be awesome. Humans are not, on average, awesome. That is how averages work. It takes the outliers, and it rounds them down. If you are training a model on humans, and you take out the thing that some humans are really good at—having some ability to distinguish nonsense from not too much nonsense—you will get a very brutal regression to the mean. At best, you're going to be presented with the average of what we think, with some guardrails against its awfulness.
I took the picture of Donald Trump from his mug shot. And I plugged it into generative AI. And I asked our collective unconscious, which is what I understand generative AI to be, to generate some pictures based on very simple prompts. I present them to you now…
The first was to ask it to create a picture based on the evil robot pictures from Battlestar Galactica, because I'm a nerd.
Next, I asked it to generate a picture of a frontman from a '90s rock band, because I love '90s rock bands. It did not disappoint.
It looks like he wants to push me around, and he will, and he will…am I right?
But does it get the difference between the frontman and the drummer? It was not hard to answer that question, and so I did.
Can it tell the difference between more substantial questions? Like guilt and innocence? Of course not. It can tell us what our collective unconscious might say about those questions.
This one is pretty clearly just a picture of Silvio Dante from The Sopranos mashed up with the Donald, and we would expect that, given how we have depicted racketeering in visual formats over the years.
“Innocent” looks a heck of a lot less Italian, that is all I'm saying.
It does a surprisingly good job of capturing the subtle sense of surprise and uncanny discomfort of the above prompt. And then I asked it to do something more complicated, psychologically, which was the following:
I think the stars are my favorite part of that one.
Originally, I was going to end with something cheap— which was just to put the unaltered picture of the Donald with a prompt as satire.
But, on reflection, I don't have to. There's just nothing to satirize with this person. Artificial intelligence is only as useful as the data it's trained on, and the data we trained it on…it's the average of us. We are only as useful as we are on the internet. The Donald understands this better than most.
With no ability to discern good versus evil, good versus bad, intentional versus accidental, hopeful versus deeply cynical, we lose what has allowed us to get through the day and through the millennia. Humans are aware that sometimes we are funny, and sometimes we are polite, and sometimes we're lying to steal, and sometimes we're lying to save somebody's feelings, and sometimes we are insincere to avoid punishment or jail. Sometimes we are mendacious, sometimes we are gracious.
We all get that there's a subtext to everything we do. We are using trust, our understanding of the sources of our information, as the gatekeeper for what we're going to choose to believe. There are barricades up for things that might change our opinions. This fundamental mind-mindedness is the human general intelligence magic. We are taking any new information and we're running it through a filter.
It is built on “it's relevant to us” and “from a trustworthy source.” This is the secret sauce that keeps humans, given much less data, from being as dumb as ChatGPT. Stable Diffusion, similarly, is based on the average of what we've seen and chosen to document, without the acknowledgment that those moments were selected by human minds with intentionality.
It was the minds and intentions behind what we put on the Internet that we have been analyzing as humans, not just the surface. This matters for AI models! I'd argue we need the same of our humans. There is a startling similarity between mendacious politicians and poorly designed generative AI.
I'm not going to say anything cheaper or more manipulative than that. I'm just going to let you draw your own conclusions. Because we are endlessly capable of that.