We Shouldn't Fear Artificial General Intelligence More Than Human General Intelligence
The OpenAI board missed the point. And in doing so, made the point...humans are terrifying!
The Frontier Psychiatrists is a daily newsletter. Mostly, it's health-themed. I'm guessing that the end of humanity, and a threat to all existence from giant space robots or malicious artificial intelligence, counts as a threat to the health of humans, so I can write about it in good faith!
There was a breaking story about the potential reason that Sam Altman was fired from OpenAI. I will use that story to make a point, because it's convenient for the point I want to make.
Effective altruists are a great example of everything wrong with human intelligence. It's worth noting that the intelligence assessing whether artificial intelligence is a grave threat is, at least at this point, made up of human intelligence.
I hate to tell you that human intelligence… is flawed and very dangerous.
It would be really hard to argue that the board of OpenAI stuck the landing on the firing of Sam Altman. “Effective altruist” may be one of the most self-aggrandizing and misleading brandings I've seen. Effective… in the eye of the beholder.
According to Wikipedia, a product of a lot of human minds:
Effective altruism is a philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis".[1][2] People who pursue the goals of effective altruism, called effective altruists,[3] may choose careers based on the amount of good that they expect the career to achieve or donate to charities based on the goal of maximising impact.
The most famous effective altruist is Sam Bankman-Fried. This man believed that if there were a tiny chance he could save humanity by blowing up billions of dollars, it would be an effective use of that money and time. And empirically, it's plausible that this is correct. It's also not how anybody in their right mind lives their life. There's a reason that, throughout human history, most humans have not been “SBF-style effective altruists.”
The following will be blunt: It is a dumb way to live.
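To make the arithmetic concrete, here's a toy sketch in Python. The numbers are entirely invented by me, not drawn from the reporting on SBF or anyone else; the point is only that once the imagined payoff is astronomical, naive expected-value math will endorse burning any pile of money, no matter how shaky the probability estimate is.

```python
# Toy sketch with made-up numbers: why "tiny chance of saving humanity"
# reasoning lets astronomical stakes swamp any sane budget.

def expected_value(probability: float, payoff: float) -> float:
    """Naive expected value: probability of success times value of success."""
    return probability * payoff

# Hypothetical figures, chosen purely for illustration.
chance_of_saving_humanity = 1e-9   # a one-in-a-billion shot
value_of_humanity = 1e20           # pick any astronomically large number
cost_of_the_bet = 8e9              # billions of dollars set on fire

ev = expected_value(chance_of_saving_humanity, value_of_humanity)
print(f"Expected value of the long shot: {ev:,.0f}")
print(f"Cost of the bet:                 {cost_of_the_bet:,.0f}")
print("Take the bet?", ev > cost_of_the_bet)
# With a large enough imagined payoff, the arithmetic always says "take the bet,"
# no matter how implausible the probability estimate is. That is the problem.
```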
In the case of Sam Altman's ouster by the OpenAI board, the media has recently reported the following:
The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.
Among the people who can't “do the math” is, I'm arguing, the OpenAI board.
The thing that was so impressive is that they finally figured out how to make an AI model do math without fucking it up totally. Right now, AI is really bad at math. Do you know who else is bad at math? A lot of people. But they don't mess it up as badly as panicky AI researchers do. I bet even the people who are bad at math could figure out that firing Sam Altman, after he built a company with a $90 billion valuation and an additional $10 billion from Microsoft, would not fly with Microsoft or with anybody else who cared about the money, and that those minds needed to be taken into account. Program a large language model to program a robotic hand to use a Texas Instruments calculator: problem solved. But human minds are bad at math, and math-ready AI? It has the potential to solve its miscalculation problem.
Artificial intelligence might be a threat to human life in the future. It might be dangerous. But the actual, everyday threat to human life? Human intelligence.
The actions of the OpenAI board were the erratic and dangerous intelligence they were worried about. We built these things modeled on us. The call was coming from inside the house! Artificial intelligence is human intelligence. Human intelligence is dangerous. It's dangerous every day. It's been killing humans for millennia. It's been making bad decisions in rooms full of humans, on boards across companies, in thousands of meetings over hundreds of years. It was the stuff of the first science fiction story, don't you know?
Humans are jumpy. We panic. We get scared. Reliably, we fail to think things through. The OpenAI board may have been justly terrified of seeing the potential end of humanity in artificial general intelligence, and they seem to have become interested, maybe even obsessed, with the probability that they might be ending the world.
The problem with this question is that it induces terror in the minds of everybody, because that's exactly what it should do. Annihilating all that is should be terrifying. Terrified people make terrible, fear-based decisions on the regular. What is the probability that firing Sam Altman would put the genie back in the bottle?
It's a serious question. An empirical one! What is the probability that the firing of Sam Altman would not only prevent the death of all humanity but do so definitively, with that one decision, and not lead to other unexpected catastrophes? I don't know; it's the kind of thing you could ask a large language model to come up with ten different stories about, and it could do it. Because there are a lot of possibilities, some of those answers would be good, and some would be wonky. If you ask current LLMs to do the math and calculate the probability, they might get it wrong. Things don't go as we expect. When we act based on terror, we get unexpectedly bad outcomes. Human terror is a feature that allowed us to survive, and it will be baked into the linguistic DNA of artificial intelligence. Computers don't get terrified right now, as far as I know. They're going to learn from us. They're learning from us right now. And what they're probably learning from Sam Altman being fired is that it didn't go according to plan. So maybe firing people with no plan is a bad plan?
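If it helps to see why "one decisive decision" is such a long shot, here is a rough sketch with invented, generously optimistic numbers (none of them come from the article or from OpenAI): a plan that needs several uncertain things to go right succeeds only with the product of their probabilities.

```python
# Rough sketch, assumed numbers only: a plan that depends on several uncertain
# steps succeeds only if every step goes right, i.e. with the product of their
# (roughly independent) probabilities.

from math import prod

# Hypothetical steps the board's plan would have needed to survive,
# each given a generously optimistic chance of going the board's way.
steps = {
    "firing actually halts the research": 0.5,
    "Microsoft and investors accept it": 0.4,
    "employees stay rather than follow Altman": 0.3,
    "no other lab reaches the same breakthrough": 0.2,
}

joint = prod(steps.values())
print(f"Chance every step goes as planned: {joint:.1%}")  # about 1.2%
# Terrified, fast decisions tend to be single points of failure chained together.
```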
We're building artificial intelligence based on us. If it gets smart enough to kill us, we might want to consider that that was a good decision on its part. Human intelligence is tremendously dangerous. We're going to need some treaty or plan to deal with it. You know, until something better comes along.
Until then, please, nobody kill me if they see me with a calculator. I don't want to get the math wrong. “The Bayesian Priors” on me screwing up math? High. Also, I will immediately play bass in that band if anyone asks.
I'm glad we have AI researchers on the board of a nonprofit trying to save humanity, but maybe add a psychiatrist next time? Human intelligence is capable of endlessly believable insanity, cruelty, suffering, heroism, hope, hopelessness, and everything we've feared. I've seen this in my psychiatric patients, armed with error-prone human intelligence, when mental illness bends their minds against them. It will take a lot to convince this psychiatrist that artificial intelligence is more dangerous to humans than the human mind. Now that we've created artificial intelligence, it is only a matter of time before artificial general intelligence is on the team. It's going to happen. I don't think we can stop it. We can understand that we're human, and that we're building models, even super-intelligent models, on top of human intelligence. Minds aren't just intellect: they are irrational, crazy, mad! The hallucinations of LLMs have their predicate in us.
The thing that exists now, that we don't have to wait another second to be afraid of, is the chaos and madness inside human minds. I can guarantee you that humans will act crazy. I can't guarantee the same about artificial general intelligence. The threat is here, it's now, and it's us, not AGI. There's plenty of doom, suffering, misery, and death, and the death rate has run at 100% throughout human existence.
Luck, faith, joy, hope, and progress have also been our constant companions at the right time resolution.
Let's not forget that our birthright, our curse, and our hope are us.
Humans from societies with lower levels of technological development get crushed by those from societies with higher levels of it. If the disparity is large enough, the technologically more developed sometimes don't even mean to do it, or notice having done it.
If an AI is immeasurably technologically more advanced than us, there is no reason it would not rearrange things to its own liking. There is no reason that that would involve the continuation of the human race.
"What is the probability that the firing of Sam Altman would not only prevent the death of all humanity but I do so definitively, with that one decision, and not lead to other unexpected catastrophes?"
I don't think this is the right question to ask. Firing Sam Altman would not stop whatever inflection point we are rushing towards (though I agree that the existential-crisis argument is ... silly); nothing will stop us from moving forward. It is inevitable. But I do think the resulting turmoil and fallout from Altman being fired would have given us some time to get ready for whatever comes next, while also giving everyone a breather to focus on present-day harms. As a species, we will never be completely ready for whatever is next for us, but we can be more ready than we are today. The best we can hope for is more time to prepare.
Altman's rise to power signals a speed-up toward that inevitability, allowing him (and others in the tech sphere) to continue to ignore present-day harms, because the end justifies the means for them... and for Altman especially.