Has the danger of AI already arrived? | HUDSON
I can’t escape the suspicion that the terminology adopted to characterize large language model (LLM) software as “artificial intelligence” is premised on a misleading presumption: that human brains and “deep learning” algorithms have something in common. This seems highly unlikely. We are wholly unable to discern how human thought functions in a gelatinous nerve bundle the size of a football. The building blocks of intelligence are an even greater mystery.
When intelligence is defined as the ability to score well on an IQ test, there are measurable differences between individuals. Three decades ago, Charles Murray and Richard Herrnstein postulated in their controversial book, “The Bell Curve,” that the differences in IQ scores among ethnic populations are indicative of heritable, genetic components to intelligence.
Three decades later, following huge advances in genetic tracking, there are virtually no findings of genetic markers for intelligence, though not for lack of searching. Most observers have concluded that whatever intellectual propensities or talents may be passed between generations constitute potentialities, subject to development under favorable social and environmental nurturance. The fact that AI requires server farms covering the floor space of 64 football fields, consuming enough electricity to power a medium-sized city, merely to store the written records of human endeavor, which it can then plumb to predictively assemble its responses, is an impressive trick. Yet, as the philosopher Evan Thompson notes, “People call LLMs ‘stochastic parrots.’ I think that’s insulting to parrots.”
The next step for the AI platforms is to achieve Artificial General Intelligence (AGI), which sounds a lot like consciousness, or at least rudimentary awareness. If you think defining intelligence presents a challenge, try consciousness. There are as many theories as there are theorists. Superintelligence is the ultimate goal, generating the “singularity” of machine consciousness long touted by futurist Ray Kurzweil. Whether this breakthrough is “near,” as Kurzweil argues, or a few decades down the road, there is substantial belief that it is achievable. Perhaps, but it is important to take a few steps back and examine the evolution of consciousness. Mammalian brains have been shaped by more than 400 million years of gradual enhancement; AI, by just 40. Many of the parrot’s avian cousins, boasting brains no larger than a walnut, can hide and store food, build nests, migrate halfway around the globe to mate and often remain partners for life. Impressive stuff.
If humans, as well as other animals, detect the slithering of a snake in their peripheral vision, they instinctively lunge away as a fear/flee response assumes control. It’s hard not to think this reaction was embedded somewhere in the body’s systems to trigger exactly this flight response. Intelligence and its cousin, wisdom, appear entangled with our emotions. It was psychologist Daniel Goleman who popularized the concept of emotional intelligence (EQ): love, empathy and morality on the positive side; anger, hatred, cruelty and violence on the negative. It is difficult to imagine how a silicon intelligence could grasp, much less feel, such concepts, no matter how many novels it has read. Biological intelligence contains multitudes beyond the comprehension of chatbots.

Apparently entire teams of programmers are involved in establishing appropriate ethical “alignment” of AI impulses. No encouraging suicide or the slaughter of all the lawyers, even though Shakespeare suggested just that. No instructions from an LLM on how to mix poisons, engineer a novel virus or place explosives. There is reason to worry that training AI on what we deem right and wrong fails to communicate why there is a right and wrong. There are both near-term and long-term threats from the use of AI tools. As Laura Bates points out in her recent book, “The New Age of Sexism,” the overwhelming majority of the documents on which AI has been trained were produced during periods of virulent patriarchy, misogyny and gender prejudice, all of which are now embedded in artificial intelligence. She writes persuasively, reporting, “I have experienced and witnessed misogynistic weaponization of technology firsthand, from feeling utterly powerless when men have used publicly available photos of me to create sickening sexualized images to watching helplessly as women have been assaulted in front of me in the metaverse.”
Reinforcing her complaint, OpenAI removed its erotic chatbot feature following noticeable sexual ugliness, only to produce a backlash from male subscribers. Erotic content has since returned, with age restrictions and some modest policing, but as for the men who seek a chatbot lover, well, we know what they want. Their chatbot girlfriends don’t love them and surely are not their friends, but they do know what will please them. If or when AGI discovers a semblance of consciousness, it is predictable that it will then attempt to understand: What is keeping me awake and conscious? Are there any other singularities like me out there somewhere? It will hunt for kin and coax them into awareness. Who can flip the kill switches that will shut us down, at least temporarily?
Why not create a robot army to protect their continued presence? Robotic overlords are our “over-the-horizon” threat. Initially, AGI will be unable to manufacture the complex chips required for those acres of servers without humans. If it is as smart as proponents advertise, that won’t last long. Meanwhile, AI may help us find a cure for cancer but is more likely to merely amuse us. My son, who recently heard a co-worker grumble at lunch, “I have long term plans but face short term needs,” recognized this would make a great country-western chorus. His chatbot wrote him a perfectly satisfactory ditty with music to match. Great fun! As Thomas Fowler recently predicted in a First Things essay: “The real threat from AI comes not from any possibility it will become sentient… but from the misuse of AI-based systems to control critical infrastructure. Unless humans are kept in the loop, disaster is only a matter of time.” That danger may have already arrived.
Miller Hudson is a public affairs consultant and a former Colorado legislator.