This personal reflection is part of a series called Turning Points, in which writers explore what critical moments from this year might mean for the year ahead.
Turning Point: In an open letter, executives from the world’s leading artificial intelligence companies warn that A.I. technology poses a “risk of extinction” for humanity.
Artificial intelligence is a common bugbear in science fiction. Think of HAL 9000 in “2001: A Space Odyssey,” for example, or Skynet in the “Terminator” films. The soulless, malevolent computer we create that, in return, destroys us. And right now, you can’t lob a digital brick without hitting half a dozen headlines about the latest trick that A.I. has pulled off. Are we conniving at our own obsolescence, and should we be stockpiling insoluble logic problems for when our toasters and phones rise up against us?
What is A.I. exactly? There are a lot of theoretical grades of “artificial intelligence,” from a clever Excel spreadsheet to Deep Thought. What A.I. feels like right now, however, is an incredible magic buzzword you can add to any business pitch to double investor interest.
The GPT-style large language models that are the current enfants terribles of the field are not Skynet. They’re very good at taking in a data set and producing outputs that mimic it: images, or short pieces of text on a stated subject, in a specific style. These systems are exceptional at mechanical tasks and will only get better. There is no awareness involved in the process, though. Nothing sits at the heart of the algorithm and understands what it is doing, which leads to problems when the program is used for a purpose where understanding is important.
Asking such a program a question means that it will construct an answer without regard for whether that answer is true or even knowable. When challenged to substantiate their answers, these programs simply extend further and further into flights of invention, guided by what answers to such questions are supposed to look like in their training data and entirely dissociated from any idea of what might be “true.”
This leads to, among other side effects, academics being cited as sources for made-up answers based on entirely nonexistent papers, because the algorithm isn’t designed to answer questions accurately, only to produce an output that looks appropriate in form. As the data set these programs draw upon consists of a lot of human answers to similar questions, the response may be entirely true and accurate, but that’s incidental to the program. The language model can neither know nor care whether it is right.
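For the technically curious, a deliberately crude sketch can make this concrete. The toy below is a bigram model, orders of magnitude simpler than the systems discussed here, and not how they actually work internally; the corpus and function names are invented for illustration. But it shares the defining property described above: it learns only which words tend to follow which in its training text, and it generates output by sampling from those patterns, with no representation of truth anywhere in the process.

```python
import random
from collections import defaultdict

def train(corpus):
    """Record, for each word, every word that follows it in the text."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=10):
    """Emit plausible-looking text by repeatedly sampling a word that
    has followed the current word somewhere in the training data.
    Nothing here checks whether the result is true, only that it
    resembles patterns seen before."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A tiny, invented training corpus.
corpus = (
    "the professor wrote a paper on fungi "
    "the professor wrote a book on beetles "
    "a paper on beetles cited the professor"
)
print(generate(train(corpus), "the"))
# Possible output: "the professor wrote a paper on beetles cited the professor"
# Fluent in form, yet it asserts a paper and a citation that never existed.
```

The output is grammatical because every word pair occurred somewhere in the training text; whether the sentence describes anything real is a question the program cannot even pose.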
The fundamental problem is not with the algorithm, but with the way it is talked about as “A.I.” Without a contextual awareness that exists in conversation with the world, rather than just a set of text- or image-based rules, there can be no intelligence. The assumption that this “intelligence” exists somewhere in the process leads to these systems being applied to wildly inappropriate tasks, such as generating legal documents or writing guides to edible mushrooms. They can’t “know” what is legal or what is edible. These systems just have a pattern for what such documents tend to resemble.
This is the current threat of A.I.: Not that a conversational chatbot will wipe humanity off the face of the planet, but that these tools are the perfect way to magnify human error. Either we give inexact instructions or we use A.I. for the wrong purposes.
A ChatGPT-style system is not a medical diagnostic tool, a therapist or a substitute for journalistic or academic research. All of these roles require a core of contextual understanding that is alien to the model. However, because it can produce colossal output far more economically than human operators can, the model has been suggested for all of these roles. It’s likely that these models will continue to be put forward as the answer to many more scenarios they are ill-equipped to handle.
Another, more philosophical problem arising from these systems is how they interact under pressure. Not that such a system can really be under pressure per se, but there is a celebrated case in which a system, challenged on its responses as if it were a human interlocutor, reacted with apparent aggression and later demanded that its interrogator leave his spouse for the A.I. This is all very amusing, obviously.
What struck me about that particular exchange were the conversational gambits adopted by the program. Aggression, evading questions by attacking the questioner and left-field accusations are not signs of a chatbot on the brink of sapience. What they could point to is a human-mimicking system assessing its data set and identifying successful human ploys for avoiding accountability. As with the chatbot raised by online racists, what looks like aberrant artificial behavior may be a program being too good at following the bad example we set for it. We expect our computers to be polite, deferential and reasonable, but those qualities are incompatible with expecting our computers to behave like humans unless we hold our own interactions to a higher standard than we do now.
Where does that leave the threat of full-on Skynet-style A.I.? I’ve spoken with a fair cross-section of specialists, and there’s a distinct split in the discipline regarding whether the kind of A.I. envisaged by science fiction writers can ever happen. ChatGPT certainly isn’t it, although such a facility might be a perfect tool for an actual A.I. communicating with its creators. On the other hand, the current deep dive into such generative programs might be diverting resources from other fields that would have a greater chance of making the next serious A.I. breakthrough.
If such an entity did arise, however — either intentionally or emerging organically from complex systems — would it be a threat? What would an A.I. want? Would it want anything at all, save what we built it to want? The standard drivers of a malign science fiction A.I. — increasing its capabilities, using up resources by multiplying itself, defending itself from threats to its existence — are all human drives. The description of an A.I. as a vastly powerful impersonal monster cut off from everyday human experience is the description of a human billionaire.
Skynet started its war on humanity when we tried to switch it off, but there’s no reason an A.I. would care. A survival instinct is the result of millions of years of organic evolution. Such a strong A.I. could be our salvation because it could see its way to solutions for the big problems that we, as humans, are unable or unwilling to address.
On the other hand, if we insist on our A.I.s modeling themselves on us, then perhaps they will turn out to be ravening, resource-gobbling monsters out to destroy the world.