In at least one crucial way, AI has already won its campaign for global dominance. An unbelievable volume of synthetic prose is published every moment of every day—heaping piles of machine-written news articles, text messages, emails, search results, customer-service chats, even scientific research.
Chatbots learned from human writing. Now the influence may run in the other direction. Some people have hypothesized that the proliferation of generative-AI tools such as ChatGPT will seep into human communication, that the terse language we use when prompting a chatbot may lead us to dispense with any niceties or writerly flourishes when corresponding with friends and colleagues. But there are other possibilities. Jeremy Nguyen, a senior researcher at Swinburne University of Technology, in Australia, ran an experiment last year to see how exposure to AI-generated text might change the way people write. He and his colleagues asked 320 people to write a post advertising a sofa for sale on a secondhand marketplace. Afterward, the researchers showed the participants what ChatGPT had written when given the same prompt, and they asked the subjects to do the same task again. The responses changed dramatically.
“We didn’t say, ‘Hey, try to make it better, or more like GPT,’” Nguyen told me. Yet “more like GPT” is essentially what happened: After the participants saw the AI-generated text, they became more verbose, drafting 87 words on average versus 32.7 in the first round. The full results of the experiment are yet to be published or peer-reviewed, but it’s an intriguing finding. Text generators tend to write long, even when the prompt is curt. Might people be influenced by this expansive style, rather than by the terse language they use when typing to a chatbot?
AI-written text is baked into software that millions, if not billions, of people use every day. Even if you don’t use ChatGPT, Gemini, Claude, or any of the other popular text-generating tools, you will inevitably be on the receiving end of emails, documents, and marketing materials that have been compiled with their assistance. Gmail offers some users an integrated AI tool that starts drafting responses before any fingers hit the keys. Last year, Apple launched Apple Intelligence, which includes AI features on Macs, iPhones, and iPads such as writing assistance across apps and a “smart reply” function in the Mail app. Writing on the internet is now more likely than even a year or two ago to be a blended product—the result of a human using AI somewhere in the drafting or refining phase while making subtle tweaks themselves. “And so that might be a way for patterns to get laundered, in effect,” Emily M. Bender, a computational-linguistics professor at the University of Washington, told me.
Bender, a well-known critic of AI who helped coin the term stochastic parrots, does not use AI text generators on ethical grounds. “I’m not interested in reading something that nobody said,” she told me. The issue, of course, is that knowing whether something was written by AI is becoming harder and harder. People are sensitive to patterns in language—you may have noticed yourself switching accents or using different words depending on whom you’re speaking to—but “what we do with those patterns depends a lot on how we perceive who’s saying them,” Bender told me. You might not be moved to emulate AI, but you could be more susceptible to picking up its linguistic quirks if they appear to come from a respected source. Interacting with ChatGPT is one thing; receiving a ChatGPT-influenced email from a highly esteemed colleague is another.
Language evolves constantly, and advances in technology have long shaped the way people communicate (lol, anyone?). These influences are not necessarily good or bad, although technological developments have often helped to make language and communication more accessible: Most people see the invention of the printing press as a welcome advance over longhand writing. LLMs follow in this vein—it’s never been easier to turn your thoughts into flowing prose, regardless of your view on the quality of the output.
Recent technological advances have generally inspired or even demanded concision—many text messages and social-media posts have explicit character limits, for instance. As a general rule, language works on the principle that effort increases with length; five paragraphs require more work than two sentences for the sender to write and the receiver to read. But AI tools could upset this balance, Simon Kirby, a professor of language evolution at the University of Edinburgh, told me. “What happens when you have a machine where the cost of sending 10,000 words is the same or roughly the same as the cost of sending 1,000?” he said.
Kirby offered me a hypothetical: One person may give an AI tool a few bullet points to turn into a lengthy, professional-sounding email, only for the recipient to immediately use another tool to summarize the prose before reading. “Essentially, we’ve come up with a protocol where the machines are using flowery, formal language to send very long versions of very short, encapsulated messages that the humans are using,” he said.
Beyond length, the linguists I spoke with speculated that the proliferation of AI writing could lead to a new form of language. “It’s pretty easy to imagine that English will become more standardized to whatever the standard of these language models is,” said Jill Walker Rettberg, a professor of digital culture at the University of Bergen’s Center for Digital Narrative, in Norway. This already happens to an extent with automated spelling- and grammar-checkers, which nudge users to adhere to whichever formulations they consider to be “correct.” As AI tools become more commonplace, people may see their style as the template to follow, resulting in a greater homogenization of language: Just yesterday, Cornell University presented a study suggesting that this is happening already. In the experiment, an AI writing tool “caused Indian participants to write more like Americans, thereby homogenizing writing toward Western styles and diminishing nuances that differentiate cultural expression,” the authors wrote.
Philip Seargeant, an applied linguist at the Open University in the U.K., told me that when students use AI tools inappropriately, their work reads a little too perfect, “but in a very bland and uninteresting way.” Kirby said that AI text lacks the errors or awkwardness he’d expect in student essays and has an “uncanny valley” feel. “It does have that kind of feeling [that] there’s nothing behind the eyes,” he said.
Several linguists I spoke with suggested that the proliferation of AI-written or -mediated text may spark a countermovement. Perhaps some people will rebel, leaning into their own linguistic mannerisms in order to differentiate themselves. Bender imagines people turning off AI features or purposely choosing synonyms when prompted to use certain words, as an act of defiance. Kirby told me he already sees some of his students taking pride in not using AI writing tools. “There is a way in which that will become the kind of valorized way of writing,” he said. “It’ll be the real deal, and it’ll be obvious, because you’ll deliberately lean into your idiosyncrasies as a writer.” Rettberg compares it to choosing handmade goods over cheap, factory-made fare: Rather than losing value as a result of the AI wave, human writing may be appreciated even more, taking on an artisanal quality.
Ultimately, as language continues to evolve, AI tools will be both setting trends and playing catch-up. Trained on existing data, they’ll always be somewhat behind how people are using language today, even as they influence it. In fact, we may end up with AI tools evolving language separately from humans, Kirby said. Large language models are usually trained on text from the internet, and the more AI-generated text ends up permeating the web, the more these tools may end up being trained on their own output and embedding their own linguistic styles. For Kirby, this is fascinating. “We might find that these models start going off and taking the language that’s produced with them in a particular direction that may be different from the direction language would have evolved in if it had been passed from human to human,” he said. This, he believes, is what could set generative AI apart from other technological advances when it comes to its impact on language: “We’ve inadvertently created something that could itself be culturally evolving.”
The post The Great Language Flattening appeared first on The Atlantic.