Are tech companies on the verge of creating thinking machines with their tremendous AI models, as top executives claim they are? Not according to one expert.
We humans tend to associate language with intelligence, and we’re often most compelled by those with the greatest linguistic skill, whether as orators or writers.
But the latest research suggests that language isn’t the same as intelligence, says Benjamin Riley, founder of the venture Cognitive Resonance, in an essay for The Verge. And that’s bad news for the AI industry, which is predicating its hopes and dreams of creating an all-knowing artificial general intelligence, or AGI, on the large language model architecture it’s already using.
“The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own,” Riley wrote. “We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.”
AGI, to elaborate, would be an all-knowing AI system that equals or exceeds human cognition across a wide variety of tasks. In practice, it’s often envisioned as solving the biggest problems humankind can’t crack on its own, from cancer to climate change. And by saying they’re creating one, AI leaders can justify the industry’s exorbitant spending and catastrophic environmental impact.
Part of the reason AI capex has been so out of control is the obsession with scaling: by feeding AI models ever more data and powering them with ever-growing numbers of GPUs, AI companies have made their models better problem solvers and more humanlike in their ability to hold a conversation.
But “LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build,” Riley wrote.
If language were essential to thinking, then taking it away should take away our ability to think. But this doesn’t happen, Riley points out, citing decades of research summarized in a commentary published in Nature last year.
For one, functional magnetic resonance imaging (fMRI) of human brains has shown that distinct parts of the brain are activated during different cognitive activities, Riley notes: we don’t recruit the same regions of neurons when pondering a math problem as when parsing language. Meanwhile, studies of people who lost their language abilities showed that their ability to think was largely unimpaired, since they could still solve math problems, follow nonverbal instructions, and understand other people’s emotions.
Even some leading AI figures are skeptical of LLMs. Most famous of all is the Turing Award winner and “godfather” of modern AI Yann LeCun, who until recently was Meta’s top AI scientist. LeCun has long argued that LLMs will never reach general intelligence, and instead advocates pursuing so-called “world models,” which are designed to understand the three-dimensional world by training on a variety of physical data rather than just language. That view likely contributed to his recent departure: despite LeCun’s objections, Meta CEO Mark Zuckerberg has pivoted to pouring billions of dollars into a new AI division devoted to creating an artificial “superintelligence” using LLM technology.
Other research adds to the idea that LLMs have a hard ceiling. In a new analysis published in the Journal of Creative Behavior, a researcher used a mathematical formula to determine the limits of AI “creativity,” with damning results. Because LLMs are probabilistic systems, they reach a point where they can no longer generate novel and unique outputs without becoming nonsensical. As a result, the study concluded that even the best AI systems will never be anything more than serviceable artists that can write you a nice, wordy email.
“While AI can mimic creative behavior — quite convincingly at times — its actual creative capacity is capped at the level of an average human and can never reach professional or expert standards under current design principles,” study author David H. Cropley, a professor of engineering innovation at the University of South Australia, said in a statement about the work.
“A skilled writer, artist or designer can occasionally produce something truly original and effective,” Cropley added. “An LLM never will. It will always produce something average, and if industries rely too heavily on it, they will end up with formulaic, repetitive work.”
That isn’t a promising portent if LLM-powered AI is supposed to think up new innovations and push the envelope of our understanding of the world. How will it invent “new physics,” as Elon Musk says it will, or solve the climate crisis, as OpenAI CEO Sam Altman has suggested, if the tech struggles to string together new sentences that aren’t based on preexisting writing?
“Yes, an AI system might remix and recycle our knowledge in interesting ways,” Riley writes. “But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine.”
More on AI: Godfather of AI Predicts Total Breakdown of Society