Artificial intelligence often gets criticized for making up information that appears to be factual, fabrications known as hallucinations. The plausible fakes have roiled not only chatbot sessions but lawsuits and medical records. For a time last year, a patently false claim from a new Google chatbot helped drive down the company’s market value by an estimated $100 billion.
In the universe of science, however, innovators are finding that A.I. hallucinations can be remarkably useful. The smart machines, it turns out, are dreaming up riots of unrealities that help scientists track cancer, design drugs, invent medical devices, uncover weather phenomena and even win the Nobel Prize.
“The public thinks it’s all bad,” said Amy McGovern, a computer scientist who directs a federal A.I. institute. “But it’s actually giving scientists new ideas. It’s giving them the chance to explore ideas they might not have thought about otherwise.”
The public image of science is coolly analytic. Less visibly, the early stages of discovery can teem with hunches and wild guesswork. “Anything goes” is how Paul Feyerabend, a philosopher of science, once characterized the free-for-all.
Now, A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It’s the scientific method — only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers.
“We’re exploring,” said James J. Collins, an M.I.T. professor who recently praised hallucinations for speeding his research into novel antibiotics. “We’re asking the models to come up with completely new molecules.”
Producing a minuscule system for drug deliveries
The A.I. hallucinations arise when scientists teach generative computer models about a particular subject and then let the machines rework that information. The results can range from subtle and wrongheaded to surreal. At times, they lead to major discoveries.
In October, David Baker of the University of Washington shared the Nobel Prize in Chemistry for his pioneering research on proteins — the knotty molecules that empower life. The Nobel committee praised him for discovering how to rapidly build completely new kinds of proteins not found in nature, calling his feat “almost impossible.”
In an interview before the prize announcement, Dr. Baker cited bursts of A.I. imaginings as central to “making proteins from scratch.” The new technology, he added, has helped his lab obtain roughly 100 patents, many for medical care. One is for a new way to treat cancer. Another seeks to aid the global war on viral infections. Dr. Baker has also founded or helped start more than 20 biotech companies.
“Things are moving fast,” he said. “Even scientists who do proteins for a living don’t know how far things have come.” How many proteins has his lab designed? “Ten million — all brand-new,” he replied. “They don’t occur in nature.”
Designing a new kind of hormone tracker
Despite the allure of A.I. hallucinations for discovery, some scientists find the word itself misleading and avoid using it. They see the imaginings of generative A.I. models not as illusory but prospective — as having some chance of coming true, not unlike the conjectures made in the early stages of the scientific method.
The word also gets frowned on because it can evoke the bad old days of hallucinations from LSD and other psychedelic drugs, which scared off reputable scientists for decades. A final downside is that scientific and medical communications generated by A.I. can, like chatbot replies, get clouded by false information.
In July, the White House released a report on fostering public trust in A.I. research. Its sole reference to hallucinations was about finding ways to reduce them.
The Nobel Prize committee seems to have followed that playbook. It said nothing about A.I. hallucinations in a detailed review of Dr. Baker’s work. Instead, in a news release, it simply credited his team with producing “one imaginative protein creation after another.” Increasingly, parts of the scientific establishment seem to view hallucinations as unmentionable.
Even so, experts said in interviews that the imaginings of scientific A.I. have major advantages compared with the hallucinations of chatbots and their kin. Most fundamentally, they said, the creative bursts are rooted in the hard facts of nature and science rather than the ambiguities of human language or the blur of the internet, known for its biases and falsehoods.
Fashioning a new attack on influenza
“We’re teaching A.I. physics,” said Anima Anandkumar, a professor of math and computing sciences at the California Institute of Technology who formerly directed A.I. research at Nvidia, the leading maker of A.I. chips.
For science, Dr. Anandkumar added, that grounding in reliable physical facts can produce highly accurate outcomes. The large language models behind chatbots, she said, have no practical way to verify the correctness of their own assertions.
The ultimate check, she said, comes as scientists compare the digital flights of fancy with the solid particulars of physical reality.
“You need to test it,” Dr. Anandkumar said of A.I. results. “Something newly designed by A.I. hallucinations requires testing.”
Recently, Dr. Anandkumar and her colleagues used A.I. hallucinations to help design a new kind of catheter that greatly reduces bacterial contamination — a global bane that annually causes millions of urinary tract infections. She said the team’s A.I. model dreamed up many thousands of catheter geometries, and the researchers then picked the most effective one.
The inner walls of the new catheter are lined with sawtooth-like spikes that prevent bacteria from gaining traction and swimming upstream to infect patients’ bladders. Dr. Anandkumar said the team is discussing the device’s commercialization.
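The generate-and-select loop behind that catheter — propose a flood of candidate geometries, score each one, keep the winner — can be sketched in miniature. Everything below is invented for illustration: the two design parameters, their ranges and the scoring function are stand-ins, where the real work scored candidates with simulations of fluid flow and bacterial movement.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for a physics simulation: a (fictional) contamination
# score with a sweet spot near a 45-degree spike angle and 0.5 mm
# spike spacing. Lower is better.
def contamination_risk(spike_angle_deg, spike_spacing_mm):
    return (spike_angle_deg - 45.0) ** 2 / 100 + (spike_spacing_mm - 0.5) ** 2

# "Dream up" thousands of candidate geometries at random,
# then keep the one the scorer likes best.
candidates = [(rng.uniform(10, 80), rng.uniform(0.1, 2.0))
              for _ in range(10_000)]
best = min(candidates, key=lambda c: contamination_risk(*c))
print(f"best angle {best[0]:.1f} deg, spacing {best[1]:.2f} mm")
```

With enough random candidates, the selected design lands close to the scorer's optimum — the same brute-force logic, at toy scale, as screening thousands of hallucinated geometries.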
Echoing other scientists, Dr. Anandkumar said she dislikes the term hallucination. Her team’s paper on the new catheter avoids the word.
On the other hand, Harini Veeraraghavan, head of a Memorial Sloan Kettering Cancer Center lab in Manhattan, cited the term in a paper on using A.I. to sharpen blurry medical images. Its title in part read: “Hallucinated MRI,” short for magnetic resonance imaging.
Researchers at the University of Texas at Austin have also embraced the term. “Learning from Hallucination,” read the title of their paper on improving robot navigation.
And the head of the science division at DeepMind, a Google company in London that develops A.I. applications, praised hallucinations as promoting discovery, doing so shortly after two of his colleagues shared this year’s Nobel Prize in Chemistry with Dr. Baker.
“We have this amazing tool which can exhibit creativity,” the DeepMind official, Pushmeet Kohli, said in an interview.
An example, he said, was how a DeepMind computer in 2016 beat the world champion player of Go, a complex board game. The game’s turning point was move 37, fairly early in the contest. “We thought it was a mistake,” Dr. Kohli said. “And people realized as the game went on that it was a stroke of genius. So these models are able to produce these very, very novel insights.”
Dr. McGovern, the A.I. institute director, is also a professor of meteorology and computer science at the University of Oklahoma. She said A.I. hallucinations might be described less colorfully as “probability distributions” — a very old term in the world of science.
Weather sleuths, Dr. McGovern added, now use A.I. routinely to create thousands of subtle forecast variations, or ranges of probability. She said the rich imaginings let them discover unexpected factors that can drive extreme events like deadly heat waves. “It’s a valuable tool,” Dr. McGovern said.
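Dr. McGovern's "probability distributions" framing can be made concrete with a toy ensemble forecast. The forecast model below is a stand-in with made-up numbers, but the workflow — perturb the starting conditions, run many forecasts, read the probability of an extreme off the resulting distribution — is the standard ensemble idea she describes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in forecast model: a small daily warming drift plus
# day-to-day noise. A real model would integrate atmospheric physics.
def toy_forecast(initial_temp_c, days=5):
    temp = initial_temp_c
    for _ in range(days):
        temp += rng.normal(0.5, 1.5)
    return temp

# Build an ensemble: thousands of runs from slightly perturbed
# starting conditions, yielding a distribution of outcomes.
observed = 30.0  # today's temperature in deg C (illustrative)
ensemble = np.array([toy_forecast(observed + rng.normal(0, 0.3))
                     for _ in range(5_000)])

# The "range of probability": chance the five-day forecast
# exceeds a heat-wave threshold.
p_heat = (ensemble > 35.0).mean()
print(f"P(temp > 35 C in 5 days) = {p_heat:.2f}")
```

The payoff is the tail of the distribution: rare, extreme outcomes that no single best-guess forecast would surface.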
Dr. Baker, the recent Nobel Prize winner, has adopted the frank approach. “De novo protein design by deep network hallucination,” read the title of one of his 2021 papers, which appeared in Nature, a top scientific journal.
The phrase de novo — Latin for “anew” — draws a sharp contrast with how scientists in the early 1980s began tweaking the structures of known proteins that occur in nature.
In 2003, Dr. Baker and his colleagues achieved a far more ambitious goal: making the world’s first entirely new protein from scratch. They called it Top7. Their accomplishment was seen as a major advance because proteins are superstars of complexity. Experts liken the structure of DNA to a string of pearls and that of large proteins to hairballs. Their structures are so complicated that even detailed graphic representations are rough approximations.
As A.I. grew into a powerful new technology, Dr. Baker wondered if it could speed de novo design. His 2021 paper in Nature cited the inspiration of Google DeepDream — a model that morphs existing images into psychedelia. When people look at the full moon and see a man’s face, that’s called pareidolia, a perceptual quirk that turns ambiguous patterns into meaningful images. A version of that tendency is what DeepDream uses to create its surreal fantasies.
Dr. Baker’s plan was to see if A.I. could impose the pareidolia effect on ambiguous sets of amino acids, the building blocks of proteins. His team fed random strings of amino sequences into a model trained to recognize the structural features of real proteins. It worked — in spades.
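The recipe — start from random sequences and nudge them until a trained recognizer scores them as protein-like — can be sketched at toy scale. Everything here is illustrative: the "recognizer" is a fixed random scoring table standing in for a deep network, and the optimizer is simple hill-climbing rather than the gradient methods used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
LENGTH = 60

# Stand-in for a network trained to recognize real protein structure:
# a fixed random per-position preference table. A real pipeline would
# use a deep model's confidence score here.
weights = rng.normal(size=(LENGTH, len(AMINO_ACIDS)))

def protein_likeness(seq):
    return weights[np.arange(LENGTH), seq].sum()

# Start from a random amino-acid string, then hill-climb: keep any
# point mutation that makes the sequence look more "protein-like."
seq = rng.integers(len(AMINO_ACIDS), size=LENGTH)
start_score = current = protein_likeness(seq)
for _ in range(3_000):
    pos = rng.integers(LENGTH)
    old = seq[pos]
    seq[pos] = rng.integers(len(AMINO_ACIDS))
    trial = protein_likeness(seq)
    if trial >= current:
        current = trial  # keep the helpful mutation
    else:
        seq[pos] = old   # revert the harmful one

designed = "".join(AMINO_ACIDS[i] for i in seq)
print(designed)
print(f"score improved from {start_score:.1f} to {current:.1f}")
```

The output string starts as noise and ends up scoring far higher under the recognizer — the same "make the ambiguous input look like what you were trained on" dynamic, in miniature, that drives DeepDream's pareidolia.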
The paper said the test run created thousands of virtual proteins. It likened them to the explosion of A.I. cat images on the internet. “Just as simulated images of cats generated by deep network hallucination are clearly recognizable as cats,” the paper said, so too the artificial protein structures “resemble but are not identical to” the natural structures.
The Baker team then sought to turn the imagined proteins into the real thing — a step not unlike bringing digital cats to life. First, the team took information on the hallucinated molecules and used it as a blueprint to produce the strands of DNA that form genes. Then, as the 2021 paper reported, the eureka moment came as the genes were inserted into microbes and the tiny organisms churned out 129 new kinds of proteins unknown to science and nature.
Afterward, in early 2022, Dr. Baker described that moment as “the first demonstration” of how A.I. can accelerate de novo protein design. His follow-up papers of 2022 and 2023 once again used the word hallucination in their titles.
In an interview, Dr. Baker said his lab had taken a new step forward in the creative imaginings with an A.I. method known as diffusion. That is what powers DALL-E, Sora and other popular generators of visuals.
Dr. Baker praised diffusion as being better than hallucination at conjuring up novel protein designs. “It’s much faster and the success rate is higher,” he said.
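Diffusion's core move — start from pure noise and denoise step by step toward the kind of data the model was trained on — can be shown in one dimension. In this sketch the "data" is a known Gaussian, so the denoising direction (the score) is exact math rather than a learned network; protein diffusion models learn that quantity from structural data instead.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target "data" distribution: a Gaussian with known mean and spread.
MU, SIGMA = 3.0, 0.5

# Score (gradient of log-density) of the data after Gaussian noise of
# the given level is added. Real diffusion models learn this function.
def score(x, noise_std):
    return (MU - x) / (SIGMA**2 + noise_std**2)

# Annealed Langevin sampling: begin with pure noise, then take small
# denoising steps while the noise level is gradually dialed down.
x = rng.normal(0.0, 5.0, size=10_000)
for noise_std in np.linspace(3.0, 0.01, 50):
    eps = noise_std**2  # step size shrinks with the noise level
    x += 0.5 * eps * score(x, noise_std) + np.sqrt(eps) * rng.normal(size=x.size)

print(f"sample mean {x.mean():.2f}, sample std {x.std():.2f}")
```

The 10,000 samples start scattered across pure noise and end clustered near the target distribution — a one-line-of-math version of turning static into a plausible protein, image or video frame.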
In recent years, some analysts have worried that science is in decline. They point to a drop over recent decades in the number of breakthroughs and major discoveries.
A.I. backers argue that its bursts of creativity are coming to the rescue. On the design horizon, Dr. Baker and his colleagues see waves of protein catalysts that will harvest the energy of sunlight, turn old factories into sleek energy savers and help create a sustainable new world.
“The acceleration keeps on happening,” said Ian C. Haydon, a member of Dr. Baker’s team. “It’s incredible.”
Others concur. “It’s amazing what will come out in the next few years,” Dr. Kohli said. He sees A.I. as unlocking life’s deepest secrets and establishing a powerful new basis for curing ills, improving health and lengthening lives.
“Once we decipher and truly understand the language of life,” he said, “it will be magical.”
The post How Hallucinatory A.I. Helps Science Dream Up Big Breakthroughs appeared first on New York Times.