For years, Rick Stevens, a computer scientist at Argonne National Laboratory, pushed the notion of transforming scientific computing with artificial intelligence.
But even as Mr. Stevens worked toward that goal, government labs like Argonne — created in 1946 and sponsored by the Department of Energy — often took five years or more to develop powerful supercomputers that could be used for A.I. research. Mr. Stevens watched as companies like Amazon, Microsoft and Elon Musk’s xAI made faster gains by installing large A.I. systems in a matter of months.
Last month, Mr. Stevens welcomed a major change. The Energy Department began cutting deals with tech giants to shorten how long it takes national labs to land bigger machines.
At Argonne, outside Chicago, the A.I. chip giant Nvidia and the cloud provider Oracle agreed to deliver the lab’s first two dedicated A.I. systems: a modest-size machine in 2026 and a larger system later. In a shift, the tech companies, rather than the government, are expected to pay at least some of the costs of building and operating the hardware. And other companies are expected to share use of the machines.
“It’s a much more businesslike strategy,” said Mr. Stevens, an Argonne associate laboratory director and computer science professor at the University of Chicago. “It’s much more of a Silicon Valley kind of strategy.”
The A.I. boom is shaking up the national laboratories that have led some of the most cutting-edge scientific research, increasingly pushing them to emulate the tech giants. That’s because A.I. has added new urgency to the world of high-performance computing, promising to drastically speed up tasks like developing drugs, new batteries and power plants. Many of the labs now want dedicated A.I. hardware, and they want it more quickly.
That has led to deals with tech companies including the chipmakers Nvidia and Advanced Micro Devices. Oak Ridge National Laboratory, founded in 1943 and a leader in developing nuclear technology, recently said it expected an A.I. system called Lux to be installed in just six months in a project driven by AMD.
The moves dovetail with the Trump administration’s efforts to bolster the United States in an A.I. race against China, in part by cutting red tape.
“If we move at the old speed of government, we’re going to get left behind,” the energy secretary, Chris Wright, said at a briefing last month on the Oak Ridge announcements. “We’re going to have dozens of partnerships with companies to build facilities at commercial speed.”
The strategy emerged from talks that started last spring between Mr. Wright and the chiefs of many major chip and A.I. companies, according to lab officials and company executives. It is just one facet of a flurry of activity in supercomputing, which was the subject of a major conference this week in St. Louis.
Nvidia helped galvanize the action last month when Jensen Huang, its chief executive, unveiled plans for seven Department of Energy supercomputers. A day earlier, Oak Ridge announced two AMD-powered machines.
Supercomputer veterans said they could not recall announcements of nine major systems in one week.
“I don’t think that’s ever happened before,” said Trish Damkroger, a senior vice president at Hewlett Packard Enterprise, which is building five of the new systems.
Supercomputers are room-size machines that have historically been used to create simulations of complex processes, like explosions or the movement of air past aircraft wings. They use thousands of processor chips and high-speed networks, with the silicon components working together like one vast electronic brain.
The machines have many similarities with the hardware in A.I. data centers, which are the computing hubs that power the development of the technology. But scientific chores typically demand high-precision calculations, processing what are known as 64-bit chunks of data at a time.
Many A.I. chores require simpler math, so the latest systems use chunks of data as small as 4 bits to do many more calculations at once. Nvidia estimates that the large system being constructed for Argonne will handle more operations per second than the 500 largest conventional supercomputers combined. Such speeds can slash months off tasks like training large language models, which are the systems that underpin many A.I. products.
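To make that trade-off concrete, here is a minimal Python sketch (an illustration only, not any vendor’s actual number format) that rounds values to the 16 levels a 4-bit number can represent and measures the rounding error against standard 64-bit arithmetic:

```python
import numpy as np

def quantize_4bit(x, lo=-1.0, hi=1.0):
    """Crudely map each value to one of 16 evenly spaced levels,
    mimicking 4-bit storage (a hypothetical scheme for illustration)."""
    levels = 2**4 - 1  # 15 intervals between 16 representable values
    scaled = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return np.round(scaled * levels) / levels * (hi - lo) + lo

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1_000_000)

exact = x.astype(np.float64)   # scientific-grade, 64-bit precision
coarse = quantize_4bit(exact)  # A.I.-style low precision

# Each 4-bit value carries 16x less data than a 64-bit one, so hardware
# can move and multiply far more of them per second. But each value is
# only approximate, which a physics simulation usually cannot tolerate.
print("mean rounding error:", np.abs(exact - coarse).mean())
```

Each 4-bit value takes one-sixteenth the storage of a 64-bit one, which is why A.I. hardware can process so many more numbers per second; the cost is the rounding error printed above, tolerable when training neural networks but generally not in scientific simulation.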
The potential benefits have spurred private investments that dwarf those of the government. Microsoft, for example, plans to spend $7 billion on a single A.I. computing complex in Wisconsin. The speediest supercomputer at Lawrence Livermore National Laboratory, by contrast, cost about $600 million.
Those differences have pushed national labs to seek specialized A.I. systems and to change their buying practices, which often involved waiting years for future chips to be developed and following procedures such as requests for proposals, or RFPs.
“The old way we bought things is not the way we should do it going forward,” said Gary Grider, the high-performance computing division leader at Los Alamos National Laboratory, which is expected to receive two new supercomputers.
This past spring, the Energy Department began brainstorming how to speed up. It offered to provide space and electrical power for potential A.I. data centers at 16 national labs. Mr. Wright also invited tech leaders to come up with proposals.
One who responded was Lisa Su, the chief executive of AMD, which offered to build a machine in just a few months at Oak Ridge if the lab provided a location and maintained the system, Mr. Wright said at the briefing last month.
He recalled Ms. Su saying: “‘I’m going to pay for it. I’m going to build it, and then we’re going to split the use of it.’”
Ms. Su said there were multiple meetings with Mr. Wright, but did not discuss financial arrangements.
Oak Ridge, historically home to some of the largest scientific machines, said in October that it would add to that line in three years with a machine called Discovery. It expects the additional Lux system to help train specialized A.I. models and accelerate research in nuclear fusion and the discovery of new materials, said Gina Tourassi, an associate laboratory director.
Mr. Stevens has similar hopes at Argonne, where five new supercomputers were announced last month. Nvidia and Oracle first plan to deliver an A.I. system called Equinox, which will use 10,000 of Nvidia’s coveted Blackwell processors. They later expect to deliver Solstice, powered by 100,000 of those chips.
Mr. Stevens said he expects the machines not only to serve researchers but also to attract companies pushing the frontiers of A.I.
“Instead of thinking of the government or D.O.E. as the only customer, think of us as at the center of a consortium,” Mr. Stevens said. And instead of buying the new hardware outright, the lab can essentially pay for use of the systems as needed, he said.
Many questions remain about the changing approach, including the potential impact on taxpayers and whether Congress will have input.
“No matter how good these systems are for American innovation and leadership, it is unusual to have the funding methods and RFP process obscured,” said Addison Snell, chief executive of Intersect360 Research, a firm that tracks the supercomputer industry.
Most questions about the new purchasing approach should be answered after negotiations among the parties are completed, said Ian Buck, vice president and general manager of Nvidia’s accelerated computing business. But he added that Nvidia would not be paying for all the chips going to Argonne.
“It’s not a donation,” Mr. Buck said.