AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circulating through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.
Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.
Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.
Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including renewed talk within the administration about invoking the DPA after Anthropic’s Mythos announcement, one person with knowledge of those discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have all publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.
So what if nationalization actually happens?
In the most extreme scenario, top researchers from across the AI companies would be forced to work out of sensitive compartmented information facilities, or SCIFs, in the basement of the Pentagon, and to report to Hegseth. Computational capacity, too, would be centralized under one nationalized mega-operation. The work would be locked down, and the focus would be primarily on defense applications, as opposed to the products made for businesses and individuals—ChatGPT and the like—that dominate the market today.
All of this would constitute full nationalization, an absolute takeover of the industry that would hollow out the commercial businesses of its three leading players: OpenAI, Anthropic, and Google DeepMind. Based on a dozen conversations we’ve had with former Pentagon and Trump-administration officials, AI-policy experts, and legal scholars, such a situation is, in all likelihood, not going to happen.
For starters, it’s probably illegal, according to Charlie Bullock, a senior research fellow at the Institute for Law & AI: The Constitution generally prevents the government from seizing private property without paying for it, and the government is unlikely to easily produce the trillions of dollars that the industry is collectively worth. The top American AI labs might also immediately lose a fair portion of their research staff, because of restrictions barring foreign nationals from working on the most crucial defense-related technologies.
If AI firms were forced to focus primarily on defense applications, there would be the inevitable question of what to do with the massive consumer businesses these companies run. Would people use ChatGPT.gov, like buying a sundae from Cuba’s state-run ice-cream parlor? And if the goal of nationalization is to keep a competitive edge over China, it’s hard to imagine that Hegseth’s Pentagon could run an AI company more efficiently than Altman or Dario Amodei, the CEO of Anthropic.
But consider another possibility—slightly less extreme, though still capable of remaking the industry as we know it. The government could regulate AI companies like it does utilities. In the early 20th century, as electricity went from a luxury good to a necessity, state and federal governments saw a need to regulate how much energy companies could charge and to impose requirements around service reliability. In much the same way, the government could pass new laws regulating AI firms’ commercial activities. The companies could be prevented from charging more than it costs them to generate images and text, for instance, or required to provide a basic level of model speed and capability to all customers, a sort of AI net neutrality.
A hard pivot to government control would likely require new state and federal laws as well as heavy cooperation from tech companies—which, given the nation’s sclerotic politics and Silicon Valley’s libertarian leanings, could pose insurmountable barriers. But the notion is not so far-fetched. Some corners of Silicon Valley seem to be at least partially open to it. Altman has described a future in which “intelligence is a utility like electricity or water and people buy it from us on a meter.” Jensen Huang, the CEO of Nvidia, recently said that just as “every country has its electricity, you have your roads, you should have AI as part of your infrastructure.”
Such talk serves AI companies’ own interests, in part because being classified as a service provider can be, as the era of social media has demonstrated, an excellent way for companies to avoid liability for harmful or inaccurate information on their platforms. But it’s certainly possible that AI could become so entrenched that elected officials come to see it as an essential resource. Already, just as the federal government uses regulatory incentives and investment to spur the construction of new power plants and transmission lines, both the Biden and Trump administrations have undertaken initiatives that amount to industrial policy for AI, using federal dollars and regulatory authority to accelerate the construction of AI infrastructure on American soil.
OpenAI has already flirted with the notion of a “Right to AI,” suggesting in a recent policy document that the government should consider making a “baseline level of capability broadly available, including through free or low-cost access points.” Similar regulations already govern many aspects of digital communication. “Your internet-service provider, cable, telephone services, these things are considered so essential that the government basically says how the providers” can do business, Dean Ball, a former AI adviser to the Trump administration, told us. AI could be next.
For years, AI companies have insisted they need to be regulated—but only as they see fit. Should the federal government ever take AI regulation seriously, the utility route would be among the most aggressive approaches available. But, really, the AI industry would be getting what it asked for.

Before we get into other conceivable futures, an important caveat: A full-blown nationalization effort may be unlikely, but that calculus changes if a major global war breaks out or the economy collapses. During an emergency of historic scale—especially one under the Trump administration—anything is possible, Ball reminded us. Drastic measures become easier to justify, both legally and politically.
Imagine that over the next year President Trump continues his game of imperialist roulette: America is further eroding the trust of its international partners, NATO keeps crumbling, and a new geopolitical reality continues to take shape. Say that in the midst of this, China decides to invade Taiwan. The conflict escalates fast, drawing in the U.S. and reluctant allies. The ensuing war is a major one. The Pentagon, already drastically short on munitions after its forays in Iran, wants to apply the latest AI capabilities to its wartime efforts, and Hegseth demands that Anthropic allow the Pentagon unrestricted access to Claude, reigniting the dispute first set in motion earlier this year.
Because there is active conflict, Anthropic is more willing to engage with the government’s demands than it was previously, but the firm insists on continuous oversight of how the Pentagon is using Claude. The company fears that, in an effort to crack down on espionage, the Defense Department might build monitoring capabilities that surpass even the Chinese Communist Party’s, sliding America into an autocratic AI regime. Lest this sound speculative, it’s merely a restatement of Anthropic’s own position: Amodei has warned of a near future in which “a powerful AI” scans “billions of conversations from millions of people” to “gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”
The spat from earlier this year looks mild by comparison. Amodei remains stubbornly principled despite repeated requests from the Defense Department made under emergency laws. Hegseth responds by sending his troops to descend upon the company’s headquarters in San Francisco. Amodei is forcibly removed and replaced with a deferential Army general. The situation is exceedingly unlikely, but not without precedent: Soldiers once carried the chairman of one of America’s largest retailers out of his Chicago office after he failed to comply with federal demands during World War II.
Throughout American history, efforts to take control of industry have been rare, and limited mostly to times of crisis: President Woodrow Wilson nationalized the railroads during World War I, and Fannie Mae and Freddie Mac were placed under conservatorship during the 2008 financial crisis. Today, there are all kinds of possible emergencies. If a global financial crash drives AI companies into insolvency, the administration might swoop in to provide life support, as it did for many banks and car companies during the Great Recession. On the flip side, should AI models displace large swaths of the labor market, such that a handful of companies run most of the economy, “then some kind of nationalization becomes potentially imperative,” Samuel Hammond, the acting director of AI policy and chief economist at the Foundation for American Innovation, told us—to distribute wealth and simply to ensure the proper functioning of society. Both Anthropic and OpenAI have already suggested possible versions of such redistributive measures.
Advances in AI could be their own kind of disruptor: Imagine a Sputnik 2.0 moment where the White House decides that American companies need to consolidate resources if the U.S. wants to win the AI race against China. By exerting more control, America becomes more like China in the very race to beat it.
The thing about nationalization, though, is that it need not be all or nothing. Nationalization “has layers,” Hammond said. “Like an onion.” Perhaps the most likely fate for American AI companies is a future of soft nationalization—a world in which the government doesn’t fully control AI labs and their models but instead enacts an escalating series of policies and establishes close partnerships with private companies to shape the technology.
By some measures, soft nationalization has already begun. The Trump administration has already taken a 10 percent stake in Intel, a major semiconductor manufacturer, providing the White House with (some) direct financial leverage over the company. OpenAI has appointed the retired general and former NSA director Paul Nakasone to its board. Meanwhile, the Army recently established a new detachment for senior tech leaders, and its first four recruits included executives from Meta, Palantir, and OpenAI.
The top AI companies are coordinating with government officials as their products’ military and intelligence implications advance. OpenAI, which scooped up a contract with the Pentagon after Anthropic’s fell apart, has said it will deploy its own engineers to work alongside the military. The firm has also been briefing governments—at the state, federal, and international levels—on the capabilities of a new OpenAI cybersecurity model. Google is reportedly negotiating its own Pentagon contract to allow Gemini to be used in classified settings. And even Anthropic is coming back around. The company is fighting the Pentagon in court over a “supply-chain risk” designation that Hegseth slapped on the firm amid the dispute. But after Anthropic announced its Mythos model, a group of tech executives including Amodei spoke with Vice President Vance and others to discuss the risks, and Amodei took a trip to the White House. Last week, President Trump said a possible Pentagon deal with Anthropic might still be on the table.
The White House, OpenAI, and Anthropic all paid lip service to the value of cooperation when we reached out to them. The Trump administration is “working with frontier AI labs to discuss opportunities for collaboration,” a White House official told us. A spokesperson for OpenAI said: “As AI systems become more capable, it is only going to become more important for industry to work with governments.” And an Anthropic spokesperson told us that Amodei’s recent visit to the White House was “productive” and that the firm believes that governments must play a central role in addressing the technology’s national-security implications. (Google DeepMind and the Pentagon did not return repeated requests for comment.)
This campfire ethos could easily fall apart, and tensions clearly exist. But so long as it’s in both the AI firms’ and Trump’s interests to please each other, we may see the leading AI companies partnering even more closely with the U.S. military to accelerate the development of defense applications, analogous to what contractors including Palantir, Boeing, and Lockheed Martin have done for years. As a protective measure, the White House might ask AI companies to strengthen their security practices to prevent espionage and the exfiltration of the most capable versions of the technology (consider that a handful of unauthorized users have reportedly gained access to Mythos). The government could even designate certain research as classified and subject technologies to export controls, and federal employees could embed inside the companies to oversee various safety measures and run their own, independent evaluations. Every nuclear power plant in America has at least two on-site government inspectors who check daily to confirm compliance with federal safety requirements. So why not AI companies too?
If such partnerships persist, one could imagine private companies resisting certain government demands. But even without new legislation, the White House can easily exert greater authority over industry. “There’s quite a lot of power that the federal government can wield,” Paul Scharre, an executive at the Center for a New American Security who previously did policy work at the Department of Defense, told us. “And even more so if you have an administration that’s willing to stretch the bounds of executive power.” Anthropic’s supply-chain-risk designation—a label that effectively bars the military from doing business with the company, and that is typically reserved for companies with ties to foreign adversaries—was a clear example of the government flexing its muscles. So was the Biden administration’s decision to block Nvidia from selling its most advanced AI chips to China in 2022. (The Trump administration has since relaxed restrictions, claiming that selling to China was the better strategy for winning the AI race.)
One of the most salient tools available remains the Defense Production Act, the law that Hegseth threatened Anthropic with before pursuing the supply-chain-risk designation. The act has been used over the decades to support the manufacture of military equipment such as bombers and tanks, though in recent years, it has been used more expansively. Both the first Trump and the Biden administrations used it to accelerate pandemic safety measures, and Biden relied on the law in a since-repealed executive order to compel AI companies to share certain information about model training and evaluations with the government. Last week, Trump invoked the act to fund new energy projects. Actually pursuing the DPA as a general tool for controlling AI companies would raise a host of thorny legal issues, but that hasn’t exactly stopped the Trump administration in the past.
Such reins on an industry that has billed itself as capable of extinguishing humankind are, theoretically, in everyone’s interest. It would seem to clearly benefit the American people to have democratically elected institutions—rather than corporate executives—overseeing a set of technologies with huge implications for the nation’s security and well-being. It’s also historically anomalous for a private industry to dictate the deployment of such a powerful, general-purpose technology. With the announcement of Mythos, Anthropic has effectively been functioning as a geopolitical actor, briefing allied governments on the model’s capabilities. The European Commission, for instance, has met with Anthropic three times since Mythos was announced—although as of Wednesday, the company had not yet given European Union officials access.
The government should play a role in dictating the terms of how AI transforms the world. But the ongoing fracturing of American politics, and especially the capricious and authoritarian-leaning tendencies of the current administration, complicates everything. Entrusting the future of generative AI entirely to Altman and Amodei, or entirely to Trump and Hegseth, would produce two very different but similarly disastrous outcomes—a “Scylla and Charybdis” dynamic, as Bullock put it, between the tremendous concentration of power in government and in a small cadre of companies.
The uncomfortable truth is that no private company should be trusted to unilaterally steer the future of AI development, but Americans should also have serious questions about whether government control is in their best interest—not least under an erratic and norm-shattering Trump administration. The Manhattan Project coordinated the efforts of scientists, private companies, and America’s leaders. What if, instead, Boeing and DuPont had been racing against each other to develop the atomic bomb while Hegseth and Trump led the military? We are diving headfirst into the 21st-century equivalent of such a situation. Our political dysfunction is about to ram into Silicon Valley’s immeasurable power.