On his second day in office this year, President Trump underscored his unequivocal support for the tech industry. Standing at a lectern next to tech leaders, he announced the Stargate Project, a plan to pump $500 billion in private investment over four years into artificial intelligence infrastructure. For comparison: The Apollo mission, which sent the first men to the moon, spent around $300 billion in today’s dollars over 13 years. Sam Altman, OpenAI’s chief executive, played down the investment. “It sounds crazy big now,” he said. “I bet it won’t sound that big in a few years.”
In the decade that I have observed Silicon Valley — first as an engineer, then as a journalist — I’ve watched the industry shift into a new paradigm. Tech companies have long reaped the benefits of a friendly U.S. government, but in its early months the Trump administration has made clear that the state will now grant new firepower to the industry’s ambitions. The Stargate announcement was just one signal. Another was the Republican tax bill that the House passed last week, which would ban states from regulating A.I. for the next 10 years.
The leading A.I. giants are no longer merely multinational corporations; they are growing into modern-day empires. With the full support of the federal government, soon they will be able to reshape most spheres of society as they please, from the political to the economic to the production of science.
When I took my first job in Silicon Valley 10 years ago, the industry’s wealth and influence were already expanding. The tech giants had grandiose missions — take Google’s, to “organize the world’s information” — which they used to attract young workers and capital investment. But with the promise of developing artificial general intelligence, or A.G.I., those grandiose missions have turned into civilizing ones. Companies claim they will bring humanity into a new, enlightened age — that they alone have the scientific and moral clarity to control a technology that, in their telling, will usher us to hell if China develops it first. “A.I. companies in the U.S. and other democracies must have better models than those in China if we want to prevail,” said Dario Amodei, chief executive of Anthropic, an A.I. start-up.
This language is as far-fetched as it sounds, and Silicon Valley has a long history of making promises that never materialize. Yet the narrative that A.G.I. is just around the corner and will usher in “massive prosperity,” as Mr. Altman has written, is already leading companies to accrue vast amounts of capital, lay claim to data and electricity, and build enormous data centers that are accelerating the climate crisis. These gains will fortify tech companies’ power and erode human rights long after the shine of the industry’s promises wears off.
The quest for A.G.I. is giving companies cover to vacuum up more data than ever before, with profound implications for people’s privacy and intellectual property rights. Before investing heavily in generative A.I., Meta had amassed data from nearly four billion accounts, but it no longer considers that enough. To train its generative A.I. models, the company has scraped the web with little regard for copyright and even considered buying up Simon & Schuster to meet the new data imperative.
These developments are also convincing companies to escalate their consumption of natural resources. Early drafts of the Stargate Project estimated that its A.I. supercomputer could need about as much power as three million homes. And McKinsey now projects that by 2030, the global grid will need to add around two to six times the energy capacity it took to power California in 2022 to sustain the current rate of Silicon Valley’s expansion. “In any scenario, these are staggering investment numbers,” McKinsey wrote. One OpenAI employee told me that the company is running out of land and electricity.
Meanwhile, there are fewer independent A.I. experts to hold Silicon Valley to account. In 2004, only 21 percent of people graduating from Ph.D. programs in artificial intelligence joined the private sector. In 2020, nearly 70 percent did, one study found. They’ve been won over by the promise of compensation packages that can easily exceed $1 million. This means that companies like OpenAI can lock down the researchers who might otherwise be asking tough questions about their products and publishing their findings openly. Based on my conversations with professors and scientists, ChatGPT’s release has exacerbated that trend — with even more researchers joining companies like OpenAI.
This talent monopoly has reoriented the kind of research that’s done in this field. Imagine what would happen if most climate science were done by researchers who worked in fossil fuel companies. That’s what’s happening with artificial intelligence. Already, A.I. companies could be censoring critical research into the flaws and risks of their tools. Four years ago, the leaders of Google’s ethical A.I. team said they were ousted after they wrote a paper raising questions about the industry’s growing focus on large language models, the technology that underpins ChatGPT and other generative A.I. products.
These companies are at an inflection point. With Mr. Trump’s election, Silicon Valley’s power will reach new heights. The president named David Sacks, a billionaire venture capitalist and A.I. investor, as his A.I. czar, and empowered another tech billionaire, Elon Musk, to slash through the government. Mr. Trump brought a cadre of tech executives with him on his recent trip to Saudi Arabia. If Senate Republicans now vote to prohibit states from regulating A.I. for 10 years, Silicon Valley’s impunity will be enshrined in law, cementing these companies’ empire status.
Their influence now extends well beyond the realm of business. We are closer than ever to a world in which tech companies can seize land, operate their own currencies, reorder the economy and remake our politics with little consequence. That comes at a cost: When companies rule supreme, people lose their ability to assert their voice in the political process, and democracy cannot hold.
Technological progress does not require businesses to operate like empires. Some of the most impactful A.I. advancements came not from tech behemoths racing to recreate human levels of intelligence, but from the development of relatively inexpensive, energy-efficient models that tackle specific tasks such as weather forecasting. With AlphaFold, DeepMind built a nongenerative A.I. model that predicts protein structures from their sequences — a function critical to drug discovery and understanding disease. Its creators were awarded the 2024 Nobel Prize in Chemistry.
A.I. tools that help everyone cannot arise from a vision of development that demands the capitulation of the majority to the self-serving agenda of the few. Transitioning to a more equitable and sustainable A.I. future won’t be easy: It’ll require everyone — journalists, civil society, researchers, policymakers, citizens — to push back against the tech giants, produce thoughtful government regulation wherever possible and invest more in smaller-scale A.I. technologies. When people rise, empires fall.
Karen Hao is a reporter who covers artificial intelligence. She was formerly a foreign correspondent covering China’s technology industry for The Wall Street Journal and a senior editor for A.I. at MIT Technology Review.
Source photograph by Gary Yeowell/Getty Images