One of the few things Republicans and Democrats seem to agree on in Washington these days is the importance of AI. Immediately after taking office, Donald Trump signed an executive order aimed at solidifying the United States’ “position as the global leader in AI” so as to “secure a brighter future for all Americans,” after which he announced White House support for a $500 billion partnership between OpenAI, SoftBank, and Oracle to build data centers and other infrastructure meant to expand the capabilities of large language models like ChatGPT. Democratic pundits and politicians have been enthusiastic, too, emphasizing the need for U.S. leadership. “If America falls behind China on AI,” Chuck Schumer warned earlier this year, “we will fall behind everywhere: economically, militarily, scientifically, educationally, everywhere.”
On Thursday morning, the Senate Committee on Commerce, Science and Transportation hosted a hearing on “Winning the AI Race.” Featuring some of the industry’s biggest names, like OpenAI CEO Sam Altman and Microsoft vice chair and president Brad Smith, the panel explored “regulatory barriers on the AI supply chain” in order to “secure U.S. dominance in the 21st century global industrial revolution” over China, specifically. Like most D.C. chatter about the importance of AI, though, the hearing was light on answers to a few seemingly basic questions: What exactly is “artificial general intelligence,” or AGI, as the experts kept referring to it? What value are these companies’ products providing, and what will they do in the future that makes them so essential to U.S. national security?
Senators didn’t ask those kinds of questions. And the executives they’d invited to Capitol Hill didn’t volunteer answers, preferring the sorts of quasi-religious generalizations that have become a hallmark of the industry. Altman conceded in his prepared testimony that AGI is “weakly defined,” but suggested that it was enough to describe it as “a system that can tackle increasingly complex problems, at human level, in many fields.” It will be, he argued, “the most powerful tool ever created,” enabling people to “build incredible things for each other and improve their quality of life.” It can usher in a future that “can be almost unimaginably bright, but only if we take concrete steps to ensure that an American-led version of AI, built on democratic values like freedom and transparency, prevails over an authoritarian one.” Smith vowed that AI “has the potential to become the most useful tool for people ever invented.”
Altman’s prepared testimony listed a few similarly broad-strokes examples of how it’s currently being used. U.S. National Laboratories are employing OpenAI products to “accelerate breakthroughs in areas like energy,” while ChatGPT is helping state employees in Pennsylvania “do administrative tasks more quickly.” In his opening remarks, however, Altman mostly focused on how much he liked having a computer when he was a kid.
The ways people use these products in real life, meanwhile, are plain. Social media platforms are clogged with AI-generated slop and photos made to resemble Studio Ghibli productions. Email platforms push users to generate AI summaries of one-line emails. As the Harvard Business Review notes, the top usage of AI technology is for therapy and companionship. People rely on large language models for life advice, and for help crafting texts to their friends and crushes. A recent New York Magazine feature catalogues the widespread use of OpenAI’s ChatGPT by college students, who call on it to write essays and even respond to professors’ prompts asking them to introduce themselves. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” said a Cal State Chico ethics professor who has spent the “better part of the past two years grading AI-generated papers.” The tool makes practically every academic task easier, with no downside apparent to students in the moment. It is, as the article points out, impossible for growing numbers of them to resist.
If Altman and Smith are vague about the soaring potential of AI, they’ve been comparatively specific about how they’d like policymakers to help them. Winning the “AI innovation race,” Smith argued on Thursday, “will require massive data centers and AI infrastructure that need federal support to expand and modernize the electrical grid on which they depend.” Last year, OpenAI spent $1.76 million lobbying the federal government, up from just $260,000 the year before. Its demands have been precise. In a 15-page comment on Trump’s second executive order on the subject of artificial intelligence, OpenAI requested that the federal government override state attempts to regulate the company or limit infrastructure development; implement export controls so as to keep their products out of China, preventing additional competition there; and loosen intellectual property protections so as to allow the company’s large language models to train on more material. It called for a National Transmission Highway Act to expand the construction of transmission lines, fiber connectivity, and gas pipelines, and asked to use the Defense Production Act to “shorten timelines for data center power infrastructure projects.” OpenAI also argued that AI developers should be granted access to massive amounts of government data. In exchange for that access, the authors of the comment wrote, “developers using this data could work with governments to unlock new insights that help it develop better public policies.”
At Thursday’s hearing, Republicans and Democrats alike appeared credulous about the virtues of AI. They certainly didn’t inquire about why the companies gathered before them seem to be so bad at making money off of it. OpenAI, for instance, lost $5 billion last year. And although Microsoft has demanded that governments roll out the red carpet for new data centers, over the last six months it’s walked back plans for two gigawatts’ worth of data center projects in the U.S. and Europe, thanks to an oversupply relative to its current demand forecast. Given AI developers’ allegedly urgent and expanding needs for land, electricity, and control over local governments, however, there’s been relatively little public debate about the already mounting harms posed by these technologies: the degradation of future generations’ abilities to read, write, and think critically; the filling up of our digital lives with ugly garbage; the impairment of our capacity to form and maintain relationships with other human beings. If artificial intelligence is so important to the United States, in other words, then why does it also seem to be making so many parts of life here so shitty?
The post AI Execs Are Demanding Government Support—No Questions Asked appeared first on New Republic.