We’ve all heard about the revolutionary breakthroughs that could result from the deployment of artificial intelligence, including cures for cancer, advances in energy and individually tailored education for every student. These benefits would be truly game-changing.
Unfortunately for many Americans, these advantages remain distant. And because of the lack of sensible rules governing A.I. technology, we are more familiar with its darker side: the theft of people’s voices and visual likenesses; scams directed at seniors; political attack videos in which you can’t tell if what you’re seeing or hearing is actually the candidate you love (or hate); and worst of all, children dying by suicide after turning to A.I. chatbots for help.
These harms will only multiply. That’s why it has been critical for states to step up and pass desperately needed A.I. safety standards while Congress sadly continues to delay enacting federal standards. Now we risk going backward, with President Trump saying on Monday that he will sign an executive order that will replace state laws with “One Rulebook” that the public has never seen.
That executive order should concern every American. Tech companies should not be allowed to use their lobbying power to undo the few protections Americans have from the downsides of A.I. — passed at the state level with bipartisan support. Congress urgently needs to stop delaying the passage of mutually agreed upon federal A.I. standards. But it remains paramount that states be able to protect people right now, before such rules are enacted.
Despite a series of well-meaning and thorough bipartisan Senate meetings, Congress has been unable to overcome its own institutional inertia to pass comprehensive A.I. regulation. And tech leaders — who once warned that “mitigating the risk of extinction from A.I. should be a global priority” — are at best divided on what to do or, at worst, actively lobbying against proposals they think will thwart their short-term interests.
The most serious federal A.I. protection that has passed Congress and been signed into law by Mr. Trump is a bill I led with Senator Ted Cruz and 20 others, the Take It Down Act. This legislation allows victims to remove intimate images — both authentic and A.I.-created deepfakes — published without their consent. While it is a good model in that it requires platforms to take down content, it doesn’t scratch the surface of the many privacy, economic and national security risks A.I. poses.
Enter the states. After years of waiting on Congress, both Democratic and Republican governors and state legislatures have passed their own deeply needed A.I. laws. Tennessee’s ELVIS Act gives artists control over their A.I.-generated digital replicas so others cannot use their voices and likenesses without consent. New laws in Utah require some companies to disclose when people are interacting with A.I. And from Alabama to Minnesota, 28 states have laws to rein in deceptive political deepfakes.
Former Supreme Court Justice Louis Brandeis once argued that states are the laboratories of democracy. Inspired by state action, many of us at the federal level are pressing for similar laws. Senator Chuck Schumer and a bipartisan group of senators have put forward a road map to support A.I. innovation and improve safeguards. Senator John Thune and I have come together to lead a bill that would promote innovation and transparency for A.I. systems in high-risk settings such as health care or public safety. Other federal bills would protect creators online, similar to Tennessee’s ELVIS Act.
But as of now, these are just concepts and bills, not laws. States have no choice but to act.
Tech lobbyists have frequently opposed even the most sensible federal standards and rules — such as labeling videos as produced by A.I. or taking down unauthorized content. In an act of total hubris, they are now arguing that states should be banned from regulating A.I., and pushing the president and Congress to override the states’ laws. Those efforts included a recent failed attempt to shoehorn a moratorium on state A.I. laws into Congress’s annual defense bill, and a similar failed attempt this summer as part of congressional Republicans’ budget bill, which the Senate rejected in a 99-1 vote.
Details of the new executive order aren’t yet known, but a draft that circulated last month directed the U.S. attorney general to sue states to overturn their A.I. laws and to withhold broadband grants and other funding from states that have such laws.
Even if the executive order is challenged in court (as it should be), the industry’s intent is clear. Companies often say the right thing about wanting rules in place while actively working behind the scenes to scuttle major safeguards. Now they and their allies in the White House and in Congress want to strip Americans of the few legal protections they currently have from A.I.-created harms, rather than work with lawmakers to ensure these technologies are deployed responsibly.
This is wrong. We have seen the tragic consequences of the lack of enforceable safety standards, such as the young boy who died by suicide after confiding in ChatGPT about his emotional struggles and his plans to end his life. Though the chatbot suggested he seek help, it also provided feedback on a photograph of a noose that the boy had made. Repealing what standards exist at the state level will only make matters worse.
Once we actually have federal standards passed by Congress, it should be for Congress to decide whether to pre-empt state laws or allow them to go further. And as A.I. continues to evolve, there will always be new applications of the technology that spur states to act before the federal government does. That is something we should encourage — it is how our laboratories of democracy were intended to function.
But we can’t supersede state protections until we have strong, enforceable federal standards in place. So A.I. companies should join us in putting meaningful safeguards spearheaded by Congress in place at the federal level — and stop pretending that in the meantime, state standards are too much of a burden to bear. After all, how can you expect us to believe you’re on the precipice of creating groundbreaking superintelligence if you can’t manage to comply with a handful of state laws?
Tech leaders need to understand: There will be safety standards for these products. If they do not want a patchwork of state laws, they should work with Congress to pass comprehensive standards. Until then, states have a right — and a duty — to stand up for their citizens.
Amy Klobuchar, a Democrat, is a U.S. senator from Minnesota.




