Pennsylvania’s attorney general recently accused a police officer of taking photos in a women’s locker room, secretly filming people while on duty and possessing a stolen handgun. But he was unable to bring charges related to a cache of photos found on the officer’s work computer featuring lurid images of minors created by artificial intelligence. When the computer was seized, in November, creating digital fakes was not yet considered a crime.
Since then, a statewide ban on such content has taken effect. While it came too late to apply to the police officer’s case, the state’s attorney general, Dave Sunday, has already used the law to charge another man who was accused of having 29 files of A.I.-generated child sexual abuse material in his home.
Over the past two years, American legislators have grown increasingly alarmed by the threat of malicious deepfakes. Sexual images of middle school students have been digitally faked without their permission. Vice President JD Vance disavowed an almost certainly inauthentic clip that mimicked his voice to criticize Elon Musk. An ad featuring an A.I.-generated version of the actress Jamie Lee Curtis was removed from Instagram only after she posted a public complaint.
Legislators are responding. Already this year, 26 laws governing various kinds of deepfakes have been enacted, following 80 in 2024 and 15 in 2023, according to the political database Ballotpedia. This month in Tennessee, sharing deepfake sexual images without permission became a felony that carries up to 15 years of prison time and as much as $10,000 in fines. Iowa enacted two bills related to sexually explicit deepfakes last year, one of which established sexual images of children generated by A.I. as a felony punishable by up to five years in prison and a $10,245 fine for the first offense. In New Jersey, a recently approved ban on malicious deepfakes could result in a fine of up to $30,000 and prison time.
California has been especially aggressive in reacting to deepfakes, passing eight related bills in September alone, including five on a single day.
“We’re in a very dangerous time, and we’re playing defense on everything that we do,” said Josh Lowenthal, a Democrat in the California Assembly, while introducing a session last week in Sacramento on the dangers of deepfakes.
Mr. Lowenthal, who co-sponsored a recently introduced bill targeting sexually explicit deepfake material, later watched a demonstration in which the technology spat out a realistic image of him in a prison cell and produced a fake news story about comments he never made.
“I would’ve thought that was me,” he said after hearing deepfake audio of his voice, generated on the spot.
Reining in deepfakes has also become a federal priority, and a markedly bipartisan one. Congress overwhelmingly passed the Take It Down Act, which criminalizes the nonconsensual sharing of sexually explicit photos and videos, including A.I. content, and requires tech platforms to quickly remove the content once they are notified. President Trump signed the bill in the White House Rose Garden on Monday, accompanied by his wife, Melania, who backed the legislation.
But lawmakers’ enthusiasm for deepfake legislation has also set off a surge of pushback. Critics complain that many of the laws stifle free speech, constrain American competitiveness and are so complicated to enforce that they are, in effect, toothless.
Because of those concerns, some Republicans in Congress are trying to curb the state actions. They are now considering a 10-year moratorium that would stop states from passing and enforcing legislation related to artificial intelligence, giving the federal government sole regulatory authority and lessening the pressure on A.I. companies. Soon after re-entering office, Mr. Trump revoked an executive order from his predecessor that sought to ensure the technology’s safety and transparency, issuing his own executive order that decried “barriers to American A.I. innovation” and pushed the United States “to retain global leadership” in the field.
Regulating artificial intelligence requires balance, said Representative Josh Gottheimer, a Democrat from New Jersey who has helped write multiple deepfake bills. For all its potential dangers, he said, the technology could also become a powerful engine for job creation and creative expression.
“It’s an ever-evolving space,” said Mr. Gottheimer, a candidate for governor who last month posted a video that featured, with a disclosure, a digitally generated version of himself boxing with Mr. Trump. “The key is making sure that people are protected as we harness the opportunities here.”
Some state laws have also been challenged in court. In California, a conservative YouTube creator who posted an edited campaign video spoofing former Vice President Kamala Harris’s voice sued the attorney general last fall over two laws focused on election-related deepfakes. His argument: The regulations force social media companies to censor protected political speech, including parodies, and allow anybody to sue over content that he or she dislikes.
The lawsuit now includes plaintiffs such as The Babylon Bee, a right-wing satirical site; Rumble, the right-wing streaming platform; and X, the social media company owned by Mr. Musk (which last month also sued Minnesota over a similar law). A federal judge ordered that enforcement of one of the California laws be temporarily paused, saying it “acts as a hammer instead of a scalpel.”
Litigation isn’t the only challenge to regulating deepfakes. In Dubuque County, Iowa, Sheriff Joseph L. Kennedy is assisting a local police department with a case involving male high schoolers who shared images of female students’ faces attached to artificially generated nude bodies.
Such cases are time-consuming to work through, requiring careful documentation, data preservation efforts, subpoenas and search warrants for devices, Sheriff Kennedy said. Occasionally, the companies behind the websites or apps that people use to make A.I. images are uncooperative, especially if they are based in a country where an Iowa law has no power, he said.
“That’s where you can hit snags and are short on options for what you can do,” he said. “Sometimes, it just seems like we’re chasing our tails.”
While most deepfake bans are focused on sexual, political or artistic content, the technology also has banks and other businesses on high alert. Michael S. Barr, a member of the Federal Reserve’s board of governors, said in a speech last month that the technology “has the potential to supercharge identity fraud.”
One deepfake scam bilked Arup, a British design and engineering company that worked on the Sydney Opera House and Beijing’s Bird’s Nest stadium, out of $25 million last year. Fraudsters also tried to target Ferrari last summer, using WhatsApp messages that mimicked the southern Italian accent of the automaker’s chief executive.
“If this technology becomes cheaper and more broadly available to criminals — and fraud detection technology does not keep pace — we are all vulnerable to a deepfake attack,” Mr. Barr said.
Tiffany Hsu reports on the information ecosystem, including foreign influence, political speech and disinformation.