Lawmakers in California last month advanced about 30 new measures on artificial intelligence aimed at protecting consumers and jobs, one of the biggest efforts yet to regulate the new technology.
The bills would impose some of the toughest restrictions in the nation on A.I., which some technologists warn could kill entire categories of jobs, throw elections into chaos with disinformation, and pose national security risks. The California proposals, many of which have gained broad support, include rules to prevent A.I. tools from discriminating in housing and health care services. They also aim to protect intellectual property and jobs.
California’s legislature, which is expected to vote on the proposed laws by Aug. 31, has already helped shape U.S. tech consumer protections. The state passed a privacy law in 2020 that curbed the collection of user data, and in 2022 it passed a child safety law that created safeguards for those under 18.
“As California has seen with privacy, the federal government isn’t going to act, so we feel that it is critical that we step up in California and protect our own citizens,” said Rebecca Bauer-Kahan, a Democratic Assembly member who chairs the State Assembly’s Privacy and Consumer Protection Committee.
As federal lawmakers stall on regulating A.I., state legislators have stepped into the vacuum with a flurry of bills poised to become de facto regulations for all Americans. Tech laws like those in California frequently set precedent for the nation, in large part because companies find it difficult to comply with a patchwork of rules that varies from state to state.
State lawmakers across the country have proposed nearly 400 new A.I. bills in recent months, according to the lobbying group TechNet. California leads the states with 50 bills proposed, although that number has narrowed as the legislative session has proceeded.
Colorado recently enacted a comprehensive consumer protection law that requires A.I. companies to use “reasonable care” while developing the technology to avoid discrimination, among other issues. In March, the Tennessee legislature passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which protects musicians from having their voices and likenesses used in A.I.-generated content without their explicit consent.
It’s easier to pass legislation in many states than at the federal level, said Matt Perault, executive director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. Forty states now have “trifecta” governments, in which both houses of the legislature and the governor’s office are run by the same party, the most since at least 1991.
“We’re still waiting to see what proposals actually become law, but the massive number of A.I. bills introduced in states like California shows just how interested lawmakers are in this topic,” he said.
And the state proposals are having a ripple effect globally, said Victoria Espinel, the chief executive of the Business Software Alliance, a lobbying group representing big software companies.
“Countries around the world are looking at these drafts for ideas that can influence their decisions on A.I. laws,” she said.
More than a year ago, a new wave of generative A.I. like OpenAI’s ChatGPT provoked regulatory concern as it became clear the technology had the potential to disrupt the global economy. U.S. lawmakers held several hearings to investigate the technology’s potential to replace workers, violate copyrights and even threaten human existence.
The OpenAI chief executive Sam Altman testified before Congress and called for federal regulations roughly a year ago. Soon after, Sundar Pichai, chief executive of Google; Mark Zuckerberg, chief executive of Meta; and Elon Musk, chief executive of Tesla, gathered in Washington for an A.I. forum hosted by the Senate majority leader, Chuck Schumer, Democrat of New York. The tech leaders warned of the risks their products presented and called on Congress to create guardrails. They also asked for support for domestic A.I. research to ensure the United States could maintain its lead in developing the technology.
At the time, Mr. Schumer and other U.S. lawmakers said they wouldn’t repeat past mistakes of failing to rein in emerging technology before it became harmful.
Last month, Mr. Schumer introduced an A.I. regulation road map that proposed $32 billion in annual investments but few specific guardrails on the technology in the near term. This year, federal lawmakers have introduced bills to create an agency to oversee A.I. regulations, proposals to clamp down on disinformation generated by A.I. and privacy laws for A.I. models.
But most tech policy experts say they don’t expect federal proposals to pass this year.
“Clearly there is a need for harmonized federal legislation,” said Michael Karanicolas, executive director of the Institute for Technology Law and Policy at the University of California, Los Angeles.
State and global regulators have rushed to fill the gap. In March, the European Union adopted the A.I. Act, a law that curbs law enforcement’s use of tools that can discriminate, like facial recognition software.
The surge of state A.I. legislation has touched off a fierce lobbying effort by tech companies against the proposals. That effort is particularly pronounced in Sacramento, the California capital, where nearly every tech lobbying group has expanded its staff to lobby the Legislature.
The 30 bills that passed either the Senate or the Assembly will now go to various committees for further consideration before the Legislature ends its session later this summer. Democrats there control the Assembly, the Senate and the governor’s office.
“We’re in a unique position because we are the fourth-largest economy on the planet and where so many tech innovators are,” said Josh Lowenthal, an Assembly member and Democrat, who introduced a bill aimed at protecting young people online. “As a result, we are expected to be leaders and we expect that of ourselves.”
Three of those bills are designed to protect actors and singers, living or dead.
SAG-AFTRA, the union for actors and other creators, helped write a bill that would require studios to obtain explicit consent from actors for the use of their digital replicas. The union said the public has expressed strong support for intellectual property protections after high-profile conflicts between actors and A.I. companies.
Last month, for instance, the actress Scarlett Johansson accused OpenAI of copying her voice to develop a voice assistant without her permission. OpenAI has denied the accusation.
California is an important battleground because “the Legislature tends to be progressive and believes strongly in consumer protection and worker rights,” said Duncan Crabtree-Ireland, the national executive director and chief negotiator for SAG-AFTRA. “But it is also where five of the six biggest A.I. companies in the world are based.”
The bill gaining the most traction would require safety testing of the most advanced A.I. models, such as OpenAI’s GPT-4, which powers ChatGPT, and the image generator DALL-E, systems that can produce humanlike writing and eerily realistic images. The bill, by State Senator Scott Wiener, a Democrat, would also give the state attorney general the power to sue for consumer harms.
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems.)
On May 8, the California Chamber of Commerce and tech lobbying groups wrote a letter to appropriations committee members who were considering the bill. The letter described the proposal as “vague and impractical,” saying it would create “significant regulatory uncertainty” that discourages innovation.
Chamber of Progress, a tech trade group with lobbyists in California, has also criticized the bill. It issued a report this week noting the state’s dependence on tech businesses and their tax revenue, which totals around $20 billion annually.
“Let’s not overregulate an industry that is located primarily in California, but doesn’t have to be, especially when we are talking about a budget deficit here,” said Dylan Hoffman, executive director for California and the Southwest for TechNet, in an interview.
Mr. Wiener said his safety-testing bill would likely be amended in the coming weeks to add provisions supporting more transparency in A.I. development and to limit the required tests to the biggest systems, those that cost more than $100 million to develop. He stressed that many in the tech sector have supported the bill.
“I would prefer that Congress act, but I’m not optimistic they will,” he said.