BRUSSELS — The European Union has missed a key milestone in its effort to rein in the riskiest artificial intelligence models amid heavy lobbying from the U.S. government.
After ChatGPT stunned the world in November 2022, EU legislators quickly realized these new AI models needed tailor-made rules.
But two and a half years later, an attempt to draft a set of rules for companies to sign on to has become the subject of an epic lobbying fight involving the U.S. administration.
Now the European Commission has blown past a legal deadline of May 2 to deliver.
Pressure has been building in recent weeks: In a letter to the Commission in late April, obtained by POLITICO, the U.S. government said the draft rules had “flaws” and echoed many concerns aired in recent months by U.S. tech companies and lobbyists.
It’s the latest pushback from the Trump administration against the EU’s bid to become a super tech regulator, and follows attacks on the EU’s social media law and digital competition rules.
The delay also exposes the reality that the rules are effectively a stopgap measure after EU legislators failed to settle some of the thorniest topics when they negotiated the binding AI Act in early 2024. The rules are voluntary, leading to a complicated dance between the EU and industry to land on something meaningful that companies will actually implement.
POLITICO walks you through how a technical process turned into a messy geopolitical lobbying fight — and where it goes from here.
1. What is the EU trying to do?
Brussels is trying to put guardrails around the most advanced AI models such as ChatGPT and Gemini. Since September, a group of 13 academics tasked by the Commission has been working on a “code of practice” for models that can perform a “wide range of distinct tasks.”
That initiative was inspired by ChatGPT’s rise to fame in late 2022. The instant popularity of a chatbot that could perform several tasks upon request, such as generating text, code and now also images and video, upended the bloc’s drafting of the AI Act.
Generative AI wasn’t a thing when the Commission first presented its AI Act proposal in 2021, which left regulators scrambling. “People were saying: we will not go through five more years to wait for a regulation, so let’s try to force generative AI into this Act,” Audrey Herblin-Stoop, a top lobbyist at French OpenAI rival Mistral, recalled at a panel last week.
EU legislators decided to include specific obligations in the act on “general-purpose AI,” a catch-all term that includes generative AI models like OpenAI’s GPT or Google’s Gemini.
The final text left it up to “codes of practice” to put meat on the bones.
2. What is in the code that was due May 2?
The 13 experts, including heavy hitters like Yoshua Bengio, a French Canadian computer scientist nicknamed the “godfather of AI,” and former European Parliament lawmaker Marietje Schaake, have worked on several thorny topics.
According to the latest draft, signatories would commit to disclosing relevant information about their models to authorities and customers, including the data being used to train them, and to drawing up a policy to comply with copyright rules.
Companies that develop a model that carries “systemic risks” also face a series of obligations to mitigate those risks.
The range of topics being discussed has drawn immense interest: Around 1,000 interested parties ranging from EU countries, lawmakers, leading AI companies, rightsholders and media to digital rights groups have weighed in on three different drafts.
3. What are the objections?
U.S. Big Tech companies, including Meta and Google, and their lobby group representatives have repeatedly warned that the code goes beyond what was agreed on in the AI Act.
Just last week, Microsoft President Brad Smith said “the code can be helpful” but warned that “if too many things [are] competing with each other … it’s not necessarily helpful.”
The companies also claim that those disputes over scope are the reason the deadline was missed.
“Months [were] lost to debates that went beyond the AI Act’s agreed scope, including [a] proposal explicitly rejected by EU legislators,” Boniface de Champris, senior policy manager at tech lobby group CCIA, told POLITICO.
Digital rights campaigners, copyright holders and lawmakers haven’t been impressed with Big Tech’s criticism.
“We have to ensure that the code of practice is not designed primarily to make AI model providers happy,” Brando Benifei, the Italian Social Democrat lawmaker who served as the Parliament’s lead negotiator on the AI Act, said in an interview, a clear hint that the Parliament doesn’t want a watered-down code.
Benifei was among a group of lawmakers who resisted a decision in March to remove “large-scale discrimination” from a list of risks in the code that AI companies must manage.
There have also been allegations of unfair lobbying tactics by U.S. Big Tech. Last week, two non-profit groups complained that “Big Tech enjoyed structural advantages.”
“A staggering amount of corporate lobbying is attempting to weaken not just the EU’s AI laws but also DMA and DSA,” said Ella Jakubowska, head of policy at European Digital Rights.
Tech lobby CCIA rejected that criticism, saying AI model providers are “the primary subjects of the code” but make up only 5 percent of the 1,000 interest groups involved in the drafting.
4. What has the U.S. government said?
The U.S. administration has been less public in its pushback against the EU’s AI rules than in its attacks on the EU’s social media law (the Digital Services Act) and the EU’s digital competition rules (the Digital Markets Act).
Behind the scenes, however, Washington has pushed back forcefully. The U.S. Mission to the EU filed feedback on the third draft of the code of practice in a letter to the European Commission echoing many of the concerns already aired by U.S. tech executives and lobby groups.
“Several elements in the code are not found in the AI Act,” the letter read.
The mission piggybacked on the European Commission’s own pivot toward focusing on AI innovation, and said that the code must be improved “to better enable AI innovation.”
5. How will this play out?
Ultimately, the success of the effort hinges on whether leading AI companies such as U.S.-based Meta, Google, OpenAI, Anthropic and French Mistral sign on to it.
That means the Commission needs to figure out how to publish something that meets its intentions while also being sufficiently palatable to Big Tech and the Trump administration.
The Commission has repeatedly stressed that the code is a voluntary tool for companies to demonstrate compliance with the AI Act — but more recently warned that life could be more complicated for companies that don’t sign it.
Those who do sign the code will “benefit from increased trust” by the Commission’s AI Office and “from reduced administrative burden,” said European Commission spokesperson Thomas Regnier.
Benifei too said that it’s “our challenge to make sure that the obligations behind the code are somehow applicable to those that don’t sign the code.”
Under the timelines set out in the AI Act, providers of the most complex AI models will have to abide by the new obligations, either through the code or otherwise, by Aug. 2.