BRUSSELS — Brussels has handed the world’s leading artificial intelligence companies a tricky summer dilemma.
OpenAI, Google, Meta and others must decide in the coming days and weeks whether to sign up to a voluntary set of rules that will ensure they comply with the bloc’s stringent AI laws — or refuse to sign and face closer scrutiny from the European Commission.
Amid live concerns about the negative impacts of generative AI models such as Grok or ChatGPT, the Commission on Thursday took its latest step to limit those risks by publishing a voluntary set of rules instructing companies on how to comply with new EU law.
The final guidance handed clear wins to European Parliament lawmakers and civil society groups that had sought a strong set of rules, even after companies such as Meta and Google had lambasted previous iterations of the text and tried to get it watered down.
That puts companies in a tough spot.
New EU laws will require them to document the data used to train their models and address the most serious AI risks as of Aug. 2.
They must decide whether to use guidance developed by academic experts under the watch of the Commission to meet these requirements, or get ready to convince the Commission they comply in other ways.
Companies that sign up for the rules will “benefit from more legal certainty and reduced administrative burden,” Commission spokesperson Thomas Regnier told reporters on Thursday.
French AI company Mistral on Thursday became the first to announce it would sign on the dotted line.
Win for transparency
Work on the so-called code of practice began in September, as an extension of the bloc’s AI rulebook that became law in August 2024.
Thirteen experts embarked on a process focused on three areas: the transparency AI companies need to show to regulators and customers who use their models; how they will comply with EU copyright law; and how they plan to address the most serious risks of AI.
The proceedings quickly boiled down to a few key points of contention.
Industry repeatedly emphasized that the guidance should not go beyond the general direction of the AI Act, while campaigners complained the rules were at risk of being watered down amid intense industry lobbying.
On Wednesday, European Parliament lawmakers said they had “great concern” about “the last-minute removal of key areas of the code of practice,” such as requiring companies to be publicly transparent about their safety and security measures and “the weakening of risk assessment and mitigation provisions.”
In the final text put forward on Thursday, the Commission’s experts handed lawmakers a win by explicitly mentioning the “risk to fundamental rights” on a list of risks that companies are asked to consider.
Laura Lázaro Cabrera of the Center for Democracy and Technology, a civil rights group, said it was “a positive step forward.”
Public transparency was also addressed: the text says companies will have to “publish a summarised version” of the reports filed to regulators before putting a model on the market.
Google spokesperson Mathilde Méchin said the company was “looking forward to reviewing the code and sharing our views.”
Big Tech lobby group CCIA, which includes Meta and Google among its members, was more critical, stating that the code “still imposes a disproportionate burden on AI providers.”
“Without meaningful improvements, signatories remain at a disadvantage compared to non-signatories,” said Boniface de Champris, senior policy manager at CCIA Europe.
He criticized “overly prescriptive” safety and security measures and slammed the copyright section for introducing “new disproportionate measures outside the Act’s remit.”
Sour climate
A sour climate around the EU’s AI regulations and the drafting process for the guidance will likely affect tech companies’ calculations on how to respond.
“The process for the code has so far not been well managed,” said Finnish European Parliament lawmaker Aura Salla, a conservative politician and former lobbyist for Meta, ahead of Thursday’s announcement.
The thirteen experts produced four drafts over nine months, a process that drew more than 1,000 participants and was debated across several rounds of plenaries and four working groups — sessions often held in the evenings, since some of the experts were based in the U.S. or Canada.
The Commission’s Regnier applauded the process as “inclusive,” but both industry and civil society groups said they felt they had not been heard.
The U.S. tech companies that must now decide whether to sign the code have also been openly critical of other parts of the EU’s AI regulation.
Tech lobby groups, such as the CCIA, were among the first to call for a pause on the parts of the EU’s AI Act that had not yet been implemented — specifically, obligations for companies deploying high-risk AI systems, which are set to take effect next year.