President Trump wants to unleash American A.I. companies on the world. For the United States to win the unfolding A.I. arms race, his logic goes, tech companies should be unfettered by regulations and free to develop artificial intelligence technology as they generally see fit. He is convinced that the benefits of American supremacy in this technology outweigh the risks of ungoverned A.I., which experts warn could include heightened surveillance, disinformation or even an existential threat to humanity. This conviction is at the heart of the administration’s recently unveiled A.I. Action Plan, which looks to roll back red tape and onerous regulations that it says paralyze A.I. development.
But Mr. Trump can’t single-handedly protect American A.I. companies from regulation. Washington may be able to eliminate the rules of the road at home, but it can’t do so for the rest of the world. If American companies want to operate in international markets, they must follow the rules of those markets. That means that the European Union, an enormous market that is committed to regulating A.I., could well thwart Mr. Trump’s techno-optimist vision of a world dominated by self-regulated, free-market U.S. companies.
In the past, the E.U.’s digital regulations have resonated well beyond the continent, with technology companies extending those rules across their global operations in a phenomenon I have termed the Brussels Effect. Companies like Apple and Microsoft now broadly use the E.U.’s General Data Protection Regulation, which gives users more control over their data, as their global privacy standard in part because it is too costly and cumbersome for them to follow different privacy policies in each market. Other governments also often look to E.U. rules when drafting their own laws regulating the tech sector.
The same phenomenon could at least partly hold for A.I. technology. Over the past decade, the E.U. has put in place a number of regulations aimed at balancing A.I. innovation, transparency and accountability. Most important is the A.I. Act, the world’s first comprehensive and binding artificial intelligence law, which entered into force in August 2024. The act establishes guardrails against the possible risks of artificial intelligence, such as the loss of privacy, discrimination, disinformation and A.I. systems that could endanger human life if left unchecked. This law, for instance, restricts the use of facial recognition technology for surveillance and limits the use of potentially biased artificial intelligence for hiring or credit decisions. American developers looking to get access to the European market will have to comply with these rules and others.
Some companies are already pushing back. Meta has accused the E.U. of overreach and even sought the Trump administration’s help in opposing Europe’s regulatory ambitions. But other companies, such as OpenAI, Google and Microsoft, are signing on to Europe’s A.I. code of practice. These tech giants see an opportunity: Playing nice with the European Union could help build trust among users, pre-empt other regulatory challenges and streamline their policies around the world. Individual American states looking to govern A.I., too, could use E.U. rules as a template when writing their own bills, as California did when developing its privacy laws.
By holding its ground, Europe can steer global A.I. development toward models that protect fundamental rights, ensure fairness and don’t undermine democracy. Standing firm would also boost Europe’s tech sector by creating fairer competition between foreign and European A.I. firms, which have to abide by E.U. laws.
The post Trump’s Plans for A.I. Might Hit a Wall. Thank Europe. appeared first on New York Times.