LONDON — The hints first came in Paris. Rumors were circulating over the two-day AI Action Summit in the city this week that Britain would recast its landmark AI Safety Institute (AISI) as a “security institute.”
British government officials had also cited an absence of references to “national security” as their reason for not signing the summit’s declaration, positioning Britain in lockstep with the United States.
Their language dovetailed with a major speech from U.S. Vice President JD Vance in Paris on Tuesday, setting out the administration’s thinking on tech geopolitics.
“The AI future is not going to be won by hand-wringing about safety,” Vance said.
Britain was listening. The country’s science and tech ministry said on Friday that its AI Safety Institute, the world’s first, would focus on “serious AI risks with security implications” and change its name.
The AI Security Institute will focus on cybersecurity and partner with the Ministry of Defence to mitigate biosecurity risks, while also working with the Home Office on fraud and the use of AI to create child abuse images.
Henry de Zoete, former AI adviser to the U.K. government and now senior adviser at the Oxford Martin AI Governance Initiative, said: “The U.K. AISI leads the world on testing powerful AI models for national security risks. It’s great to see it double down on this approach.”
Technology Secretary Peter Kyle described it as the “logical next step” for the AISI — and insisted its work wouldn’t change.
But the institute has quietly made subtle changes to how it refers to its work.
Bye-bye bias
In his speech in Paris this week, Vance insisted that “AI must remain free from ideological bias” and “never restrict our citizens’ right to free speech.”
On its website the institute has now dropped talk of “societal impacts” as a reason for evaluating models, changing it to “societal resilience.” References to the risk of AI creating “unequal outcomes” and “harming individual welfare” have also gone.
The institute has also dropped “public accountability” as a reason for evaluating models, changing it to keeping the “public safe and secure.”
That move is already attracting scrutiny.
“We are pleased to see how seriously the U.K. is taking criminal misuse of AI,” said Elizabeth Seger, director of digital policy at think tank Demos. “However we are deeply concerned that any attention to bias in AI applications has been explicitly cut out of the new AISI’s scope.”
“We ask the government to clarify where, if not AISI, harms to U.K. citizens from bias and discrimination will be tackled,” Seger added.
Michael Birtwistle, associate director at the Ada Lovelace Institute, said it risked leaving a “whole range of harms to people and society unaddressed,” which the AISI had previously committed to tackling.
“The most significant and recurring AI scandals relate heavily to bias, from Australian robo-debt, to the Dutch welfare algorithm, to the Ofqual exams algorithm. There is a real risk that inaction on risks like bias will lead to public opinion turning against AI and the U.K. missing out on its benefits.”
The U.K. government did not immediately respond to a request for comment on the changes.
Overshadowed
On the morning media round Friday, Kyle got little chance to talk about his department’s announcement. Instead the conversation was dominated by Ukraine’s future and tariffs.
U.S. President Donald Trump signed a presidential memorandum Thursday moving the U.S. one step closer to a “reciprocal” tariff system. This could see Britain hit with tariffs of up to 21 percent on exports if, as Trump’s trade adviser Peter Navarro indicated last night, the administration treats VAT as a tax on imports.
Kyle said: “We need a government with cool, clear thinking at times like this… We will assess any changes and challenges that come down the line from any part of the global economy, and we will act appropriately and in the best interest of Britain.”
As POLITICO first reported earlier this month, the thinking inside the U.K. government has changed on how to bring forward new AI legislation. Ministers have dropped language about forcing AI companies to give the AISI pre-release access for testing, which was also the subject of industry resistance.
The last time Kyle mentioned the legislation publicly was the morning after Trump was elected.