The AI Action Summit in Paris is one of the most important events of the year, as elected officials and tech executives meet to discuss the future of AI and its regulation.
It’s why Sam Altman penned a hopeful vision of ChatGPT’s near and distant future, and of what happens when AGI and AI agents start taking jobs and impacting your life in more meaningful ways. It’s a vision that’s too hopeful, according to an analysis from ChatGPT itself, which highlighted Altman’s downplaying of the risks associated with the rise of AI.
AI must be safe for humans, especially once it reaches AGI and superintelligence. Unsurprisingly, one of the points of the AI Action Summit was to sign an international statement on safe AI development.
The US and UK declined to sign the document, although other participants were not as reluctant. Even China is among the signatories who pledged to adhere to “open,” “inclusive,” and “ethical” approaches to developing AI products.
Is it good or bad that the US and UK refrained from inking the statement?
The representatives of the two countries have not explained their decision. While America’s stance isn’t exactly shocking, the UK’s approach is more puzzling, especially considering a recent survey in the country showing that Brits are genuinely concerned about the dangers of AI, particularly the more intelligent kind.
Before the joint statement, Vice President JD Vance made clear to everyone that the US doesn’t want too much regulation. Per the BBC, Vance warned that AI regulation could “kill a transformative industry just as it’s taking off.”
AI was “an opportunity that the Trump administration will not squander,” Vance said, adding that “pro-growth AI policies” should come before safety. Regulation should foster AI development rather than “strangle it.” The VP told European leaders they especially should “look to this new frontier with optimism, rather than trepidation.”
Meanwhile, French President Emmanuel Macron took the opposite stance: “We need these rules for AI to move forward.”
However, Macron also seemed to normalize AI-generated deepfakes while promoting the AI Action Summit a few days earlier. He posted clips on social media showing his face inserted into all sorts of videos, including the TV show MacGyver.
As a longtime ChatGPT Plus user in Europe who can’t access the latest OpenAI innovations as soon as they’re available in the US because of local EU regulations, I find it disturbing to see Macron use AI fakes to promote an event where AI safety and regulation are top priorities.
Of all the AI products available now, AI-generated images and videos are the worst, as far as I’m concerned. They can be used to mislead unsuspecting people with incredible ease. Any AI safety framework should absolutely address that.
None of that makes the US and UK’s refusal to sign the document any less troubling. If you were worried about OpenAI shedding AI safety engineer after AI safety engineer in recent months, hearing Vance promote AI deregulation as national policy is disturbing.
It’s not like OpenAI and other AI firms will usher in AIs that will ultimately destroy the human race in the near future. But some guardrails have to exist.
Then again, the AI Action Summit’s declaration isn’t an enforceable regulation but more of a cordial agreement. It sounds good to say your country will develop “open,” “inclusive,” and “ethical” AI after the Paris event, but it’s not a guarantee.
China signing the agreement is the best example of that. There’s nothing ethical about DeepSeek’s real-time censorship that happens if you try to talk to the AI about topics that the Chinese government deems too sensitive to discuss.
DeepSeek isn’t safe either if databases containing plain-text user content can be hacked, and if DeepSeek user data is sent over the web to Chinese servers unencrypted. Also, DeepSeek can help with more nefarious user requests, making it less safe than alternatives.
In other words, we’ll need more events like the Paris AI Action Summit in the coming years for the world to get on the same page about what AI safety means and how to actually enforce it. The risk is that super-advanced AI will escape human control in the future and act in its own interest, like in the movies.
Then again, anyone with the right hardware could develop super-advanced AI at home and accidentally create a misaligned intelligence, regardless of what accords are signed internationally and whether they’re enforceable.
The post US and UK refuse to sign AI safety agreement in Paris appeared first on BGR.