For a great many of us, permanently hooked to our phones, doom-scrolling through social media has become the lens through which we’ve watched Israel’s war in Gaza unfold.
Holiday pictures and life updates from friends, colleagues and celebrities have often been punctuated with a hot take on Hamas and, for some, graphic images of bloodied victims.
But, most likely, the posts have been somewhat sporadic – so much so that activists on both sides have accused platforms of suppressing content relating to the war.
That is until this week, when a young man from Malaysia, whose fairly ordinary account shows him posing next to supercars and partying with friends, uploaded a simple AI-generated image to his Instagram stories.
Amid a desert landscape with snowy mountains in the background, it shows a sprawling mass of refugee tents neatly lined up spelling out the slogan “All eyes on Rafah”, a reference to the scene of a deadly Israeli airstrike in southern Gaza.
Within hours, the post went viral, quickly snowballing as it was shared by celebrities, from pop star Dua Lipa (88 million followers) to Bridgerton star Nicola Coughlan (5 million). By Wednesday, users were reporting their social media feeds had been overwhelmed by the image.
At first glance, the aerial shot looks like it could be real. But on closer inspection, several irregularities strongly suggest it has been AI-generated. The vast sprawl of tents is unnaturally symmetrical – repeated patterns are a key giveaway of AI imagery – and casts inconsistent shadows. Added to this, the setting itself – with no humans in sight for miles and snow-covered mountains in the background – in no way reflects Rafah, or anywhere in Gaza for that matter.
As it stands, the image has now been reposted by nearly 50 million people, putting it firmly in the pantheon of Instagram’s most viral posts, just shy of the number one spot occupied by Lionel Messi after Argentina won the 2022 World Cup.
It’s fair to assume, then, that the “All eyes on Rafah” post is unlikely to have slipped your attention. But it raises the question: why, among the 65,000 posts uploaded to the platform every minute, did this image manage to cut through so effectively?
Meta, which owns Instagram and Facebook, has been particularly tough in its policing of similarly controversial Israel-Gaza content since October 7, admitting it errs on the side of caution in many cases by simply taking it down. Yet, in this case, the post appears to have been amplified.
This has fuelled a deep unease that AI may be the cause. Experts say the image highlights how activists can now use the technology to create and share content that sends an explicit message while still obeying the platforms’ rules.
In this case, social media expert Matt Navarra told NBC the image may have got past any automated moderation because the text appeared in the image itself, helping it dodge any keyword detection algorithms.
Cybersecurity expert Professor Alan Woodward from the University of Surrey went a step further and questioned whether users were explicitly asking AI image generators to create content that could bypass the platforms’ policies.
He told The Telegraph he was “bemused” how the “All eyes on Rafah” message “got through whereas anything similar in nature seems to have been blocked”. He added: “Could it be that a large language model like GPT knows enough about social media moderation to be able to work within the boundary condition that states the result must pass moderation?”
The “All eyes on Rafah” post is not the only pro-Palestinian AI-generated post to go viral – nor is it the most controversial. Another one shows Israeli prime minister Benjamin Netanyahu in a blood-spattered prison uniform, overlaid with the words “war criminal”, “child killer” and “Satanyahu”. It has so far been shared more than 2.5 million times.
Meta has admitted it struggles to detect AI-generated content created by any models other than its own. In February, the social media giant’s president of global affairs, Sir Nick Clegg, announced both Instagram and Facebook would begin labelling every image that had been manipulated with AI “in the coming months” – but confessed its engineers had not yet developed the tools to do so.
In the meantime, AI-generated content appears to be proliferating on Meta’s platforms. Scammers and spammers have been profiting from it in particular, according to recent analysis by the Stanford Internet Observatory. Researchers found they were using the technology, which was “easy to generate [and] often visually sensational”, to gain huge followings or sell dodgy products on Facebook.
Sir Nick acknowledged users wanted “transparency”, but suggested they should also do their own due diligence by checking “whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural”. That appears to be easier said than done: recent research by software company ESET found only one in three Britons can tell whether an image is AI-generated.
While the “All eyes on Rafah” post may have bypassed automated systems, questions have been raised over how it passed human moderators, who review an image once users report it. Ann Koppuzha, a lawyer specialising in digital media, said this was likely because there was “nothing inherently offensive about the post – it’s not violent, graphic or overtly political. The post does not even mention Israel, Gaza or Palestine, which could be sensitive trigger words.”
Under their current policies, social media platforms have “limited” reason to take it down, she said, adding “there’s nothing objectively offensive about the content that violates Instagram’s content guidelines”.
But what about other more incendiary posts, including one colourful AI-generated post shared more than 5,000 times of a Palestinian keffiyeh next to a tank, with the words “From the river to the sea”? On its website, Meta – which declined to comment directly – says its “goal is to allow people to express themselves while still removing harmful content”. It adds: “If the user’s intent in sharing the content is unclear, we err on the side of safety and remove it.”
For Israel supporters, there is certainly a feeling of double standards. In a counter-campaign following the “All eyes on Rafah” post, more than half a million users shared an AI-generated image showing a gunman standing in front of a baby taken captive in Gaza, overlaid with the words “Where were your eyes on October 7?”.
But before it could gain more traction, it was reportedly taken down from the platform and the Israeli influencer who created it, Benjamin Jamon, had his account banned. The image later reappeared, with Meta admitting it had been “mistakenly removed” due to a “technical issue” and that it did not violate its policies.
To avoid these issues, Meta has been trying to tamp down political content across its platforms in recent months. In February, Instagram boss Adam Mosseri announced the platforms would “avoid recommending” political posts and no longer “proactively amplify” them from accounts users don’t follow.
Yet with AI seemingly able to circumvent social media algorithms, they may well be fighting a losing battle. “Quality AI images can be made easily in seconds and as pictures still tell a thousand words, it is clear why such images are created and used to spread a message,” says Jake Moore, global cybersecurity advisor at ESET.
“AI images tend to look digitally enhanced with sharp colours and deep contrasts, making them instantly stand out. But bake in a deep and meaningful current message and it is no wonder why it has been viewed so many times.
“It is likely we will see many more AI images go viral in the coming years, which will soon become the norm in being associated with the accompanying story.”
In other words, this is likely not the last time your social media feed will be overwhelmed by a single AI image.
The post How pro-Palestine activists are using AI to evade social media censorship appeared first on The Telegraph.