The viral AI-generated image showing an explosion near the Pentagon is “truly the tip of the iceberg of what’s to come,” a CEO who works in image authenticity detection has warned.
The image, which was largely thought to have been created using AI, quickly spread on social media last month and even caused the stock market to briefly dip.
“We’re going to see a lot more AI-generated content start to surface on social media, and we’re just not prepared for it,” Jeffrey McGregor, the CEO of Truepic, told CNN.
AI image-generation sites such as DALL-E, Midjourney, and Stable Diffusion have boomed in popularity in recent months. Users can prompt the sites to create artwork in the style of a particular artist, raising concerns about ownership and copyright, or to depict events that never happened. Several such deepfake images have gone viral, including ones showing former President Donald Trump being arrested.
Earlier this year, a photographer sparked debate about whether AI-generated images can be classed as art after an image he created using DALL-E 2 won a major international photography competition.
The photographer told Insider that judges hadn’t managed to spot that the image was a fake. “It has all the flaws of AI, and it could have been spotted but it wasn’t,” he said.
It’s not just AI-generated images that are being used to deceive people. Trolls have used voice-cloning technology to mimic the voices of celebrities including Joe Rogan, Ben Shapiro, and Emma Watson, and scammers have used it to trick people into handing over money they believe is going to a relative, or even to stage fake kidnappings.
Sites such as GPTZero have been developed to help detect whether text was written by AI chatbots like ChatGPT. Some professors have been putting their students’ essays through AI-detection services.
“When anything can be faked, everything can be fake,” McGregor told CNN. “Knowing that generative AI has reached this tipping point in quality and accessibility, we no longer know what reality is when we’re online.”
Ben Colman, the CEO of Reality Defender, which says it can detect AI-generated images, video, and audio, told CNN that one of the reasons that fake images were spreading online was that “anybody can do this.”
“You don’t need a PhD in computer science. You don’t need to spin up servers on Amazon. You don’t need to know how to write ransomware,” he told the outlet. “Anybody can do this just by Googling ‘fake face generator.’”