Telegram has a big problem. According to a new Wired investigation, dozens of AI-powered chatbots have appeared on the messaging app, letting users create “pornographic” photos and videos of people with just a few clicks. The report says these “nudity AI bots” already attract more than 4 million users per month, and the problem is only likely to get worse.
This isn’t the first time we’ve seen AI chatbots put to these kinds of nefarious purposes, either. Far more popular tools like ChatGPT have been misused as well, but OpenAI and other companies have reliably patched those holes and added safeguards to keep content cleaner.
These deepfake bots, which let users create nude photos of almost anyone by uploading a photo or even typing a prompt, have skyrocketed in number, with Wired counting at least 50 at the moment. Deepfakes have long been an issue, even before AI became as “good” as it is now. Back in February, we saw scammers use AI deepfakes on a conference call to steal $25 million.
This kind of nudity-driven AI bot is a bit different, though, as it isn’t exactly trying to drain corporate bank accounts. Instead, it puts these tools directly in front of people on one of the most-used messaging apps in the world, making it easier than ever to create completely fabricated photos and videos of real people.
The difficult thing here is that stopping these nudity AI bots is nearly impossible. Kill one, and another is likely to pop up in its place, which makes eradicating the problem a losing battle and means it is likely only going to get worse.
There’s also an entirely separate question about why Telegram allows this kind of content on its platform at all, one that nobody seems able to answer at the moment. Yes, Telegram is responsible for how people use its platform. That much was made very clear when France arrested Telegram CEO Pavel Durov earlier this year.
But how exactly do you regulate that? And would regulating it keep others from innovating in the tech market? I suppose we’ll see.