
Sometime last year, inquiries about a job listing began piling up in Ian Lamont's inbox. The Boston-based owner of a how-to guide company hadn't opened any new positions, but when he logged onto LinkedIn, he found one for a "Data Entry Clerk" linked to his business's name and logo.
Lamont soon realized his brand was being scammed, which he confirmed when he came across the profile of someone purporting to be his company's "manager." The account had fewer than a dozen connections and an AI-generated face. He spent the next few days warning visitors to his company's site about the scam and convincing LinkedIn to take down the fake profile and listing. By then, more than 20 people had reached out to him directly about the job, and he suspects many more had applied.
Generative AI’s potential to bolster business is staggering. According to one 2023 estimate from McKinsey, in the coming years it’s expected to add more value to the global economy annually than the entire GDP of the United Kingdom. At the same time, GenAI’s ability to almost instantaneously produce authentic-seeming content at mass scale has created the equally staggering potential to harm businesses.
Since ChatGPT's debut in 2022, online businesses have had to navigate a rapidly expanding deepfake economy, where it's increasingly difficult to discern whether any text, call, or email is real or a scam. In the past year alone, GenAI-enabled scams have quadrupled, according to the scam reporting platform Chainabuse. In a Nationwide insurance survey of small business owners last fall, a quarter reported having faced at least one AI scam in the past year. Microsoft says it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the "industrial revolution for scams": it automates frauds, lowers barriers to entry, reduces costs, and increases access to targets.
The consequences of falling for an AI-manipulated scam can be devastating. Last year, a finance clerk at the engineering firm Arup joined a video call with what he believed were his colleagues. It turned out that each of the attendees was a deepfake recreation of a real coworker, including the organization's chief financial officer. The fraudsters asked the clerk to approve overseas transfers amounting to more than $25 million, and assuming the request came from the CFO, he green-lit the transaction.
Business Insider spoke with professionals in several industries — including recruitment, graphic design, publishing, and healthcare — who are scrambling to keep themselves and their customers safe against AI’s ever-evolving threats. Many feel like they’re playing an endless game of whack-a-mole, and the moles are only multiplying and getting more cunning.
Last year, fraudsters used AI to build a French-language replica of Oishya, an online store selling Japanese knives, and sent automated scam offers to the company's 10,000-plus followers on Instagram. Posing as the real company, they told customers they had won a free knife and needed to pay only a small shipping fee to claim it — and nearly 100 people fell for it. Kamila Hankiewicz, who has run Oishya for nine years, learned about the scam only after several victims contacted her asking how long they needed to wait for the parcel to arrive.
It was a rude awakening for Hankiewicz. She’s since ramped up the company’s cybersecurity and now runs campaigns to teach customers how to spot fake communications. Though many of her customers were upset about getting defrauded, Hankiewicz helped them file reports with their financial institutions for refunds. Rattling as the experience was, “the incident actually strengthened our relationship with many customers who appreciated our proactive approach,” she says.
Rob Duncan, the VP of strategy at the cybersecurity firm Netcraft, isn’t surprised at the surge in personalized phishing attacks against small businesses like Oishya. GenAI tools now allow even a novice lone wolf with little technical know-how to clone a brand’s image and write flawless, convincing scam messages within minutes, he says. With cheap tools, “attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels,” Duncan says.
Though mainstream AI tools like ChatGPT have safeguards that kick in when you ask them to infringe copyright, there are now plenty of free or inexpensive online services that let users replicate a business's website with simple text prompts. Using a tool called Llama Press, I was able to produce a near-exact clone of Hankiewicz's store and personalize it with a few words of instruction. (Kody Kendall, Llama Press's founder, says cloning a store like Oishya's doesn't trigger a safety block because there can be legitimate reasons to do so, such as when a business owner is migrating their website to a new hosting platform. He adds that Llama Press relies on Anthropic's and OpenAI's built-in safety checks to weed out bad-faith requests.)
Text is just one front of the war businesses are fighting against malicious uses of AI. With the latest tools, it takes a solo adversary — again with no technical expertise — as little as an hour to create a convincing fake job candidate to attend a video interview.
Tatiana Becker, a tech recruiter based in New York, tells me deepfake job candidates have become an "epidemic." Over the past couple of years, she has had to frequently reject scam applicants who use deepfake avatars to cheat on interviews. At this point, she can discern some of the telltale signs of fakery, including glitchy video quality and a candidate's refusal to change any element of their appearance during the call, such as taking off their headphones. Now, at the start of every interview, she asks for the candidate's ID and poses more open-ended questions, like what they like to do in their free time, to suss out whether they're human. Ironically, she's made herself more robotic at the outset of interviews to sniff out the robots.
Nicole Yelland, a PR executive, says she found herself on the opposite end of deepfakery earlier this year. A scammer impersonating a startup recruiter approached her over email, saying he was looking for a head of comms and dangling an offer package with generous pay and benefits. The supposed recruiter even shared an exhaustive slide deck, decorated with AI-generated visuals, outlining the role's responsibilities and perks. Enticed, she scheduled an interview.
During the video meeting, however, the “hiring manager” refused to speak, and instead asked Yelland to type her responses to the written questions in the Microsoft Teams chat section. Her alarm bells really went off once the interviewer started asking her to share a series of private documents, including her driver’s license.
Yelland now runs a background check with tools like Spokeo before engaging with any stranger online. “It’s annoying and takes more time, but engaging with a spammer is more annoying and time-consuming; so this is where we are,” she says.
While videoconferencing platforms like Teams and Zoom are getting better at detecting AI-generated accounts, some experts say the detection itself risks creating a vicious cycle. The data these platforms collect on what's fake is ultimately used to train more sophisticated GenAI models, which in turn get better at evading fakery detectors, fueling "an arms race defenders cannot win," says Jasson Casey, the CEO of Beyond Identity, a cybersecurity firm that specializes in identity theft. Casey and his company believe the focus should instead be on authenticating a person's identity. Beyond Identity sells tools that can be plugged into Zoom to verify meeting participants through their device's biometrics and location data. If the system detects a discrepancy, it labels the participant's video feed as "unverified." Florian Tramèr, a computer science professor at ETH Zurich, agrees that authenticating identity will likely become more essential to ensure that you're always talking to a legitimate colleague.
It's not just fake job candidates that entrepreneurs now have to contend with; it's also fake versions of themselves. In late 2024, scammers ran ads on Facebook for a video featuring Jonathan Shaw, the deputy director of the Baker Heart and Diabetes Institute in Melbourne. Although the person in it looked and sounded exactly like Dr. Shaw, the voice had been deepfaked and edited to say that metformin — a first-line treatment for type 2 diabetes — is "dangerous," and that patients should instead switch to an unproven dietary supplement. The ad was accompanied by an equally fake written news interview with Shaw.
Several of his clinic’s patients, believing the video was genuine, reached out asking how to get a hold of the supplement. “One of my longstanding patients asked me how come I continued to prescribe metformin to him, when ‘I’ had said on the video that it was a poor drug,” Shaw tells me. Eventually he was able to get Facebook to take down the video.
Then there's the equally vexing issue of AI slop — the flood of low-quality, mass-produced images and text that is inundating the internet and making it ever harder for the average person to tell what's real and what's fake. In her research, DiResta found instances where social platforms' recommendation engines promoted malicious slop: scammers would put up images of nonexistent rental properties, appliances, and other goods, and users frequently fell for them and handed over their payment details.
On Pinterest, AI-generated "inspo" posts have plagued people's mood boards — so much so that Philadelphia-based Cake Life Shop now regularly receives orders from customers asking it to recreate what are actually AI-generated cakes. In one example shared with Business Insider, the cake resembles a moss-filled rainforest and features a functional waterfall. Thankfully for cofounder Nima Etemadi, most customers are "receptive to hearing about what is possible with real cake after we burst their AI bubble," he says.
Similarly, AI-generated books have swarmed Amazon and are now hurting publisher sales.
Pauline Frommer, the president of the travel guide publisher Frommer Media, says AI-generated guidebooks have managed to reach the top of Amazon's lists with the help of fake reviews. An AI publisher buys a few Prime memberships, sets the guidebook's ebook price to zero, and then downloads free copies through those accounts to leave reviews that appear "verified." These practices, she says, "will make it virtually impossible for a new, legitimate brand of guidebook to enter the business right now." Ian Lamont says he received an AI-generated guidebook as a gift last year: a text-only guide to Taiwan, with no pictures or maps.
While the FTC now considers it illegal to publish fake, AI-generated product reviews, official policies haven't yet caught up with AI-generated content itself. Platforms like Pinterest and Google have started to watermark and label AI-generated posts, but since the labeling isn't yet error-free, some worry these measures may do more harm than good. DiResta fears that one unintended consequence of ubiquitous AI labels would be "label fatigue," where people blindly assume that unlabeled content is therefore always "real." "It's a potentially dangerous assumption if a sophisticated manipulator, like a state actor's intelligence service, manages to get disinformation content past a labeler," she says.
For now, small business owners should stay vigilant, says Robin Pugh, the executive director of Intelligence for Good, a nonprofit that helps victims of internet-enabled crimes. They should always verify that they're dealing with an actual human and that any money they send is actually going where they intend.
Etemadi of Cake Life Shop recognizes that for as much as GenAI can help his business become more efficient, scam artists will ultimately use the same tools to become just as efficient. “Doing business online gets more necessary and high risk every year,” he says. “AI is just part of that.”
Shubham Agarwal is a freelance technology journalist from Ahmedabad, India, whose work has appeared in Wired, The Verge, Fast Company, and more.