As AI increasingly takes over the work of modern programmers, the cybersecurity world has warned that automated coding tools are sure to introduce a new bounty of hackable bugs into software. When those same vibe-coding tools invite anyone to create applications hosted on the web with a click, however, it turns out the security implications go beyond bugs to a total absence of any security—even, sometimes, for highly sensitive corporate and personal data.
Security researcher Dor Zvi and his team at the cybersecurity firm he cofounded, RedAccess, analyzed thousands of vibe-coded web applications created using the AI software development tools Lovable, Replit, Base44, and Netlify and found more than 5,000 of them that had virtually no security or authentication of any kind. Many of these web apps allowed anyone who merely finds their web URL to access the apps and their data. Others had only trivial barriers to that access, such as requiring that a visitor sign in with any email address. Around 40 percent of the apps exposed sensitive data, Zvi says, including medical information, financial data, corporate presentations, and strategy documents, as well as detailed logs of customer conversations with chatbots.
“The end result is that organizations are actually leaking private data through vibe-coding applications,” says Zvi. “This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world.”
Zvi says RedAccess’ scouring for vulnerable web apps was surprisingly easy. Lovable, Replit, Base44, and Netlify all allow users to host their web apps on those AI companies’ own domains, rather than the users’. So the researchers used straightforward Google and Bing searches for those AI companies’ domains combined with other search terms to identify thousands of apps that had been vibe coded with the companies’ tools.
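The technique the researchers describe is a classic search-engine "dork": restricting results to a hosting provider's domain and pairing it with keywords likely to surface sensitive apps. A minimal sketch of how such queries could be assembled is below; the domain names and keywords are illustrative assumptions, not RedAccess's actual query list.

```python
# Hypothetical sketch of the search-engine "dorking" approach RedAccess
# describes: combine a site: filter for each AI platform's hosting domain
# with keywords suggesting sensitive content. Domains and keywords here
# are assumptions for illustration only.

HOSTING_DOMAINS = [
    "lovable.app",   # assumed Lovable hosting domain
    "replit.app",    # assumed Replit hosting domain
    "base44.app",    # assumed Base44 hosting domain
    "netlify.app",   # Netlify's hosting domain
]

SENSITIVE_KEYWORDS = ["admin dashboard", "invoice", "patient", "customer list"]

def build_dork_queries(domains, keywords):
    """Return one site:-restricted query per (domain, keyword) pair."""
    return [f'site:{domain} "{keyword}"'
            for domain in domains
            for keyword in keywords]

queries = build_dork_queries(HOSTING_DOMAINS, SENSITIVE_KEYWORDS)
print(len(queries))   # 4 domains x 4 keywords = 16 queries
print(queries[0])     # site:lovable.app "admin dashboard"
```

Each query would then be pasted into Google or Bing by hand (or fed to a search API), which is why Zvi describes the scouring as "surprisingly easy": no scanning infrastructure is required, only public search indexes.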
Of the 5,000 AI-coded apps that Zvi says were left publicly accessible to anyone who simply typed their URLs into a browser, he found close to 2,000 that, upon closer inspection, seemed to reveal private data. Screenshots of web apps he shared with WIRED (several of which WIRED verified were still online and exposed) showed what appeared to be a hospital's work assignments including the personally identifiable information of doctors; a company's detailed ad purchasing information; what appeared to be another firm's go-to-market strategy presentation; a retailer's full logs of its chatbot's conversations with customers, including the customers' full names and contact information; a shipping firm's cargo records; and assorted sales and financial records from a variety of other companies. In some cases, Zvi says, he found that the exposed apps would have allowed him to gain administrative privileges over systems and even remove other administrators.
In the case of Lovable, Zvi says he also found numerous examples of phishing sites that impersonated major corporations, including Bank of America, Costco, FedEx, Trader Joe’s, and McDonald’s, that appeared to have been created with the AI coding tool and hosted on Lovable’s domain.
When WIRED asked the four AI coding companies about RedAccess’ findings, Netlify didn’t respond, but the three other companies pushed back on the researchers’ claims, protesting that RedAccess hadn’t shared enough of its findings or given them enough time to respond. (RedAccess says it reached out to the companies on Monday.) None of them, however, denied that the web apps RedAccess found were left exposed.
“From the limited information they shared, [RedAccess’s] core claim appears to be that some users have published apps on the open web that should’ve been private,” Replit’s CEO Amjad Masad wrote in a response post on X. “Replit allows users to choose whether apps are public or private. Public apps being accessible on the internet is expected behavior. Privacy settings can be changed at any time with a single click.”
A spokesperson for Lovable responded in a statement that “Lovable takes reports of exposed data and phishing sites seriously, and we’re actively working to obtain what we need to investigate. We’re treating this as an ongoing matter. It’s also worth noting that Lovable gives builders the tools to build securely, but how an app is configured is ultimately the creator’s responsibility.”
Blake Brodie, the head of public relations for Base44’s parent company, Wix, wrote in a statement that “Base44 provides users with robust tools to configure their own applications’ security, including access controls and visibility settings.” She added that “disabling those controls is a deliberate, straightforward action, any user can do it. Where applications were publicly accessible, that reflects a user configuration choice, not a platform vulnerability.”
Brodie also noted that “it is trivially easy to fabricate applications that appear to contain real user data. Without a single verified example provided to us, we have no way to assess the validity of these claims.” RedAccess, for its part, disputed that it hadn’t provided examples to Base44.
Zvi notes that for a few dozen exposed web apps, RedAccess went so far as to contact the apps’ apparent owners, who confirmed that data had been exposed. RedAccess also shared with WIRED anonymized communications in several cases that showed Base44 users thanking the researchers for alerting them to exposed web apps, which were then secured or taken offline.
Verifying whether real data has been exposed on any particular unsecured AI-coded web app can be tricky, says Joel Margolis, a security researcher who, along with a colleague, recently discovered that an AI chat toy had exposed 50,000 conversations the toy had with children on a website with virtually no security. Data in a vibe-coded web app might be a placeholder, he says, or the app might be just a proof of concept. Wix’s Brodie argued that two examples that WIRED shared with Base44 did appear to be test sites or have AI-generated data.
For the web apps WIRED reviewed, we couldn’t confirm that the personal or corporate data was as sensitive—or real—as it appeared to be.
Margolis nonetheless says that the problem of AI-built web apps exposing data is very real. He says he frequently comes across exactly the sort of exposures that Zvi cataloged. “Somebody from a marketing team wants to create a website. They’re not an engineer and they probably have little to no security background or knowledge,” Margolis says. AI coding tools, he says, “do what you ask them to do. And unless you ask them to do it securely, they’re not going to go out of their way to do that.”
Zvi points out that the 5,000 exposed apps RedAccess found were only those hosted on the AI coding tools’ own domains, and that likely thousands more are hosted on users’ own purchased domains. He compares the ongoing deluge of data exposures resulting from companies’ unsecured AI-coded web apps to the epidemic of exposed data created by misconfigured Amazon S3 storage buckets in earlier years. Companies from Verizon to World Wrestling Entertainment accidentally exposed reams of sensitive data due to misconfigurations in their instances of Amazon’s cloud storage service. Yet many in the cybersecurity industry also partially blamed Amazon for confusing security settings that led so many customers to make the same mistakes.
AI web-app coding tools are now creating a wave of data exposures, the result of a similar combination of user error and lack of safeguards, Zvi says. Yet more fundamental than any particular security failing on the part of the AI coding companies, he argues, is simply that these tools allow a new class of people within organizations to create applications—often with little security awareness and outside the usual software development processes that companies use to vet applications before they’re released.
“Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check,” Zvi says. “People can just start using it in production without asking anyone. And they do.”
The post Thousands of Vibe-Coded Apps Expose Corporate and Personal Data on the Open Web appeared first on Wired.