Key Takeaways
- Vibe‑coding platforms (Lovable, Replit, Base44, Netlify) enable non‑technical users to build web apps quickly, often in a few hours.
- A security audit by RedAccess found that roughly 5,000 vibe‑coded applications had little or no authentication, allowing anyone with the correct URL to access data.
- In about 40% of the examined apps, sensitive information—such as hospital work assignments, corporate strategy decks, and financial records—was exposed.
- Researchers attribute the problem to the tools’ default behavior: unless explicitly instructed to enforce security, the AI‑driven builders do not add protective measures.
- Companies behind the platforms (Wix/Base44, Lovable) disputed the findings, saying the flagged apps were test sites or had been deliberately set to public, and that the report lacked the URLs needed to verify the claims.
- The incident highlights a growing tension between democratizing software development and maintaining baseline security hygiene for non‑expert creators.
The Rise of Vibe‑Coding
Vibe‑coding refers to a new wave of AI‑powered development tools that let people without programming experience create functional web applications by describing what they want in natural language. Platforms such as Lovable, Replit, Base44, and Netlify have marketed themselves as shortcuts for marketers, product managers, and hobbyists who need a prototype or internal tool in a matter of hours rather than weeks. The appeal lies in the elimination of traditional coding barriers: users can drag‑and‑drop components, rely on AI to generate backend logic, and deploy instantly to the cloud. As a result, adoption has surged, with thousands of apps being spun up each month across industries ranging from healthcare to finance.
RedAccess’s Security Audit
Cybersecurity firm RedAccess conducted a large‑scale audit of vibe‑coded applications after receiving tips from Wired. Researchers led by Dor Zvi scanned publicly accessible URLs associated with apps built on the four major platforms and identified approximately 5,000 web applications that exhibited “virtually no security or authentication of any kind.” In many cases, simply knowing the exact URL granted unrestricted access to the app’s interface and underlying data. Other apps employed only trivial safeguards—such as accepting any email address as a valid login—making them equally vulnerable to casual discovery or automated scraping.
Exposure of Sensitive Data
Perhaps the most alarming finding was that roughly 40% of the insecure apps contained sensitive information. Examples included hospital work assignments that revealed doctors’ personally identifiable information, a corporation’s go‑to‑market strategy presentation, and sales and financial records from various businesses. Because these apps lacked even basic access controls, anyone who stumbled upon the URL could view, download, or potentially misuse the data. The exposure of protected health information (PHI) raises particular concern under regulations such as HIPAA, while leaked corporate strategies could jeopardize competitive advantage.
Why Security Is Missing
Joel Margolis, another security researcher interviewed by Wired, explained that the root cause is not malice but a design gap in the AI‑driven builders. “Somebody from a marketing team wants to create a website. They’re not an engineer, and they probably have little to no security background or knowledge,” he said. Unless the user explicitly asks the AI to incorporate authentication, role‑based access, or data encryption, the platforms default to the simplest possible implementation—often a public‑facing static site with no backend protection. The AI models are trained to prioritize usability and speed, not to infer security requirements from vague user prompts.
Platform Vendors’ Responses
The companies behind the tools pushed back on RedAccess’s conclusions. Blake Brodie, a spokesperson for Wix (which owns Base44), told Axios that the research “deliberately withheld the URLs that would have allowed us to identify and examine the applications in question.” He added that many of the flagged apps had been “deliberately set to public by their owners,” suggesting that the exposure was intentional rather than a flaw in the platform. Brodie also noted that two examples shown to Wix appeared to be test sites or contained only AI‑generated placeholder data, implying low real‑world risk.
Samyutha Reddy, a spokesperson for Lovable, echoed similar sentiments, telling Axios that RedAccess’s report lacked the URLs or technical specifics needed for Lovable to verify, investigate, or remediate the alleged vulnerabilities. She said Lovable was nonetheless looking into the matter and would take appropriate action if genuine security gaps were confirmed.
Implications for Non‑Technical Creators
The episode underscores a broader challenge in the democratization of software development: empowering users to build applications quickly can inadvertently lower the security baseline if safeguards are not baked into the tooling. Non‑technical creators often lack the awareness to ask for authentication, encryption, or proper data handling, and the AI assistants they rely on do not proactively suggest those features unless prompted. Consequently, organizations that permit employees to use vibe‑coding tools for internal projects may unintentionally create shadow IT systems that expose sensitive data.
Recommendations for Safer Vibe‑Coding
To mitigate these risks, both platform providers and end‑users can take several steps:
- Secure‑by‑Default Templates – Platforms should offer pre‑configured templates that include baseline authentication (e.g., email verification, OAuth) and role‑based access controls, making the secure path the easiest one to follow.
- Explicit Security Prompts – When a user describes an app that will store or process personal or proprietary data, the AI should ask clarifying questions about desired security measures and suggest appropriate implementations.
- Automated Scanning – Vendors could integrate lightweight security scanners that flag public‑exposed endpoints, missing authentication, or insecure data storage before an app is published.
- User Education – Short, interactive tutorials on basic security concepts (e.g., why authentication matters, how to set up simple login flows) can help non‑technical builders make informed decisions.
- Clear Public‑vs‑Private Settings – Platforms should make it obvious when an app is set to public versus private, perhaps with visual warnings or required confirmation steps before deploying to a public URL.
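To make the "Automated Scanning" idea concrete, here is a minimal sketch of the kind of pre‑publish heuristic a platform could run: probe an app's endpoint anonymously and flag it when the response looks like ungated access to content. Everything here (the `Probe` shape, the header and body heuristics) is an illustrative assumption, not any vendor's actual scanner, and a real check would need far more signals than string matching.

```python
from dataclasses import dataclass, field


@dataclass
class Probe:
    """Result of one anonymous (credential-free) HTTP request to an app."""
    status: int                       # HTTP status code of the response
    headers: dict = field(default_factory=dict)  # lower-cased header names
    body_snippet: str = ""            # first few KB of the response body


def looks_unprotected(probe: Probe) -> bool:
    """Heuristic: the anonymous request succeeded and nothing in the
    response suggests a login gate is in front of the content."""
    if probe.status in (401, 403):
        # The server explicitly demands authentication — protected.
        return False
    if probe.status in (301, 302, 303, 307, 308):
        # A redirect to a login page is a (weak) sign of protection.
        target = probe.headers.get("location", "").lower()
        return "login" not in target and "signin" not in target
    if probe.status == 200:
        # A 200 that renders a login form is probably gated; a 200 that
        # serves data outright is the failure mode the audit describes.
        body = probe.body_snippet.lower()
        return 'type="password"' not in body and "sign in" not in body
    return False


# Example: an app serving records to anonymous visitors gets flagged,
# while one redirecting to a login page does not.
exposed = Probe(200, body_snippet="<table><tr><td>Dr. Smith, Ward 4</td></tr></table>")
gated = Probe(302, headers={"location": "https://example.com/login"})
print(looks_unprotected(exposed))  # True
print(looks_unprotected(gated))    # False
```

A production scanner would additionally check for things this sketch ignores: unauthenticated API routes behind the page, permissive CORS headers, and backends (databases, storage buckets) reachable without the frontend at all.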
Looking Ahead
As vibe‑coding continues to grow, the tension between accessibility and security will likely intensify. The RedAccess findings serve as a wake‑up call: enabling rapid development is valuable, but not at the expense of exposing confidential information. If platform vendors adopt the recommended safeguards and users become more security‑conscious, the promise of AI‑driven app creation can be fulfilled without compromising data protection. Conversely, if the current trend of minimal security persists, the ecosystem may see a rise in preventable data breaches, regulatory penalties, and erosion of trust in low‑code/no‑code solutions. The onus is now on both makers of the tools and the communities that use them to bake security into the very vibe that makes these platforms so appealing.

