Lovable left AI prompts exposed—what users should do now

A researcher says Lovable’s API allowed access to other users’ code, credentials, and chat histories tied to AI prompts—raising urgent trust and security questions.
A researcher says Lovable’s platform inadvertently allowed other users to view AI prompt chats tied to public projects through an API.
That claim matters because vibe-coding tools don't just process text; they often ingest sensitive business context. When prompts, chats, and generated artifacts are exposed, the leak can reveal more than "ideas." It can expose system design, database structures, and sometimes secrets embedded during development.
Lovable declined to name an executive for comment, but its response tells a different story from the initial "not a data breach" messaging. The company later described how permission logic in its backend was unclear and how, during a specific window, chats on public projects were re-enabled by mistake.
The reported exposure: code and chat histories via API
The researcher, posting under the handle @weezerOSINT, reported that after creating an account, they could access another user's source code, database credentials, AI chat histories, and customer data tied to that user's projects. The allegation included a screenshot showing project code and chat content, alongside what the researcher described as an unresolved issue.
In follow-up discussion, the researcher said it took roughly half an hour to validate the problem using xAI’s Grok 4.2 model. They also claimed that without AI-assisted discovery, locating similar exposures would have taken hours or days.
From a business-risk perspective, the key detail isn't only that "public projects" were visible; it's that an API pathway appeared to make access easier than a normal interface would. For platforms built on developer collaboration, the difference between "you can see something in the UI" and "you can retrieve everything through an API" is often where trust breaks, as the sketch below illustrates.
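To make that gap concrete, here is a minimal Python sketch using invented names and an in-memory store, not anything from Lovable's actual codebase. It shows how an API handler can serialize a whole project record, chats included, when the only check it performs is project visibility, while the UI simply never renders the chat data:

```python
# Hypothetical sketch (not Lovable's code): authorization enforced only by
# what the UI chooses to render, while the API returns the full record.

PROJECTS = {
    "proj-1": {
        "owner": "alice",
        "visibility": "public",
        "code": "print('hello')",
        "chats": ["prompt: connect to db with password hunter2"],
    },
}

def ui_view(project_id: str) -> dict:
    """What the web UI renders for a public project: code only."""
    project = PROJECTS[project_id]
    if project["visibility"] != "public":
        raise PermissionError("private project")
    return {"code": project["code"]}  # chats are simply never rendered

def api_get_project(project_id: str, requester: str) -> dict:
    """A flawed API handler: the check stops at 'is it public?',
    so the chat history rides along with the code."""
    project = PROJECTS[project_id]
    if project["visibility"] != "public" and requester != project["owner"]:
        raise PermissionError("private project")
    return project  # BUG: serializes chats the UI never exposed

print(ui_view("proj-1"))                     # code only
print(api_get_project("proj-1", "mallory"))  # code AND chat history
```

The point is architectural: when filtering happens in the frontend, the API is the real permission boundary, and anything a handler returns is effectively public.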
Lovable’s explanation: permission confusion and a backend patch
Lovable initially told users that no "data breach" had occurred and argued that exposure of project code was "intentional behavior" when projects were marked public. The company's clarification emphasized that "public" settings mean other users can view code, and Lovable said its documentation of what those settings imply was unclear.
But Lovable's later post focused on a narrower failure: the exposure of chats and prompts. The company said it retroactively patched its API so that public project chats could not be accessed "no matter what." It also stated that during February, while unifying permissions in its backend, it accidentally re-enabled access to chats on public projects. A rule of that shape is sketched below.
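Lovable has not published its permission code, so the following is only an assumed illustration of what a "no matter what" rule could look like: chat access is denied to non-owners regardless of project visibility, while code access still follows the public/private setting:

```python
# Hypothetical sketch of the rule Lovable describes, with invented names:
# chats are owner-only even on public projects; code follows visibility.

def can_access(resource: str, visibility: str, requester: str, owner: str) -> bool:
    if requester == owner:
        return True                    # owners see everything they created
    if resource == "chat":
        return False                   # chats: deny non-owners, always
    if resource == "code":
        return visibility == "public"  # code follows the project setting
    return False                       # deny-by-default for unknown types

assert can_access("code", "public", "mallory", "alice") is True
assert can_access("chat", "public", "mallory", "alice") is False  # the fix
assert can_access("chat", "public", "alice", "alice") is True
```

A deny-by-default structure like this is also what guards against the refactoring mistake Lovable described: unifying permission logic can't silently re-enable a resource type unless someone adds an explicit allow rule for it.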
That sequence of unclear intent, then a permissions mistake, then a retroactive fix fits a pattern common in fast-moving AI startups: product behavior evolves, permissions logic is refactored, and edge cases become visible when real users stress-test the system. In practice, the risk is amplified for vibe-coding because prompts can function like development notes. Users may include environment assumptions, schema details, and troubleshooting steps that never appear in formal code comments.
Why AI prompt leaks are a bigger problem than most users realize
Most people think of data exposure as tables and files. For AI-assisted development platforms, the “data” can include the conversational trail of how a feature was built—what the model was asked to do, what constraints the developer tried, and what the system returned before adjustments.
That is why the conversation around Lovable's incident shouldn't stop at "is it a breach?" The operational question for businesses is whether a user's AI prompts can contain sensitive information. If a prompt includes credentials, customer context, or internal logic, the leak becomes self-reinforcing: the model may generate additional artifacts that carry the same secrets forward. A lightweight guard against that failure mode is sketched below.
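One practical mitigation is to screen prompts before they leave the user's machine. The sketch below is a minimal, self-contained example; the patterns and the scan_prompt helper are illustrative choices, not part of any vendor's tooling:

```python
# Minimal sketch of a pre-submission prompt check: flag likely secrets
# before a prompt is sent to any AI coding tool. Patterns are examples only.
import re

SECRET_PATTERNS = [
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), "password"),
    (re.compile(r"(?i)postgres(ql)?://\S+:\S+@"), "database connection string"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "AWS access key id"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "API key"),
]

def scan_prompt(prompt: str) -> list[str]:
    """Return human-readable findings; an empty list means no match."""
    return [label for pattern, label in SECRET_PATTERNS if pattern.search(prompt)]

prompt = "Fix my login: postgres://admin:hunter2@db.internal/app keeps timing out"
findings = scan_prompt(prompt)
if findings:
    print("Refusing to send prompt; possible secrets:", ", ".join(findings))
```

Pattern lists like this are never exhaustive, but even a crude filter catches the most damaging habit: pasting live connection strings into a chat that may later be shared.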
For product teams, the human impact is immediate. A developer might share a project publicly to get feedback or demonstrate a feature, only to discover later that the chat history functioned like a notebook. If that notebook included anything confidential, such as customer identifiers, database connection strings, or internal requirements, the exposure becomes a compliance and reputational issue, not just a technical bug.
What users and companies should do next
Lovable says it made all new projects "private by default" for all users in December 2025. That's an important mitigation step, but it doesn't address one lingering reality: projects created earlier may have different visibility behavior, depending on how fixes were applied and how long the vulnerable window lasted.
For users, the practical checklist is straightforward even without knowing every technical detail: review which projects are public, avoid pasting secrets into AI prompts, and treat any AI chat history as potentially sensitive documentation. For teams adopting vibe-coding at work, internal policy should be equally clear about what can and cannot be placed into prompts and which environments are safe. A simple audit along those lines is sketched below.
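As a rough starting point, the sketch below audits a locally exported project list for public visibility. The JSON shape ("name", "visibility", "chats") and the file name are assumptions for illustration; whatever export a given platform actually provides may look different:

```python
# Hedged sketch: flag public projects in a locally exported JSON list.
# The export format here is assumed, not a documented platform feature.
import json

def audit(export_path: str) -> None:
    with open(export_path) as f:
        projects = json.load(f)
    for project in projects:
        if project.get("visibility") == "public":
            print(f"PUBLIC: {project['name']} "
                  f"({len(project.get('chats', []))} chat messages to review)")

audit("projects_export.json")  # assumed file; substitute your real export
```

Combined with a secret scan over those chat messages, a periodic audit like this turns "treat chat history as sensitive" from advice into a routine.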
From a broader market standpoint, incidents like this test whether AI developer tools can earn enterprise trust. Funding and valuation signal growth, but security reliability is what converts curiosity into procurement. Investors may want to see stronger permission models, better prompt-sanitization practices, and faster, more transparent incident handling.
Lovable's most recent funding round came in December 2025, raising $330 million and placing the company's valuation at $6.6 billion. At that scale, the expectation is not only to ship features quickly; it's to prevent edge-case exposures that can undermine the very collaboration model that makes vibe-coding attractive.