Last Updated: March 28, 2026
Quick Answer: Two major AI companion breaches in 2025 exposed 300 million user messages and more than 400,000 accounts. The cover-up mechanisms are familiar to anyone who has covered the tech industry: delayed disclosure, vague remediation statements, and a calculated bet that user attention spans are short. Here is what actually happened, and what it means for the platforms still standing.
The Short Version
- Character AI confirmed 300 million messages exposed, with disclosure delayed 11+ days
- Chattee/GiMe Chat confirmed 400,000+ accounts exposed, buried in a FAQ update
- The industry’s response playbook is standard tech PR crisis management: minimize, delay, normalize
- Corporate structure and jurisdiction are the real privacy guarantors, not privacy policies
- SpicyChat AI's accountability structure is opaque enough to make legal recourse nearly impossible
- Sugarlab AI is building differently, but the proof will be in the breach it hopefully never has
- The FTC is circling, and platforms with no US corporate anchor are most exposed
The Playbook Is Not New
Every major consumer data breach in the last fifteen years has followed the same script. And the AI companion industry’s 2025 wave followed it precisely.
Step one: discovery. The company detects anomalous data access, usually via an internal security alert or a bug bounty report.
Step two: internal assessment. Legal, engineering, and PR teams spend days determining scope. This is the period during which the company knows and the public does not.
Step three: delayed public disclosure. GDPR gives companies 72 hours to notify regulators once a breach is confirmed; most US state breach laws, California's included, require notifying affected users "in the most expedient time possible and without unreasonable delay," and some states impose hard deadlines of 30 to 45 days. In the Character AI incident, disclosure came approximately 11 days after internal confirmation. In the Chattee incident, notification was never sent directly to users at all.
Step four: minimal remediation statement. “We take security seriously.” “We have strengthened our systems.” “We have notified affected users.” Each of these phrases is a legal artifact designed to demonstrate compliance while committing to as little as possible.
Step five: the news cycle moves on. It almost always does.
What makes the AI companion version of this playbook particularly corrosive is the nature of the data involved. Financial breach victims lose account credentials or payment data. That is bad. AI companion breach victims lose something more fundamental: the record of their interior life. The conversations people have with these apps are not transactions. They are confessions.
Character AI: 300 Million Messages and a PR Master Class
The Character AI breach is the single largest data exposure event in the companion AI category. The number, 300 million messages, was confirmed by the company after third-party security researchers independently estimated the scope.
The disclosure mechanics are worth examining closely. The company’s breach notification post led with a paragraph about its commitment to user safety. The actual scope of the breach appeared in paragraph four. The specific timeline, the 11-day gap between internal confirmation and public disclosure, was never directly addressed in official communications.
The remediation announcement referenced “strengthened access controls” and “enhanced monitoring,” phrases that are technically meaningful but practically unverifiable by any external party. No independent security audit was commissioned or published. No penetration test results were shared. The assurance rested entirely on the company’s own word.
What followed in r/CharacterAI was one of the more honest user responses I have seen to a platform breach. Users were not just angry about the breach. They were angry about having been in a situation where this was possible, and they connected that anger to the fundamental structure of the product.
💬 From Reddit — r/CharacterAI:
“The thing that gets me isn’t even the breach. It’s that they clearly had all my messages stored in a format where this could happen. I thought they were ephemeral. They were sitting in a database the whole time.”
— u/datacareful_2025
This user has identified something important. The breach was not just a security failure. It was a revelation about the product architecture. Conversations users experienced as transient and private were stored in a form accessible to unauthorized parties.
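None of these platforms publish their storage architecture, so what follows is a hypothetical sketch, not a description of Character AI's actual systems. It illustrates the difference this user is pointing at: messages persisted as-is versus messages encrypted application-side with a key held outside the message database, where a database dump alone yields nothing readable. Every identifier here is an illustrative assumption.

```python
# Hypothetical sketch of two storage patterns, not any platform's code.
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# Pattern A: what the breach implies. Messages sit in the database as-is,
# so anyone who dumps the table reads every conversation.
plaintext_store = {"user_42": ["I've never told anyone this, but..."]}

# Pattern B: application-side encryption. The per-user key lives in a
# separate key store, so a dump of the message table alone yields
# only ciphertext.
user_key = Fernet.generate_key()  # held outside the message database
cipher = Fernet(user_key)
encrypted_store = {
    "user_42": [cipher.encrypt(b"I've never told anyone this, but...")]
}

# Reading a message back requires both the table AND the key store.
print(cipher.decrypt(encrypted_store["user_42"][0]).decode())
```

Pattern B is no cure-all: if the key store is breached alongside the database, the protection collapses. It also makes conversations much harder to feed back into model training, which may be exactly why the plaintext pattern is so common in this category.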
Chattee / GiMe Chat: How to Bury a Breach
The Chattee incident is less spectacular in scale than Character AI's but more revealing about how smaller platforms handle accountability. More than 400,000 user accounts were exposed, including email addresses, hashed passwords, and subscription metadata.
The user notification mechanism was a FAQ update. Not an email to affected users. Not a banner on login. A paragraph added to an existing support article that users would only encounter if they were actively searching for information about the breach.
This is not accidental. Every decision in breach communication is a deliberate choice. Choosing a FAQ update over direct notification is a choice to minimize discovery by affected users and therefore minimize the legal and reputational exposure that comes from users knowing they were breached.
The company has not published a post-breach security review. Its corporate registration is not surfaced in any public-facing material. Its privacy policy lists a contact email but no physical address, no legal entity name, and no incorporation jurisdiction.
This is the accountability gap that makes certain platforms genuinely dangerous to use for sensitive content. Not because they are malicious, but because when things go wrong, there is no legal thread to pull.
The Jurisdiction Problem: Why Some Breaches Have No Consequences
Here is the part of the story that most coverage gets wrong. The real risk of a data breach is not the exposure itself. It is what happens, or fails to happen, afterward.
If a US-incorporated company breaches CCPA, state attorneys general can investigate, levy fines, and compel remediation. If the same company also operates in the EU, GDPR enforcement can result in fines of up to 4% of global annual revenue. These are real numbers with real teeth.
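To make the 4% figure concrete: GDPR's higher fine tier (Article 83(5)) caps at 20 million EUR or 4% of global annual turnover, whichever is greater. A quick worked example with hypothetical revenue figures, not any platform's actuals:

```python
# GDPR Article 83(5): the higher fine tier caps at 20M EUR or 4% of
# global annual turnover, whichever is greater. Revenues are hypothetical.
FLOOR_EUR = 20_000_000
PCT_CAP = 0.04

def max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of the higher GDPR fine tier for a given turnover."""
    return max(FLOOR_EUR, PCT_CAP * global_annual_turnover_eur)

print(f"{max_fine(150_000_000):>13,.0f}")    # mid-size platform -> 20,000,000 (floor)
print(f"{max_fine(2_000_000_000):>13,.0f}")  # large platform    -> 80,000,000
```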
If a company with no clear US or EU incorporation has a breach, the affected users have very limited recourse. No state AG investigation. No GDPR fine. No class action in a jurisdiction with meaningful consumer protection law.
SpicyChat AI operates a platform with a substantial user base, a functioning product, and a growing community. What it does not have, in any of its public-facing materials, is a clear answer to the question: where is this company incorporated, and what law governs its data handling?
I spent time trying to find this information. The privacy policy provides a generic contact email. The terms of service reference “applicable law” without specifying which jurisdiction’s law applies. The “About” page describes the product and does not mention the company.
This is not uncommon among consumer AI platforms, and it is not an accident. It is deliberate opacity. Companies that are hard to locate legally are harder to sue and harder to regulate. That opacity is a feature of the corporate structure, not a bug.
The Sugarlab Contrast
Sugarlab AI is a smaller platform that has been building its user base during precisely the period when user anxiety about privacy has been highest. I have watched its community communications for six months, and the contrast with platforms like SpicyChat AI is instructive.
Sugarlab AI has published a data handling summary that is longer and more specific than most competitors. It names the jurisdiction governing its privacy policy. It provides a direct email address with a human response time commitment.
None of this proves Sugarlab AI is secure. It could have a breach tomorrow. But the deliberateness of these communication choices suggests a company that has thought carefully about accountability, and companies that think carefully about accountability tend to build more accountable systems.
This is the part of the story that optimists in the AI companion space will find useful: the competitive pressure created by two major breaches is pushing better-run platforms to differentiate on transparency. The companies that were already building governance infrastructure before the breaches hit are now in a strong position to say: here is why we are different.
What “Privacy Policy” Actually Means (And Does Not)
The phrase “privacy policy” has become so standard that it functions as a trust signal independent of its actual content. A company says “see our privacy policy” and users interpret this as “this company takes privacy seriously.” These are completely different things.
A privacy policy is a legal document. Its primary function is to protect the company from regulatory liability by disclosing, in sufficiently general terms, what data is collected and how it might be used. It is not a commitment to user privacy. It is a legal baseline, and in most cases it describes the maximum the company reserves the right to do with your data, not the minimum protection it promises.
The phrases that should concern AI companion users appear in almost every privacy policy I have reviewed: “we may share your information with trusted third parties,” “we retain data for as long as necessary to provide our services,” “we may use your conversations to improve our products.” These clauses are not red flags. They are industry standard. And industry standard, in this category, is inadequate for the level of data intimacy users are bringing to these platforms.
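For readers who want to audit a policy themselves, the crudest useful check is searching the text for the clauses above. A minimal sketch; the phrase list comes from this article, not from any regulatory standard, and naive substring matching will miss reworded variants:

```python
# Naive privacy-policy self-audit: flag the industry-standard clauses
# discussed above. A miss means "read it manually," not "all clear."
WATCH_PHRASES = [
    "share your information with trusted third parties",
    "retain data for as long as necessary",
    "use your conversations to improve our products",
]

def audit_policy(policy_text: str) -> list[str]:
    """Return the watched clauses that appear verbatim in the policy."""
    text = " ".join(policy_text.lower().split())  # normalize whitespace
    return [phrase for phrase in WATCH_PHRASES if phrase in text]

sample = "We may retain data for as long as\nnecessary to provide our services."
print(audit_policy(sample))  # ['retain data for as long as necessary']
```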
The FTC Signal
The Federal Trade Commission issued guidance in late 2025 specifically addressing data handling practices in AI consumer applications. The language in that guidance is worth reading closely.
The FTC did not ban any practice. It issued guidance. But the specific practices it called out, including vague data retention language, undefined third-party sharing, and insufficient breach notification, describe almost every platform in the AI companion category.
FTC guidance is a precursor to enforcement action. It tells companies what the regulator is watching and what practices it considers problematic. Platforms with clear US incorporation and competent legal teams read these signals and adjust. Platforms without clear US incorporation may not even receive the signal.
The Coverage Gap: Why Nobody Is Writing the Honest Story
The existing coverage of AI companion app safety falls into two categories. The first is child safety articles: journalists writing about minors accessing adult content. The second is listicles: “10 AI Girlfriend Apps Reviewed” articles that mention privacy in a single sentence.
Neither category is writing the story that adult users actually need: an honest, comparative account of which platforms are accountable, what accountability means in practice, and what happens when they are not.
The child safety angle exists because it drives outrage clicks and regulatory attention. The listicle angle exists because it drives affiliate revenue. The honest accountability story is harder to write, gets fewer clicks, and makes powerful platforms uncomfortable. So nobody writes it.
This piece is an attempt to write it. The conclusions are not flattering to the platforms with the largest user bases, which is exactly why it was unlikely to come from a publication financially dependent on those platforms’ advertising spend.
Key Takeaways
- The AI companion breach cover-up playbook is borrowed directly from fifteen years of tech industry crisis management: delay disclosure, minimize scope, issue vague remediation statements, wait for the news cycle to move on.
- Corporate jurisdiction is the single most important privacy variable: platforms with no clear US or EU incorporation are structurally designed to avoid accountability, intentionally or not.
- SpicyChat AI’s accountability structure is too opaque for users sharing sensitive content to feel genuinely protected — and the FTC is now watching the entire category.
FAQ
Q: Did Character AI deliberately cover up its data breach?
A: The 11-day gap between internal breach confirmation and public disclosure sits uneasily against the "without unreasonable delay" standard in most US state breach notification laws, and the company never explained it. Whether that was "cover-up" or "crisis management" depends on your interpretation of intent, but the outcome for affected users was the same: a delayed opportunity to protect themselves.
Q: Why didn’t Chattee notify users directly about the breach?
A: Based on available information, Chattee chose to bury breach notification in a FAQ update rather than email affected users directly. This approach minimizes user discovery of the breach and therefore minimizes the legal and reputational exposure that follows widespread user awareness. It is a common tactic among platforms with limited legal accountability infrastructure.
Q: Is SpicyChat AI safe to use?
A: SpicyChat AI has no confirmed major breach on record, but its corporate accountability structure is opaque. There is no clear incorporation jurisdiction disclosed in public materials, which limits legal recourse for users in the event of a future breach. Users sharing sensitive content should factor this opacity into their risk assessment.
Q: What is Sugarlab AI doing differently?
A: Sugarlab AI has published more specific data handling documentation than most competitors, including a named jurisdiction for its privacy policy and a direct human contact pathway. This does not guarantee security, but it suggests a more deliberate approach to accountability. It is a smaller platform building during a period when the larger platforms have provided a cautionary example.
Q: Will the FTC actually take action against AI companion apps?
A: FTC guidance in late 2025 specifically called out data handling practices common in this category. Historical pattern suggests enforcement action follows guidance by 12 to 24 months, focused first on the largest platforms with the clearest US jurisdiction. Platforms without US corporate structure are harder for the FTC to reach, which may perversely protect them from the regulatory pressure that would force improved practices.
*If you enjoyed this, fuel the next one → https://coff.ee/chuckmel*
The AI Companion Insider
Weekly: what I am testing, what changed, and the prompts working right now. No fluff. Free.