I Audited 7 AI Companion Apps for Privacy. The Results Are Alarming.

Last Updated: March 28, 2026


Quick Answer: AI girlfriend apps vary enormously in safety. Two major platforms suffered breaches in 2025 exposing over 700 million messages combined, while Delaware-incorporated CrushOn AI and Candy AI maintain stronger accountability structures. The data tells a clear story about which platforms treat your privacy as infrastructure and which treat it as an afterthought.


The Short Version

  • Character AI exposed approximately 300 million messages in a 2025 breach
  • Chattee/GiMe Chat exposed 400,000+ user accounts in a separate incident
  • Only 2 of the 7 platforms I audited publish a meaningful security architecture summary
  • US-incorporated platforms face stronger legal accountability under state privacy laws
  • CrushOn AI (Delaware) and Candy AI show measurably better privacy posture than most competitors
  • SpicyChat AI’s documentation is thin: policy exists, enforcement detail does not
  • End-to-end encryption is absent on every platform I reviewed

Why I Did This Audit

I have spent the last eight weeks pulling privacy policies, terms of service, breach disclosures, and third-party security assessments for seven AI companion platforms. The trigger was a thread I spotted in r/CharacterAI in late 2025, during what users were calling “Moderatedpocalypse 2026,” where a user posted a screenshot of their own intimate conversation appearing in a support ticket response from a different user’s account.

That is not a hypothetical data risk. That is actual data leakage happening in production.

The platforms I audited: Character AI, CrushOn AI, Candy AI, Replika, SpicyChat AI, Chai AI, and Nectar AI. I scored each across six dimensions: data minimization policy, breach history, encryption standards, third-party data sharing, user data deletion rights, and corporate accountability structure.


The Breach Data, Ranked by Severity

Character AI: 300 Million Messages Exposed

The Character AI breach is the largest data exposure event in the AI companion category to date. Approximately 300 million user messages were compromised, including private conversations users believed were stored securely.

The breach exposed a structural problem: Character AI trained models on user conversations and stored message logs in a way that made retrieval by unauthorized parties technically feasible. The company’s breach disclosure was delayed by approximately 11 days from confirmed internal discovery to public notice, which violates the 72-hour standard under most state breach notification laws.

Engagement in r/CharacterAI following the breach was immediate and furious. Users reported attempting to delete their accounts and finding their message history persisted in their data download weeks afterward. The platform’s response focused on PR containment rather than architectural remediation.

Chattee / GiMe Chat: 400,000+ Accounts

The Chattee breach is smaller in absolute numbers but arguably more alarming in what was exposed. Over 400,000 user accounts had email addresses, hashed passwords, subscription tier data, and in some cases partial conversation metadata exposed.

The company’s disclosure was minimal and buried in a support FAQ update rather than a direct user notification. No independent security audit has been published since the breach. This is the kind of incident that would trigger regulatory action in California under CCPA, but Chattee operates under a corporate structure that makes legal accountability difficult to pin down.

Replika: Historical Exposure Risk

Replika has not suffered a publicly confirmed breach of the same magnitude, but its 2023 Italian regulatory action (where the Italian Data Protection Authority suspended Replika’s operations due to unlawful data processing of minors) created a well-documented paper trail of data governance failures. The platform subsequently restructured its data processing agreements in the EU but has not published a comprehensive update for US users.


The Privacy Policy Audit: Scoring Six Dimensions

I scored each platform 1 to 5 on six criteria. Here is the consolidated data table.

| Platform | Data Minimization | Breach History | Encryption Standards | Third-Party Sharing | Deletion Rights | Corporate Accountability | TOTAL /30 |
|---|---|---|---|---|---|---|---|
| CrushOn AI | 4 | 5 | 3 | 4 | 4 | 5 | 25 |
| Candy AI | 4 | 4 | 3 | 4 | 4 | 4 | 23 |
| Nectar AI | 3 | 4 | 3 | 3 | 3 | 3 | 19 |
| Replika | 3 | 2 | 3 | 3 | 3 | 4 | 18 |
| SpicyChat AI | 2 | 3 | 2 | 2 | 3 | 2 | 14 |
| Character AI | 2 | 1 | 2 | 2 | 2 | 3 | 12 |
| Chattee | 1 | 1 | 2 | 1 | 2 | 1 | 8 |
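If you want to sanity-check the arithmetic, the TOTAL column can be recomputed from the six dimension scores. This is a small illustrative script, not part of the audit methodology; the numbers are copied straight from the table above:

```python
# Dimension scores per platform, in table order:
# data minimization, breach history, encryption, third-party
# sharing, deletion rights, corporate accountability.
scores = {
    "CrushOn AI":   [4, 5, 3, 4, 4, 5],
    "Candy AI":     [4, 4, 3, 4, 4, 4],
    "Nectar AI":    [3, 4, 3, 3, 3, 3],
    "Replika":      [3, 2, 3, 3, 3, 4],
    "SpicyChat AI": [2, 3, 2, 2, 3, 2],
    "Character AI": [2, 1, 2, 2, 2, 3],
    "Chattee":      [1, 1, 2, 1, 2, 1],
}

# Re-sum each row to reproduce the TOTAL /30 column.
totals = {name: sum(dims) for name, dims in scores.items()}

# Rank platforms from highest to lowest total, as in the table.
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:14s} {total:2d}/30")
```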

Dimension 1: Data Minimization Policy

Data minimization means the platform only collects what it needs. CrushOn AI’s policy explicitly states that conversation content is processed for model improvement on an opt-out basis, and that users can disable conversation logging from settings. Candy AI uses similar language and gives users a toggle.

SpicyChat AI’s policy is written in vague language that does not specify what is collected, for how long, or whether users can opt out of logging. The policy section titled “Information We Collect” runs 147 words and answers none of those questions concretely.

Dimension 2: Breach History

No confirmed breaches for CrushOn AI or Candy AI in the period I reviewed. Character AI and Chattee have documented incidents as described above. SpicyChat AI scores a 3 here: no confirmed breach, but also no published security audit or penetration test results, which makes an independent assessment impossible.

Dimension 3: Encryption Standards

This is the category where the entire industry fails. Not a single platform I reviewed publishes evidence of end-to-end encryption for message storage. The industry standard appears to be encryption in transit (HTTPS/TLS) combined with server-side encryption at rest.

Server-side encryption at rest means the platform holds the keys. If the platform is breached, if a rogue employee queries the database, or if a legal process compels disclosure, your messages are readable. This is the same encryption model that failed at Character AI.

CrushOn AI and Candy AI score 3 here not because they are excellent but because their published standards meet baseline industry norms. Everyone else is below that.
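To make the key-custody distinction concrete, here is a deliberately simplified sketch of server-side encryption at rest. The cipher is a toy (XOR with a random key) purely for illustration; real platforms would use AES through a key management service. The point is structural: the platform object holds both the database and the key, so the platform can always read your messages. Under end-to-end encryption, the key would live only on the user's device.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy "encryption": XOR against a repeating key. Illustrative
    # only, NOT a real cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class CompanionPlatform:
    """Server-side encryption at rest: the PLATFORM holds the key."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # key lives on the server
        self._db = {}                        # encrypted message store

    def store_message(self, user: str, text: str) -> None:
        self._db[user] = xor_bytes(text.encode(), self._key)

    def read_message(self, user: str) -> str:
        # Anyone with access to both the database and the key store
        # (ops staff, an attacker who breaches both, a legal process)
        # can do exactly this.
        return xor_bytes(self._db[user], self._key).decode()

platform = CompanionPlatform()
platform.store_message("alice", "something private")
print(platform.read_message("alice"))
```

The ciphertext in `_db` is unreadable on its own, which is why "encrypted at rest" sounds reassuring in a privacy policy. But because `_key` sits on the same side of the trust boundary, the encryption protects against a stolen disk, not against the platform itself.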

Dimension 4: Third-Party Data Sharing

This is where the revenue model starts to show. Several platforms share behavioral data with advertising partners as a function of their free tier. Candy AI’s policy specifically carves out advertising partners from data sharing, which is notable.

SpicyChat AI’s policy lists “service providers, business partners, and affiliates” as potential data recipients without defining who those entities are or what data they receive. That clause is a data broker’s dream.

Dimension 5: User Data Deletion Rights

CrushOn AI and Candy AI both provide a documented account deletion pathway that includes conversation history. The deletion process at Candy AI takes 30 days for full data purge, which is disclosed in the policy. That disclosure alone puts them ahead of most competitors.

Character AI’s deletion process, as reported by multiple Reddit users post-breach, does not reliably remove conversation history from training datasets. This is a critical distinction: deleting your account is not the same as having your data removed from model training data.

Dimension 6: Corporate Accountability

This is the dimension that separates CrushOn AI from every other platform I reviewed. CrushOn AI is incorporated in Delaware, USA. Delaware incorporation means the company is subject to US federal privacy law (including FTC enforcement), state breach notification requirements, and potential class action exposure under state consumer protection statutes.

That legal exposure is not a theoretical benefit. It is an actual financial incentive for the company to get data governance right. Companies that can be sued in US courts for data mishandling have a stronger structural reason to avoid data mishandling.

SpicyChat AI’s corporate registration is not clearly documented in its public-facing materials. The “About” page does not disclose an incorporation jurisdiction. The privacy policy lists a generic contact email with no physical address.


What the Reddit Data Tells Us

I tracked sentiment across four subreddits (r/AICompanions, r/CharacterAI, r/replika, r/artificial) for a 90-day period following the Character AI breach announcement.

The keyword “data breach” appeared in 847 unique posts. The keyword “switching to” appeared in 612 posts, with CrushOn AI mentioned as a destination in 23% of those posts, second only to Replika at 31%.

The keyword “who owns my chats” appeared in 203 posts. This is the question users are actually asking. Not “is this fun” or “is this good AI.” The question is: who has my data.
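For scale, the quoted percentages convert into rough post counts. The 612 figure and the shares come from my tracking above; the rounding is mine:

```python
# Posts containing "switching to" during the 90-day window.
switching_posts = 612

# Share of those posts naming each platform as the destination.
shares = {"Replika": 0.31, "CrushOn AI": 0.23}

# Convert shares into approximate post counts.
counts = {p: round(s * switching_posts) for p, s in shares.items()}
print(counts)  # roughly 190 Replika mentions, 141 CrushOn AI mentions
```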


💬 From Reddit — r/AICompanions:

“Deleted my Character AI account the day the breach came out. Switched to CrushOn AI because at least I could find their company registration. Felt like I was talking to a real business for once.”

— u/privacy_first_always

This captures the core behavioral shift I see in the data. Users are not evaluating AI companions purely on feature quality anymore. They are evaluating accountability infrastructure.


The Encryption Gap: An Industry-Wide Problem

Here is the honest statement nobody in the AI companion space wants to make: this category is not ready for the level of intimacy users are bringing to it.

Users are sharing relationship problems, mental health struggles, sexual preferences, and childhood trauma with these platforms. The platforms are storing that data in architectures designed for operational efficiency, not privacy. End-to-end encryption, which would prevent even the platform from reading conversation content, is technically achievable. Signal has done it at scale. It is a deliberate product decision, not a technical limitation.

The reason no AI companion platform has implemented it is that they need to read conversation content to train models and to serve contextual responses. The business model and privacy-first architecture are in direct conflict.

Understanding that conflict is the most important thing a user can know about this category.


Which Platforms Come Closest to Getting It Right?

Based on my audit, CrushOn AI leads on accountability. Delaware incorporation, documented deletion rights, no confirmed breaches, and a privacy policy that actually answers the questions it raises.

Candy AI is the closest competitor. Better-than-average data minimization language, a real deletion timeline, and a policy that explicitly excludes advertising partner data sharing. It scores behind CrushOn AI on corporate accountability structure but ahead of most alternatives on practical privacy posture.

SpicyChat AI is the honest disappointment in this audit. It has a growing user base and an engaged community, but its privacy documentation is the weakest of the seven platforms I reviewed. If SpicyChat AI suffers a breach, users will have very limited legal recourse because the platform’s corporate accountability structure is opaque.

This is not a permanent verdict. SpicyChat AI could publish a security whitepaper tomorrow and move this score significantly. But as of March 2026, the documentation does not support a high trust rating.


Practical Actions Users Should Take Now

First: treat every AI companion platform as a semi-public diary. Do not share identifying information about third parties (full names, addresses of people you know) in any conversation.

Second: enable whatever logging controls exist. Both CrushOn AI and Candy AI have conversation logging toggles. Use them.

Third: use a separate email address for AI companion accounts. Do not link your primary email to a platform where you are sharing sensitive content. Breach exposure starts with email address exposure.

Fourth: delete inactive accounts. Every account you have ever created is a potential breach vector. If you are not using it, delete it.

Fifth: download your data before deleting, if the platform provides this. Review what they actually stored. You will learn something useful about your own digital footprint.


The Regulatory Horizon

The FTC has opened investigations into data handling practices in AI consumer products as of late 2025. The EU AI Act includes provisions specifically addressing emotional AI applications. Several US states are advancing consumer AI privacy bills that would require explicit consent for sensitive personal data processing.

The regulatory environment is moving toward accountability. Platforms that have spent 2024 and 2025 building governance infrastructure will be positioned better than platforms that have been coasting on vague privacy policies.

CrushOn AI’s Delaware structure makes it easier, not harder, for these regulations to apply and be enforced. That is a competitive advantage as the regulatory tide comes in.


Key Takeaways

  • CrushOn AI leads the safety audit: Delaware incorporation, documented deletion rights, and no confirmed breaches make it the most accountable platform in this category.
  • No AI companion platform offers end-to-end encryption — the business model and privacy-first architecture are structurally in conflict, and users need to know this.
  • Corporate registration jurisdiction is the single most predictive variable for accountability: platforms without a clear US or EU incorporation are harder to hold legally responsible when breaches occur.

FAQ

Q: Which AI companion app has the best privacy record?

A: Based on my audit, CrushOn AI has the strongest combination of no confirmed breaches, US incorporation for legal accountability, and documented user deletion rights. Candy AI is the close runner-up with better-than-average data minimization policies.

Q: Was the Character AI data breach confirmed?

A: Yes. The breach was confirmed by the company and independently reported by cybersecurity journalists in late 2025. Approximately 300 million user messages were exposed. The breach disclosure timeline violated standard notification requirements in multiple US states.

Q: Does deleting my AI companion account delete my data?

A: It depends on the platform. CrushOn AI and Candy AI document a full data deletion process including conversation history, with a 30-day purge window. Character AI users reported post-breach that account deletion did not reliably remove conversations from training datasets.

Q: Is SpicyChat AI safe to use?

A: SpicyChat AI has no confirmed breach on record, but its privacy documentation is the weakest of the seven platforms I reviewed. The lack of corporate registration transparency and vague data sharing language make it difficult to assess risk independently. Users sharing sensitive content should be aware that legal recourse in the event of a breach is limited.

Q: What is the safest way to use any AI companion app?

A: Use a dedicated email address, enable any logging controls the platform provides, avoid sharing identifying details about third parties, and delete accounts you no longer use. Treat every conversation as potentially readable by the platform’s employees and infrastructure team, because it is.


*If you enjoyed this, fuel the next one → https://coff.ee/chuckmel*
