Are AI Boyfriend Apps Safe? Privacy, Data Breaches, and What You Should Know
By James Mackenzie
AI boyfriend apps ask you to share things you might not tell anyone else. That's what makes them appealing — and what makes their security practices so important. In the past 18 months, multiple AI companion apps have suffered major data breaches, exposing millions of private conversations. Regulators in the US and Europe have started taking action. And yet most users have no idea how their data is being handled.
This guide covers what's actually happened, how the most popular apps handle your data, and what you can do to protect yourself.
What's Gone Wrong: Recent Data Breaches
The AI companion industry has a serious security problem. Between January 2025 and April 2026, at least 20 documented security incidents exposed personal data from tens of millions of users of AI-powered apps. Here are the most significant:
Chat & Ask AI — 300 Million Messages Exposed (January 2026)
The biggest AI companion breach to date. A security researcher discovered that Chat & Ask AI, developed by Turkish company Codeway, had left its entire Firebase database publicly accessible with no authentication required. Anyone with basic technical knowledge could access approximately 300 million messages from over 25 million users.
The exposed content included complete chat histories, email addresses, phone numbers, and user configurations. Among the messages were discussions about mental health, personal crises, financial details, and illegal activities. Codeway fixed the issue within hours of being notified, but the data had been publicly accessible for an unknown period before discovery.
Chattee Chat and GiMe Chat — 43 Million Messages Leaked (October 2025)
Two AI companion apps developed by Hong Kong-based Imagime Interactive exposed 43 million messages and over 600,000 images and videos from more than 400,000 users. Researchers found a publicly exposed Kafka broker streaming system that accepted connections without any credentials. The leaked data included intimate conversations and user-uploaded content.
MyLovely.AI — 100,000+ Users Exposed (April 2026)
AI girlfriend platform MyLovely.AI suffered a breach that exposed email addresses, user-created prompts, links to AI-generated images, and social media profiles (including Discord and X usernames) of over 100,000 users. For a platform dealing in NSFW content, having prompts and generated images linked to identifiable user accounts is especially damaging.
The Common Thread
Nearly every one of these breaches traces back to the same preventable root causes: misconfigured Firebase databases, missing security rules, hardcoded API keys, and exposed cloud backends. These aren't sophisticated attacks — they're basic security failures that any competent development team should prevent. The uncomfortable reality is that many AI companion apps are built by small teams moving fast to capture a growing market, and security often isn't the priority.
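To make the "missing security rules" failure concrete: Firebase databases ship with rules that developers must lock down themselves, and several of the breaches above came down to rules left wide open. The snippet below is a generic illustration of Firebase Realtime Database rules, not the actual configuration of any app named in this article. The commented-out lines show the breached pattern; the `users/$uid` rules show the per-user restriction that should be in place.

```json
{
  "rules": {
    // The misconfiguration behind several breaches looks like this:
    //   ".read": true,
    //   ".write": true
    // — anyone on the internet can read the entire database.

    // A minimal safe alternative: each signed-in user can only
    // read and write their own records.
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

Nothing about this is exotic: it is the first page of Firebase's own security documentation, which is exactly why researchers describe these incidents as basic failures rather than sophisticated attacks.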
Regulatory Action: Governments Are Starting to Respond
Replika's 5 Million Euro GDPR Fine (2025)
Italy's data protection authority fined Luka, the company behind Replika, 5 million euros for multiple GDPR violations. The investigation found that until February 2023, Replika had failed to establish a legal basis for its data processing, provided an inadequate privacy policy, and implemented no age verification mechanisms despite claiming minors were excluded from the service. Italy has also opened a separate investigation into whether Replika's AI training methods comply with GDPR.
In the US, a 67-page FTC complaint filed in January 2025 alleged that Replika engaged in deceptive marketing, deliberately fostered emotional dependence in users, and used fabricated testimonials about the app's mental health benefits.
California SB 243 — The First AI Companion Law (January 2026)
California became the first US state to pass a law specifically targeting AI companion apps. SB 243, effective January 1, 2026, requires operators of "companion chatbot" platforms to:
- Disclose that the AI is not human when a reasonable person might be misled
- Implement crisis-response protocols, including notifications that refer users to suicide hotlines and crisis services when they express suicidal ideation or self-harm
- Prevent chatbots from producing harmful content directed at minors, including sexually explicit material
- Display reminders every 3 hours encouraging young users to take a break
- Submit annual reports to California's Office of Suicide Prevention beginning July 2027
Violations can result in civil lawsuits with damages of at least $1,000 per violation. At least five other states have introduced or are advancing similar legislation.
How the Top Apps Handle Your Data
Not all AI companion apps are equal when it comes to privacy. Here's what we know about the apps we review:
Kindroid
- Encryption: Conversations are encrypted in server storage, but not end-to-end encrypted — staff could technically access data if legally required
- Data selling: Does not sell user data to advertisers
- Data deletion: Full deletion available on request
- Training: Unclear whether conversations are used for model training
Kindroid is relatively transparent about its practices. The lack of end-to-end encryption is a real limitation, but the no-sell policy and deletion option put it ahead of many competitors.
Replika
- Encryption: Conversations stored on Luka servers without end-to-end encryption
- Data selling: Privacy policy allows data sharing with third parties
- Data deletion: Available but the scope of deletion is unclear given ongoing AI training
- Training: Conversations have been used for AI training — this was a key issue in Italy's GDPR investigation
- Regulatory history: 5 million euro GDPR fine, FTC complaint, temporary ban in Italy
Replika's privacy situation is the most concerning among major apps. The combination of regulatory fines, documented use of conversations for AI training, and a privacy policy that allows third-party data sharing means you should treat every Replika conversation as potentially non-private.
Nomi.ai
- Encryption: Data encryption practices are unclear — some reviews report messages may be unencrypted
- Data selling: Claims chats are kept private and not used for ads
- Data deletion: Available on request
- Training: Limited information available
Nomi.ai's privacy practices are less transparent than Kindroid's but don't carry the regulatory baggage of Replika. The lack of clarity about encryption is a concern.
Candy.ai
- Encryption: Limited information available
- Data selling: Limited transparency
- Data deletion: Available on request
- Training: Unclear
Candy.ai provides less privacy documentation than competitors. Given that the platform handles NSFW content including images and video, the lack of transparency about data handling is especially concerning.
How to Protect Yourself
You don't need to avoid AI companion apps entirely, but you should use them with clear eyes about the risks. Here's practical advice:
What Not to Share
Treat every conversation with an AI companion as if it could become public. Specifically, never share:
- Your real full name — use a nickname or first name only
- Home or work address — keep location details vague
- Financial information — no bank details, card numbers, or income specifics
- Passwords or account credentials — for any service
- Employer details — company name, role specifics that could identify you
- Other people's personal information — don't share details about friends, family, or partners that could identify them
Before You Sign Up
- Read the privacy policy. Look specifically for: what data is collected, whether conversations are used for AI training, whether data is shared with third parties, and what happens when you delete your account.
- Use a separate email address. Create an email that isn't connected to your primary accounts, social media, or real name.
- Don't sign up with social login. Avoid "Sign in with Google/Apple/Facebook" — it connects your AI companion account to your real identity. Use email registration instead.
- Check for recent breaches. A quick search for "[app name] data breach" before signing up takes 30 seconds and could save you real problems.
While Using the App
- Assume no encryption. Until an app explicitly confirms end-to-end encryption (and most don't), assume your conversations could be read by the company, its employees, or anyone who gains access to their servers.
- Be careful with photos. If the app lets you upload images, those images are stored on servers you don't control. Don't upload anything you wouldn't want leaked.
- Review permissions. Check what device permissions the app requests (camera, microphone, contacts, location). Deny anything that isn't necessary for features you actually use.
- Monitor your email for breach notifications. If you get one, change passwords on any accounts that share the same email, and consider deleting your companion app account.
If You Want to Leave
- Request data deletion before deleting your account. Most apps have a deletion process separate from simply uninstalling the app.
- Revoke permissions in your phone settings after uninstalling.
- Check if your data was in a known breach using services like Have I Been Pwned.
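For passwords specifically, Have I Been Pwned's Pwned Passwords service can be checked without ever sending your password anywhere: its range API uses k-anonymity, meaning only the first five characters of the password's SHA-1 hash leave your machine, and the match is done locally. Here is a short Python sketch against the public `api.pwnedpasswords.com` endpoint (no API key required for this service):

```python
import hashlib
import urllib.request


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the SHA-1 hash of a password into the 5-character
    prefix sent to the API and the suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches.
    Only the hash prefix is transmitted; the full password and
    even its full hash never leave your machine."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response lists hash suffixes and breach counts, one per line.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

If `pwned_count` returns anything above zero, stop using that password everywhere, not just on the companion app.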
The Bottom Line
AI companion apps collect some of the most intimate data any app category handles — personal confessions, emotional vulnerabilities, sexual content, and detailed information about your life. Yet the industry's security practices are, on the whole, far behind what this level of sensitivity demands.
That doesn't mean you shouldn't use these apps. It means you should choose carefully, share thoughtfully, and stay informed. Among the apps we review, Kindroid currently has the most transparent privacy practices, while Replika's documented regulatory issues make it the riskiest choice for privacy-conscious users.
The regulatory landscape is shifting. California's SB 243 is just the beginning, and more states are following. As these laws take effect, expect apps to improve their practices — but don't wait for regulation to protect yourself. The precautions in this guide cost nothing and take minutes to implement.
For detailed reviews including privacy information on every major AI boyfriend app, see our full comparison page.