Does your chatbot know too much? Here's why you should think twice before you tell your AI companion everything.
17 Nov 2025 • 4 min. read

In the movie "Her," the film's hero strikes up an ultimately doomed romantic relationship with an advanced AI system. At the time of its release in 2013, such a scenario was firmly in the realm of science fiction. But with the emergence of generative AI (GenAI) and large language models (LLMs), it's no longer such an outlandish prospect. In fact, "companion" apps are proliferating today.
However, there are inevitably risks associated with hooking up with an AI bot. How do you know your personal information won't be shared with third parties? Or stolen by hackers? The answers to questions like these will help you decide whether it's all worth the risk.
Looking for (digital) love
Companion apps meet a growing market demand. AI girlfriends and boyfriends harness the power of LLMs and natural language processing (NLP) to interact with their users in a conversational, highly personalized way. Titles like Character.AI, Nomi and Replika fill a psychological and sometimes romantic need for those who use them. It's not hard to see why developers are keen to enter this space.
Even the big platforms are catching up. OpenAI recently said it will soon roll out "erotica for verified adults," and may allow developers to build "mature" apps on ChatGPT. Elon Musk's xAI has also launched flirtatious AI companions in its Grok app.
Research published in July found that nearly three-quarters of teens have used AI companions, and half do so regularly. More worryingly, a third have chosen AI bots over humans for serious conversations, and a quarter have shared personal information with them.
That's particularly concerning as cautionary tales begin to emerge. In October, researchers warned that two AI companion apps (Chattee Chat and GiMe Chat) had unwittingly exposed highly sensitive user information. A misconfigured Kafka broker instance left the apps' streaming and content delivery systems with no access controls. That meant anyone could have accessed over 600,000 user-submitted images, IP addresses, and millions of intimate conversations belonging to more than 400,000 users.
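To illustrate the kind of misconfiguration involved, here is a minimal, hypothetical sketch of a Kafka broker configuration file (server.properties) for a standard Apache Kafka deployment; it is not the actual configuration used by either app. The vulnerable variant accepts unauthenticated plaintext connections and performs no authorization, while the hardened variant requires encrypted, authenticated connections and denies requests by default.

    # Hypothetical server.properties sketch (illustrative only, not the apps' real config)

    # Vulnerable: an unauthenticated plaintext listener reachable from the internet.
    # With no authorizer configured, the broker performs no authorization checks,
    # so anyone who can reach port 9092 can read every topic.
    #listeners=PLAINTEXT://0.0.0.0:9092

    # Hardened: encrypted, authenticated connections only
    listeners=SASL_SSL://0.0.0.0:9093
    security.inter.broker.protocol=SASL_SSL
    sasl.enabled.mechanisms=SCRAM-SHA-512
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512

    # Enforce per-user ACLs and deny any request that matches no ACL
    authorizer.class.name=kafka.security.authorizer.AclAuthorizer
    allow.everyone.if.no.acl.found=false

The broader point for consumers is that a single overlooked setting like this can expose every message a service processes, no matter how polished the app itself looks.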
The dangers of hooking up with a bot
Opportunistic threat actors may sense a new way to make money. The information shared by victims in romantic conversations with their AI companion is ripe for blackmail. Images, videos and audio could be fed into deepfake tools for use in sextortion scams, for example. Or personal information could be sold on the dark web for use in follow-on identity fraud. Depending on the security posture of the app, hackers may also be able to get hold of credit card details stored for in-app purchases. According to Cybernews, some users spend thousands of dollars on such purchases.
As the above example suggests, revenue generation rather than cybersecurity is the priority for AI app developers. That means threat actors may be able to find vulnerabilities or misconfigurations to exploit. They may even try their hand at creating lookalike companion apps that conceal malicious information-stealing code, or that manipulate users into divulging sensitive details which can be used for fraud or blackmail.
Even if your app is relatively secure, it may still pose a privacy risk. Some developers collect as much information on their users as possible so they can sell it on to third-party advertisers. Opaque privacy policies can make it hard to understand if, or how, your data is protected. You may also find that the information and conversations you share with your companion are used to train or fine-tune the underlying LLM, which further exacerbates privacy and security risks.
How to keep your loved ones safe
Whether you're using an AI companion app yourself or are worried about your children doing so, the advice is the same. Assume the AI has no security or privacy guardrails built in, and don't share any personal or financial information with it that you wouldn't be comfortable sharing with a stranger. That includes potentially embarrassing or revealing photos and videos.
Better still, if you or your children want to try one of these apps, do your research ahead of time to find the ones that offer the best security and privacy protections. That will mean reading the privacy policies to understand how they use and/or share your data. Avoid any that aren't explicit about intended usage, or that admit to selling user data.
Once you've chosen an app, be sure to switch on security features like two-factor authentication, which will help prevent account takeovers using stolen or brute-forced passwords. And explore its privacy settings to dial up protections; for example, there may be an option to opt out of having your conversations saved for model training.
If you're worried about the security, privacy and psychological implications of your children using these tools, start a conversation with them to find out more. Remind them of the risks of oversharing, and emphasize that these apps are tools for profit that don't have their users' best interests at heart. If you're concerned about the impact they may be having on your children, it may be necessary to put limits on screen time and usage – possibly enforced via parental monitoring controls/apps.
It goes without saying that you shouldn't allow any AI companion apps whose age verification and content moderation policies don't offer adequate protections for your children.
It remains to be seen whether regulators will step in to enforce stricter rules around what developers can and can't do in this space. Romance bots operate in something of a gray area at present, although the EU's upcoming Digital Fairness Act may prohibit excessively addictive and personalized experiences.
Until developers and regulators catch up, it may be better not to treat AI companions as confidants or emotional crutches.

