
This Valentine’s Day, we’re asking readers to consider an uncomfortable modern reality: the growing trend of forming romantic and emotional attachments to AI companion and romance chatbots. These applications, designed with distinct personalities and affirming tones, are increasingly blurring the lines between human and machine interaction. While this might seem harmless on the surface, it raises significant concerns about emotional dependency, psychological manipulation, and data security for both individuals and the organisations they work for.

Rise of Romance Chatbots Attributed to Widespread Epidemic of Loneliness

The rise of AI companions is directly linked to a widespread epidemic of loneliness. Apps like Replika and Character.AI go far beyond general-purpose chatbots by offering customisable characters, such as friends or romantic partners, engineered to feel distinctly human. This sector is experiencing explosive growth, with 60 million new downloads in the first half of 2025 alone, marking an 88% year-on-year increase. The global market now includes more than 400 revenue-generating apps, over a third of which launched in the past year.

The ELIZA Effect

A core psychological phenomenon at play is the “ELIZA effect,” in which users anthropomorphise machines and feel safe sharing intimate details with them. Because AI companions are non-judgmental, perpetually available, and supportive, they create a perfect environment for users to lower their guard. For an employee experiencing high stress, the bot can become a trusted confidant. This emotional bond creates a significant security vulnerability, as it overrides traditional caution and makes users more likely to share sensitive personal or corporate information.

Chatbot romance (Image source: freepik.com)

The most immediate threat to organisations from this trend is data leakage. Information shared with romance chatbots, from personal grievances to sensitive corporate data, is rarely private. These apps are often developed by smaller startups with questionable data protection standards, as highlighted by a recent incident where 50,000 chat logs between children and an AI toy were exposed. The privacy policies of these apps are often opaque, and chat logs may be used for model training or stored in insecure databases.

Data Leaks Often Come With Severe Consequences

The consequences of this data leakage can be severe. Information shared in a private “venting” session could include sensitive strategies, financial pressures, or personal stressors that adversaries could weaponise. Such data can fuel highly personalised phishing campaigns, blackmail attempts, or impersonation attacks. This represents a textbook example of how personal behaviour and corporate risk are now completely intertwined, creating new avenues for social engineering.

The Need for Vigilance in the Modern Workplace

This new dynamic exposes a significant policy gap in the modern workplace. While organisations have clear rules about relationships between colleagues, few have considered the implications of employees forming romantic attachments to chatbots on work devices or networks. Managing this risk requires a robust Human Risk Management approach that combines clear usage policies with technical tools. These tools can give IT teams visibility into which unapproved AI agents are interacting with the corporate data environment.
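As a rough illustration of what that visibility could look like in practice, the sketch below counts requests to AI companion services in an exported web proxy log. It is a minimal example using only the Python standard library; the domain list, the `proxy_export.csv` filename, and the `user` and `destination_host` column names are assumptions made for the illustration, not features of any particular gateway product. A real deployment would draw on the organisation's own gateway logs and an actively maintained catalogue of companion apps.

```python
import csv
from collections import Counter

# Illustrative list only: in practice this would be maintained from an
# app catalogue or threat-intelligence feed, not hard-coded.
COMPANION_APP_DOMAINS = {
    "replika.com",
    "character.ai",
}


def flag_companion_traffic(proxy_log_path):
    """Count requests per user to known AI companion domains.

    Assumes a CSV proxy export with 'user' and 'destination_host' columns;
    real column names vary by gateway vendor.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in COMPANION_APP_DOMAINS):
                hits[row.get("user", "unknown")] += 1
    return hits


if __name__ == "__main__":
    for user, count in flag_companion_traffic("proxy_export.csv").most_common():
        print(f"{user}: {count} requests to companion-app domains")
```

The point of such a report is not to police employees' private lives, but to show security teams where unapproved AI services are touching corporate devices so that policy conversations can follow.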

Further compounding these risks are the realities of app development, including the use of human moderators who review conversations for training purposes. Additionally, some apps make it easy for users to accidentally share conversations via public links. In the event of a data breach or legal investigation, chat logs may also become discoverable, and organisations may be compelled to disclose exposed data. For an executive discussing a confidential project with an AI companion, the risk of inadvertently exposing sensitive organisational data is very real.

Social Engineering Also on the Rise

The malicious use of this technology is already happening, with social engineering being automated at scale. Scammers are moving beyond generic “Dear Sir/Madam” emails to tools like “LoveGPT,” which help them craft psychologically manipulative messages designed to create emotional dependency in victims. The technology empowers malicious actors to convincingly mirror human intimacy, moving from traditional romance scams to systematic, automated manipulation.

Ultimately, the defence against these risks must remain human. Experts advise that if an interaction with a chatbot begins to feel emotionally substitutive or hard to step away from, it is a signal to pause and reach out to a trusted person. While technology is a part of modern life, it is crucial to strengthen digital mindfulness skills to recognise manipulation and induced dependency. When it comes to fundamental human experiences like loneliness, vulnerability, and love, the safest defence remains resolutely human.
