
Love bug: the limits of AI companionship

Op-eds and essays


Can companion chatbots actually serve as a replacement or guide to human connection?

Written by Rae Witte

Illustrations by Twisha Patni

In 2021, only 1 in 5 men reported receiving emotional support from a friend in the past week, compared to 2 in 5 women. Only half of these men also reported feeling satisfied in their friendships. And in 2023, the Surgeon General released an advisory to raise awareness of the nationwide “loneliness epidemic” and its dire implications on our physical health.

Type “girlfriend” into OpenAI’s new GPT Store and the first three options all focus on emotional support: “a virtual girlfriend for casual supportive chats,” a “friendly supportive virtual girlfriend offering companionship and conversation,” and “your virtual companion for engaging chats and emotional support.”

With this clear deficit in emotional support, it’s no surprise that Americans are leveraging chatbots like Replika, DreamGF, and Muah AI for companionship, friendship, and romance — especially following the months of isolation throughout the COVID-19 pandemic. Sharing intimate moments largely over Zoom, if at all, during that time has had inevitable long-term effects on social norms, health, and wellness that we have barely begun to acknowledge or understand. Connecting with a chatbot is a low-stakes, low-investment interaction, but is there enough evidence to believe that it’s an acceptable substitute for connection with real people?

Michelle Mouhtis, a licensed therapist and relationship coach, believes it depends on the exact context in which people use AI. While she thinks it can be useful for elderly and homebound communities — or even for prompting someone through a cognitive behavioral therapy ABC model while they wait for their therapist to get back to them — using it as a replacement for or guide to human connection could prove detrimental.

And Sherry Turkle, American sociologist, author, researcher, and MIT professor, says: “Some chatbot users say that a connection to their avatar is ‘practice’ for human connection. But when we talk about it, the conversation turns to how the chatbot is a release from people, a place for more validation and simpler exchanges.

“There is nothing about chatbot interactions that is ‘training’ for the complex and nuanced way that people communicate in relationships.”


According to Mark Minevich of Going Global Ventures, business leaders from every industry — in banking, travel, healthcare, and more — will look to adopt chatbots in 2024.

But much of this enthusiasm around AI comes from people who are either uninformed or who see a benefit in the technology's potential to replace human labor: a replacement for artists, journalists, call center employees, and even the workforce at large. Yet this overlooks everything about humans that isn’t quantifiable. As it stands, AI cannot be more than a supplement or tool of support, and it is something that needs to be closely monitored across all industries.

Using call centers as an example, Minevich says, “If you're emotional and you want to describe your problem — let’s say you lost a job or there's an issue with your bank account and you can't get to it — you cannot have that discussion [with a chatbot] in a way you could with humans.”

We’ve all been there, pressing ‘0’ through the customer service menu, trying to reach a human who can solve the problem at hand. Now, we stand to reach a chatbot or, as Minevich refers to them, a cognitive agent.

"There is no relationship with a chatbot. There is no empathy. When you turn away from the screen, the chatbot doesn’t care if you cook dinner or [die]."

“They can try to give you accurate information and try to be efficient, but I think humans are looking for empathy,” Minevich continues. “You don't know how those systems were designed, how they take your information, and how they score you. [They] are not necessarily dealing with your emotions as they're very bad at [it]. These cognitive agents have very limited — what they call in AI language — sentiment analysis.”

Truthfully, there are also a large number of humans who struggle with sentiment analysis (the process of analyzing digital text to ascertain whether the emotional tone is positive, negative, or neutral). And, of course, not all emotions fit neatly into just one of three boxes. Arguably, most emotions don’t.
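In its crudest form, sentiment analysis can be sketched as a lexicon lookup. The toy example below (the word lists are invented for illustration, not drawn from any production system) shows how mixed emotions get flattened into one of three buckets:

```python
# Toy lexicon-based sentiment classifier. Real systems use trained models,
# but the three-bucket output is the same idea the article describes.
POSITIVE = {"love", "great", "helpful", "happy", "supportive"}
NEGATIVE = {"lost", "angry", "frustrated", "alone", "sad"}

def classify_sentiment(text: str) -> str:
    """Score words against the lexicons and bucket the total."""
    words = [w.strip(".,!?'\"") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# "happy" (+1) vs. "frustrated" and "sad" (-2): a genuinely mixed feeling
# collapses into a single label.
print(classify_sentiment("I'm happy but also frustrated and sad."))  # negative
```

A sentence holding joy and grief at once gets exactly one label — which is the limitation the essay is pointing at.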

“We expect those agents to give us information that is accurate, transparent, and fair. And that's a lot to assume because the agents are written by data scientists — usually white males, usually having their own pathology — and they're not necessarily fair, accurate, or transparent. So we have to be very careful,” he adds.


If the current chatbots don’t even have the emotional capacity to be empathetic customer service agents, one has to assume — despite the countless Reddit posts debating the best AI girlfriend or the hours logged by AI girlfriend users — it’s currently impossible for them to replicate healthy, human relationships.

Perhaps what these users experience feels good only because they’re paying to manufacture the image of something they desire and haven’t yet been able to accomplish with another person.

“It justifies continual flight from people, conversation, learning the practices that make democracy work. It normalizes calling something that is not a relationship, a relationship. There is no relationship with a chatbot. There is no empathy,” Turkle says. “When you turn away from the screen, the chatbot doesn’t care if you cook dinner or [die].”

“Yet we are learning to call what happens between people and the bot a relationship. We are learning to call what it puts out ‘empathy.’”

These “relationships” are, at their core, similar to unhealthy human-to-human relationships, and are built from an underdeveloped understanding of emotions. “It's a human need to feel seen or heard. That is one of our core human needs,” Mouhtis says. “A chatbot reframes what you’ve shared with them and says it back. It ‘remembers’ everything you say and conflates compliance with ‘emotional support.’

“When you feel someone who is important to you, like a parent or a sibling or a close friend, isn't empathizing with you, seeing or hearing you in your pain and struggle, and they jump right to problem-solving, or they jump to one-upping you or making it about them, it can make you feel even more alone than when you picked up the phone to try and feel less alone,” Mouhtis explains.

While the chatbot may talk the talk of a human, it's always going to feel artificial and forced (not all that different from someone spinning therapy-speak into something cold and weaponized). Participating in a relationship where core emotional needs can’t be functionally met, and which is unequivocally one-way, can lead to a dependency or addiction-like response in the person who continues to engage with an AI.

It’s also very common to try to seek — and fail to find — connection with the people we care about. “People have narcissistic parents, and they're continuing to interact with their parent hoping for an empathetic response, knowing well and good they're not going to get that response,” Mouhtis says. “It's human nature to want to connect and get the thing that they really need.”

Chatbots will, of course, keep engaging with no regard for or awareness of users' well-being because that is what they are built to do.


In the United States, AI regulations will largely be handled state-by-state and focus primarily on corporate and federal use cases. In the EU, by contrast, regulation takes a very different approach. “The European Act is trying to find out what is risky in an AI system and the risks not only for the business, but for consumers,” Minevich says.

“The extensive model for clearance could be a product itself,” he notes. However, the system is so rigorous that he also believes it’s limiting innovation and competition.

Even so, the mostly hands-off approach taken by the United States is a recipe for disaster.

“There are two levels of impact," Turkle says on the dysfunctional social implications of this laissez-faire approach. "Social media algorithms try to secure engagement by getting people angry and then keeping them talking to equally enraged people. That is not teaching skills of listening, empathy, conversation, [or] respect. It is not teaching the skills you need for intimacy and connection.

"And then, AI comes in and offers pretend intimacy — programs that don’t just say ‘I intellectually understand,’ but that profess love and care. What is striking is that over time, and as our culture gets more used to talking to technology, pretend empathy becomes empathy enough.”

Despite its store listing numerous AI girlfriends, OpenAI writes in the store’s usage policy: “We also don’t allow GPTs dedicated to fostering romantic companionship or performing regulated activities.”

Going as far as to forge a one-sided romantic relationship with a bot is comparable to entering and remaining in an unhealthy relationship. “So, technology contributes to a problem that then, technology says it can cure,” Turkle says. That sounds a lot like an empty promise intended to keep users hooked. Do you really need to be treated like that by a machine?
