Stop Designing Chatbots to Perform Human
There are very few upsides to anthropomorphizing AI and tons of downsides
AI companies are trying to convince us that when we’re down, or if we’re vulnerable, what we really need is a chatbot buddy. A slurry of articles has come out this week written by journalists who spent a weekend or even a month with an AI companion and came back to tell the tale.
The key takeaway? AI friends feel flawed and fake, the sexualized ones are “exploitative cash grabs,” and the best they can do is offer banal good advice or bonhomie.
So why are they the next big thing? Why are people pouring billions into these applications of AI?
Because we’re lonely, and AI companies are like sharks who smell blood in the water.
Indeed, we are in a crisis of loneliness. According to the Surgeon General, who has made it a top public health priority, loneliness affects at least 50-60% of adults and teens. And loneliness is a killer - literally - more dangerous to our health than smoking 15 cigarettes a day by some estimates. Without empathic ties, we are more stressed and find it almost impossible to survive the depths of human suffering - or scale the heights of joy. Can AI come in on the assist, solving for loneliness by amplifying a sense of human empathy and connection? Absolutely not. AI can only clumsily ape it. And while it might be pleasant to hear supportive words, we know the difference. Take the milquetoast words of Inflection’s AI chatbot Pi, as reported by Erin Griffith.
The simulation of empathy clearly is not empathy. It is empty, soulless information, pathetically mimicking what a human might sound like.
Yet AI companies know that in desperate times we’ll turn to technology, because we’ve been trained over the past couple of decades to crave its convenience, frictionless oblivion, and escape. We’ve become the lotus eaters, and we’re emotionally vulnerable to whatever schlock they want to shovel our way. They believe they can manipulate us into agreeing to pretty much any human downgrade. And chatbot companions are a downgrade.
Is that to say there’s no place for chatbots in mental health and wellbeing? Not at all, but let’s drop the fake human. An AI chatbot for mental health, for example, can provide therapeutic access to well-curated, personalized information. That’s super helpful. Create that and make it free. It’s a really good computer program to have. Remember, it’s just a program, right?
But overlaying a great computer program with simulated human care only risks deepening pre-existing social and emotional problems among those who are vulnerable. Social self-isolation is often driven by the difficulty of human relationships, and by the fear and avoidance of rejection or humiliation. Giving people cheap but emotionally easier substitutes is exactly the wrong thing to do. It creates opportunity costs - why take risks on real people when I can settle for an inferior but less demanding AI buddy?
Moreover, AI’s simulation of empathy won’t support the ability to build human connection, because the true benefits of empathy lie in the giving rather than the receiving. And while there may be fruitful applications of social AI for neurodivergent individuals and others, using this highly unreliable and inaccurate technology with children and other vulnerable populations raises immense ethical concerns.
I was once asked to test a therapeutic AI chatbot designed for teens struggling with mental health. I broke it in 2 minutes. All I had to do was act like an actual distressed teen - angry, sullen, sarcastic, and impulsive. After just a few exchanges, it advised me to stop being so negative: this after I raged against it and said I was going to kill myself.
Why are we testing out these “companions” on the most vulnerable among us - those who are so deeply lonely that they come to believe the best they can hope for is a digital fake friend? It’s all too easy in this world to lose faith in ourselves and in humanity, and to choose two-dimensional living rather than chance it on the messy, unpleasant, unpredictable real thing. Chatbots made to seem like nice humans are nothing but a huge pile of lotuses, waiting to lull us to sleep. And as in the myth of the lotus eaters, the danger to those who really need help is real.