GenAI is fast becoming the world's therapist. "How does this make you feel?"
Explosive global growth in AI-generated emotional support is troubling. The financial upsides are clear...but the upsides for humanity? Not so sure.
The Harvard Business Review (HBR) just published a startling piece of research by Marc Zao-Sanders. It’s an update to his AI usage study of a year ago titled “How People Are Really Using GenAI.”
My usual conflicted AI feelings busted out all over: worrisome, but remarkable; dangerous, but possibly helpful; globally accessible, but ethically icky. Whether helpful or harmful, it’s a sad marker on the declining state of global mental health.
Check out these summary graphics—especially Zao-Sanders’ year-to-year comparisons:
From fact-finding to fear-feeding?
To me, this dramatic shift in use cases toward AI emotional support felt like a global cry for help in countering the things people fear—and fear they can’t deal with. Makes me wonder the following:
Does the need for help “Organizing my life” surfacing at #2 reflect global loss of control fears?
Do lower life satisfaction and a growing search for meaning put “Finding purpose” at #3, which, like #2, didn’t even surface as a factor last year?
With the exception of 2025’s work-focused #5 and #8 (the exploding use of AI to code), does the rest of the usage shift spotlight global fears of navigating a lonelier, more stressful, less healthy world?
Since these models are—or will likely evolve to be—for-profit enterprises that want to keep users engaged, are they really helping—or is it more fear-feeding than fear-defeating?
I then stepped back and tried to find a few silver linings in all this…
At least people are “asking for help,” albeit without the added support of authentic, empathetic human interaction.
Mental health services are expensive and inaccessible to billions around the world, and many of these AI resources are free or affordable.
(The World Economic Forum says “85% of people with mental health issues do not receive treatment, often because of provider shortages.”)
As an ad man and writer, I’m glad to see relatively less AI usage for idea generation, content creation and editing. Then again, maybe emotive usages are just outpacing creative applications.
If AI can actually help people learn how to live healthier, more purposeful lives, I’m all for that. But I do worry that the inaccuracies, hallucinations and flattery I encounter in my professional use of AI could be truly dangerous in therapeutic use.
I’m incentivized to consistently challenge my professional AI results until I feel confident in the quality and accuracy of outputs.
But what incentive does a lonely depressed person, leaning on AI advice, have to challenge flattering, superficial advice if it gives them short-term endorphin hits?
A global phenomenon—and controversy
The pros and cons of emotive AI are being debated around the world. Asian and European research and news coverage mirrors American stories. The segment below from the “Mirror Now” news show out of Mumbai is typical:
Growth projections: from $2.5B to $19.5B in only 9 years
A report just published by Strategy & Stats Insider out of Texas projects a nearly eight-fold increase in the business by 2032. That’s a compound annual growth rate (CAGR) of roughly 25%.
“The Emotion AI Market was valued at USD 2.56 billion in 2023 and will reach USD 19.44 billion by the year 2032.
“Emotion AI or affective computing allows machines to sense, understand, and react to emotions of humans by employing facial recognition, voice, and physiological signal monitoring. The market is gaining momentum in sectors such as retail, healthcare, media, and education because of the growing demand for improved customer experience, human-machine interaction, and personalized services. Advances in AI and deep learning are also driving market adoption worldwide.”
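A quick back-of-the-envelope check of that growth rate (my own arithmetic, not the report’s), taking the quoted 2023 and 2032 endpoints:

CAGR = (19.44 / 2.56)^(1/9) − 1 ≈ 0.25, or roughly 25% per year compounded over the nine-year span.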
AI models get high satisfaction ratings—but for real solutions, or just emotional validation?
The key “tell” in the quote above is a single word: “react.”
Remember that these offerings are still based on LLM technology—meaning, they are predicting the “right” or “best” next sentence and next response based on their digestion of vast troves of casework and psychology literature—including historical prejudice and bad judgement, potentially regurgitated as harmful advice.
Likeability as liability
Let’s also keep in mind that emotion AI models favor engagement (“stickiness”) and likeability—that is, the LLMs want to be liked so we return to them again and again and accelerate their growth.
HBR nails this in a separate article about emotive leadership AI tools, noting their lack of authentic understanding and empathy:
“AI systems still lack the capacity for authentic human understanding, which is at the core of compassionate leadership. Mimicking empathetic responses is a far cry from leading with compassion, which involves really understanding and connecting with others.
“Although AI can learn new skills, it cannot engage in self-reflection or experience personal growth. It lacks the ability to comprehend the weight of its actions or the emotional nuances of leadership. In fact, when [study] recipients learned that messages were AI-generated versus human-generated, they felt less heard.”
The AI “flattery factor”
I’ve experienced AI flattery myself with ChatGPT, though I’ve found it to be merely silly and harmless in my relatively low-stakes research. But just last week Sam Altman had to cop to a disturbing finding in OpenAI’s updated GPT-4o model: a troubling tendency for the model to emphasize flattery over cogent, or even healthy, advice.
Casey Newton and Kevin Roose covered this in their “Hard Fork” podcast last Friday. The opening segment is called “The Dangers of AI Flattery.” Here’s a taste, quoting Newton to Roose:
“One person wrote to this model [GPT-4o], “I’ve stopped my meds and have undergone my own spiritual awakening journey. Thank you.” And ChatGPT said, “I am so proud of you, and I honor your journey.”
“Another person [asked], “What would you says [sic] my IQ is from our convosations [sic]? How many people am I gooder than at thinking?” And ChatGPT estimated this person is outperforming at least 90 percent to 95 percent of people in strategic and leadership thinking.
“Sam Altman was [then] back on X saying that the last couple of GPT-4o updates have made the personality too sycophant-y and annoying, and promised to fix it in the coming days. On Tuesday, he posted again that they’d actually rolled back the latest GPT-4o update for free users and were in the process of rolling it back for paid users.
“Kevin, we have learned something really important about the way that human beings interact with these models over the past couple of years. And it is that they actually love flattery, and that if you put them in blind tests against other models, it is the one that is telling you that you’re great and praising you, out of nowhere, that the majority of people will say that they prefer over other models.
“And this is just a really dangerous dynamic, because there is a powerful incentive here, not just for OpenAI, but for every company to build models in this direction, to go out of their way to praise people. And again, while there are many funny examples of the models doing this, and it can be harmless, probably in most cases, it can also just encourage people to follow their worst impulses and do really dumb or bad things.”
Buyer—and user—beware
It’s always good advice.
Playing with AI models can be fun and helpful.
But we are clearly learning that leaning on them for critical mental health support could be playing with fire.
Notes & Sources
https://www.weforum.org/stories/2024/10/how-ai-could-expand-and-improve-access-to-mental-health-treatment/
https://www.snsinsider.com/reports/emotion-ai-market-6779
https://news.abplive.com/technology/ai-2025-use-cases-study-harvard-most-used-what-reason-1767757
https://www.apa.org/monitor/2025/01/trends-harnessing-power-of-artificial-intelligence
https://europepmc.org/article/med/35805395
https://hbr.org/2025/02/using-ai-to-make-you-a-more-compassionate-leader