In a world where therapy is expensive, waitlists are long, and emotional burdens feel heavier than ever, a new kind of confidant has quietly entered the chat: artificial intelligence.

For a growing number of young people, AI chatbots like ChatGPT have become digital therapists, emotional support systems, and late-night sounding boards. These bots are always available, never judge, and don’t charge $150 an hour. But here’s the problem: unlike your human therapist, an AI chatbot isn’t bound by confidentiality rules like doctor-patient privilege. The details you pour out to your favourite chatbot? They’re not legally protected. Not yet.

This dissonance, between emotional vulnerability and legal ambiguity, is raising serious questions. Especially after OpenAI’s CEO, Sam Altman, in a recent interview, said something that left people uneasy: “I think [it] makes sense to want the privacy clarity before you use [ChatGPT] a lot.”

That’s a pretty loaded warning coming from the man who helped build the tool.

The Digital Shoulder to Cry On

Maybe you’ve told ChatGPT about your anxiety before a job interview, shared your heartbreak after a breakup, or just vented about how overwhelmed life feels. If you’re reading this and nodding in recognition, you’re not alone.

Whether they’re in Lagos or Los Angeles, Gen Z and Millennials are leaning on AI tools for more than productivity hacks. A quick search on TikTok reveals hundreds of videos with captions like “I told ChatGPT I was feeling depressed and here’s what it said…” or “Why my AI friend is better than my ex.” Others have gone further, custom-training bots to mimic dead loved ones or act as always-available companions.

A 2024 survey by Deloitte found that nearly 35% of Gen Z users have used generative AI to discuss mental health concerns or emotional challenges. That number is expected to rise as chatbot interfaces get more sophisticated and human-like in tone.

For people who feel isolated or misunderstood in real life, or those in cultures where therapy is still stigmatised, talking to a machine that “gets it” without judgment can feel like a lifeline.

But what if that digital therapist is more like a nosy stranger than a trusted confidant?

There Are No HIPAA Laws For Chatbots

In most countries, your therapist is bound by confidentiality laws. What you say in a session is protected by regulations like HIPAA (in the US) or GDPR (in Europe). Even your doctor or lawyer can’t just blurt out your private issues.

AI, on the other hand? No such protection.

When Theo Von, host of the This Past Weekend podcast, raised this concern in a conversation with Sam Altman, the OpenAI CEO agreed: “I think that’s very screwed up.”

He’s right.

ChatGPT may feel like your virtual therapist, but it’s still a product: a machine learning model trained on large datasets and managed by a company. While OpenAI offers settings that let users keep their chats out of model training, the truth is your conversations could still be accessed by human reviewers under certain conditions. And there’s no ironclad guarantee your sensitive disclosures are safe forever.

Imagine sharing details of your mental health struggles, past trauma, or even suicidal thoughts with a chatbot, only to realise those words could technically be read, analysed, or stored.

Why AI Becomes A Confidant In The First Place

Still, it’s easy to see why people do it.

The average cost of a therapy session in Nigeria, for example, ranges between ₦20,000 and ₦50,000, well out of reach for many young people, especially those dealing with joblessness, school stress, or societal pressure.

In the U.S., sessions can be even pricier. Combine that with long waiting lists and rising burnout, and it’s understandable that AI feels like the next best thing. It’s fast, free, and feels responsive.

And for some, it’s safer emotionally.

“There’s no fear of judgment,” says Kelechi, a 24-year-old in Enugu who’s used ChatGPT to talk through her struggles with impostor syndrome. “I grew up in a home where feelings weren’t allowed, so having this non-human thing just listen, even if it’s not perfect, felt revolutionary.”

AI doesn’t interrupt. It doesn’t try to fix you. And that illusion of empathy, even if programmed, can be powerful.

But There Are Limits And Risks

Chatbots like ChatGPT are not therapists. They can simulate compassion, offer decent suggestions, and even quote cognitive behavioural therapy (CBT) models. But they don’t have human intuition. They lack context, nuance, and the ability to hold space for pain in the way a real counsellor can.

They can also mess up badly.

AI models can misunderstand tone, offer dangerous advice, or reinforce harmful stereotypes. There have been reported instances of bots giving questionable responses to people in distress or minimising complex emotional issues.

Then there’s the issue of addiction.

As more users form emotional bonds with AI, the risk of dependency grows. People may begin avoiding real human connection, opting instead to vent to a bot that doesn’t challenge them. That might feel comforting short-term, but in the long run, it could stunt emotional growth and deepen isolation.

What This Means for the Future

This moment, where AI and emotional health intersect, is uncharted territory. Tech companies aren’t therapists, yet their products are becoming emotional lifelines. Regulators haven’t caught up. Users don’t fully understand the risks.

And yet, the need remains.

Mental health care, especially for young Africans navigating societal expectations, economic pressure, and trauma, is not optional. It’s urgent. If AI continues to fill the gap left by inaccessible therapy, then the conversation must shift from “Should people be doing this?” to “How can we make it safer?”

This could mean stronger privacy laws specifically tailored to AI chatbots. It could mean transparency from tech companies about data storage and use. It could also mean developing AI companions that are ethically built for emotional support, with clear disclaimers, human oversight, and built-in safeguards.

Until Then, What Should Users Do?

If you’re one of the many people who talk to ChatGPT or similar bots for support, don’t panic, but do get intentional.

  • Avoid sharing identifiable personal information or anything that could harm you if leaked.
  • Remember that AI is a tool, not a therapist. It can help you think through things, but it cannot replace real human care.
  • If you’re struggling with serious mental health issues, seek professional help. Many platforms offer subsidised rates for students or remote sessions.
  • Use AI as a stepping stone, not a substitute. Let it guide your thinking, not govern it.

The future of therapy might include AI, but it can’t be AI alone. Emotional support is deeply human work. And while machines may mimic empathy, real healing still happens in spaces where vulnerability is honoured—and protected.

