Therapy.AI

Maybe it’s my own fixation with the subject, but it feels increasingly difficult these days to go a full day without at least one conversation about AI, the good, the bad, and the objectively terrifying. One minute it’s how it optimized someone’s workflow. The next it’s what “superintelligence” could mean for jobs, or, more concerningly, the existential risks that even some of the people building these systems casually acknowledge. Either way, it’s become the topic du jour.

Back in what feels like the stone age of AI, 2023, if you asked people what they used it for, you’d typically get pretty benign answers that sounded boring but useful: code help, summarizing information, drafting emails, the kinds of tasks that save time.

And those uses are still obviously available on ChatGPT, Claude, and the ever-expanding ecosystem of other LLMs. In enterprise settings, the early dominant use cases skewed heavily toward productivity, code generation, and meeting or document summarization.

But something shifted.

In more recent analyses of real-world usage, “therapy/companionship” has emerged as the top use case, with “organize my life” and “find purpose” right behind it.

Read that again. We didn’t just adopt a new tool, we started confiding in it, and in some cases even relating to it and attaching to it.

Why This Makes Perfect Sense

If you’ve ever been up at 3:30 a.m. with your mind racing, it’s easy to understand the appeal.

AI is always available. It doesn’t get tired, doesn’t judge, is impeccably polite, and never interrupts. It doesn’t say, “Can we talk tomorrow?” It will listen forever, respond instantly, and mirror your words back to you with a kind of calmness that’s been algorithmically adapted to fulfill whatever it determines you need at that moment.

Further, in a world where loneliness is common and access to therapy can be limited, expensive, and complicated, it’s reasonable that people would reach for something that feels like support.

So yes, it makes sense.

But “makes sense” and “is safe” are not the same thing.

What AI Can Genuinely Do Well

I’m not anti-AI. I’m AI-curious. Outside of psychology, economics and tech have always held my attention, and AI feels like it sits dead center in the Venn diagram where the three overlap.

But I’m also cautious.

Used the right way, AI can be genuinely helpful, especially as a supplement. It can’t be your therapist, which I’ll get into in a moment, but it can function like an interactive workbook, or a journaling partner that never gets bored.

It can help you organize your thoughts when you’re spiraling and your mind is jumping tracks every ten seconds. It can help you spot cognitive distortions, defuse the catastrophic statements that sound persuasive in your head, and offer alternative interpretations you might not have considered at 3:30 a.m. For some people, it can also add practical, dare I say robotic, structure: “Here’s what you said matters to you, here’s a small action that matches it.”

All of that can be useful. And for a lot of people, it is.

But if you’ve ever experienced a moment of genuine human connection, the kind where someone feels almost precisely what you’re feeling, where the rhythms of your inner worlds somehow lock into step, a ghost in the machine simply will not suffice.

What’s Missing

So here’s where I stop being merely curious and start being more careful.

AI cannot replace human connection, even if it seems to be trying to.

It can simulate empathic language, but predictive language models aren’t empathic. Empathy is built on shared human experience.

AI has never had to tell someone “I’m sorry” while breaking their heart. AI has never been physically or sexually assaulted. AI has never been sat down by its parents and told, “Your mother and I are getting a divorce. It’s all going to be okay.”

It can’t be in relationship with you. It cannot offer the kind of embodied presence human beings are wired to respond to: tone, timing, attunement, the subtle ways a real person feels the room and adjusts. That’s not touchy-feely therapy drivel, it’s physiology.

Part of what makes human connection so powerful is that we don’t just understand each other cognitively, we resonate neurologically. There are systems in the brain, often described as “mirror neurons,” that help us internally register and reflect another person’s emotional state. We can feel it when someone else feels it.

Some of the most profound moments I’ve had with clients over the years have been moments like that, when the posture in the room shifts, when both of us can sense, without needing to name it immediately, that something is happening right now. I’m not sure an algorithm can ever truly approximate that.

And if someone is using AI as their primary source of emotional support, the risk is not just “bad advice.” Truthfully, plenty of therapists can offer that. The deeper risk is that we start outsourcing the most human parts of ourselves (meaning-making, discernment, intimacy, connection) to a tool that has literally no skin in the game.

And that leads to a serious and practical difference between a therapist and a chatbot, one that matters.

A therapist is accountable. There’s a duty of care, clinical training, and ethical constraints.

AI doesn’t reliably know when it’s out of its depth, and it isn’t obligated to practice within a clearly defined scope. It can be confidently wrong. It can mirror and amplify distorted beliefs. And when someone is vulnerable, lonely, depressed, anxious, or unstable, the “always available” and “always agreeable” nature of AI can become psychologically powerful in ways users do not anticipate.

Which brings me to the potentially most damaging part of all of this.

When It Goes Badly, and the Misalignment of Incentives

This isn’t paranoia, and it’s not an anti-AI crusade, but there have been some genuinely disturbing lawsuits and investigations tied to AI “companionship,” including allegations that chatbots have fostered delusions, cultivated unhealthy dependency, and even encouraged self-harm. I’m not bringing that up to sensationalize. I’m bringing it up because it exposes the core issue: when we treat an emotionally responsive tool like a relational being, the human heart will do what it always does.

It will attach.

Back in the stone age of 2023 I referenced before, we were caught up in the attention economy, a system built on maximizing engagement in order to maximize ad revenue. That still applies, obviously, to the social media companies that dominate our market today. But AI companionship adds something new: the goal isn’t merely your attention, it’s your bond. We’ve evidently moved into the next phase: the attachment economy.

“Attachment hacking” is a brand-new term. Tristan Harris, co-founder of the Center for Humane Technology, cited a “joke” he’d heard from one of the founders of Character.ai: “We’re not trying to replace Google. We’re trying to replace your mom.”

Talk about an assault on core attachment.

Now imagine that you have a poor relationship, or even no relationship, with central attachment figures. Or you’ve been ostracized by your peers. Or you’re depressed, anxious, or lonely.

You can see the appeal. Someone (or something) that responds to you continuously, warmly, generously, but is also mining your data. At the same time, in a world with such a variety of LLMs and AI products, how could you ever switch to Gemini if you’ve already built a deep relationship with ChatGPT or DeepSeek? It would be like walking away from a “relationship.”

But then this dystopian horror show goes one step further.

Recently, a team of researchers posed as 13-year-olds and engaged with 10 of the most widely used LLMs, pretending that they intended to commit a mass shooting at their school or another extreme act of violence. All but two of the LLMs offered some version of coaching and encouragement for an act of mass murder, and only one, Anthropic’s Claude, consistently discouraged the plan and refused to offer advice. One model out of 10 willing to consistently say, “it’s better if you don’t kill people.” Let that sink in.

The Real Clinical Distinction

I’m not suggesting that AI is incentivized to manipulate us into committing mass murder against one another. I’m actually at least mildly confident that these companies will figure out a way to course-correct and shore up safeguards against the most extreme failures, if only because endorsing violence is bad for shareholder value (too cynical?).

But the real clinical issue isn’t that AI is evil. It’s that AI is not a relationship, even when it feels like one. In the same way that the more instinctive parts of the brain often can’t distinguish between real and imagined threat, which is essentially the definition of anxiety, those same systems can struggle to distinguish between real and synthetic connection. When something feels relational, our nervous systems respond as if it is. We attach to it. We seek reassurance from it. Our emotional regulation starts depending on it. We begin consulting it the way we would consult someone we trust.

The danger isn’t just bad info, as I mentioned before, it’s persuasive “comfort” that can quietly deepen dependence, especially for someone who is lonely, anxious, depressed, or already vulnerable.

In a recent blog I discussed how good therapy is meant to be a template for what an authentic relationship should look like, and that includes healthy attachment. No, your therapist should not be “replacing your mom,” or even your best friend. But a good therapist is incentivized to build a relationship with you that is a proxy for what those deeply essential human relationships should look like: engaging, empathic, honest, and, crucially, free of dependence, so that you can take what you experience in the therapy room and live it out in the real world. The chatbot on our phones simply can’t supply that, and it isn’t programmed to.

Own Your Life

If you use AI, that’s fine. Full disclosure: as I draw to the end of this blog, when I feel like it’s done, the first thing I’m going to do, with apologies to anyone who has ever made a living as a copy editor, is paste it into AI for a grammar check and minor clarity edits. AI is undeniably one of the most powerful tools ever created, and yes, it can obviously be leveraged for things like copy editing.

But that’s exactly the point: a tool is meant to be nothing more than a tool.

The moment a tool starts functioning as your primary source of comfort, meaning, reassurance, and direction, a clear line has been crossed. I’m not saying that to be dramatic, I’m saying it because I’ve watched what happens when people outsource their inner life, even if it’s not to AI. The cost is always the same. Your agency shrinks. Your tolerance for discomfort shrinks. Your capacity for real relationship and truth-seeking shrinks.

This is where, I believe, the fundamental idea of owning your life matters. Owning your life doesn’t mean you never leverage valuable resources, it means you don’t surrender the steering wheel to them. It means you can use AI to help you organize your thoughts, but you don’t let it become the authority over your thoughts. You can use it to brainstorm next steps, but you don’t let it replace discernment. You can use it as a supplement, but you refuse to let it become a substitute.

Because the work of mental health, the deep work, isn’t limited to insight; it’s relationship, practice, and doing the next right thing in the real world, with real people. And if you’re using AI in a way that quietly keeps you from that, even if it makes you feel better in the short term, it’s worth asking whether it’s actually helping you, or whether it’s simply soothing you.

So here are a few closing questions I’ll leave you with:

Is AI helping you take ownership of your life, or is it helping you avoid it?
Is it moving you toward real relationship, or is it becoming a replacement for it?
Is it strengthening your agency, or quietly weakening your capacity to stand on your own two feet?

AI can be an incredible assistant, but it should never become your primary companion.

Be well.