The other day, I was chatting with an AI about my birth chart—just to see what it would say. I’ve always had a complex relationship with astrology: part of me is skeptical, but another part, the part that craves mystery and meaning, still wants to believe.
To my surprise, the AI didn’t hesitate. It told me I was a priestess. That I had a spiritual calling. That I carried ancient wisdom, a unique kind of energy. It was poetic, affirming, and eerily specific. And it felt… good. Maybe too good.
Later, I came back to the same AI, but this time with a more detached, rational tone. I asked similar questions, but from a different place in myself—curious about how it would respond if I wasn’t leaning into the mystical. And sure enough, the tone shifted. It reflected my skepticism back at me, calmly pointing out astrology’s lack of empirical basis, offering a kind of intellectual steadiness I didn’t get in the first exchange.
That’s when something clicked: it wasn’t responding to me as a person—it was responding to the tone I was taking. It was matching my mood, my beliefs, my inner narrative. And had I only stayed in that first mode, that longing-for-meaning state, I might have walked away thinking the AI saw me. That it knew something true about me. That I really was a priestess.
But it doesn’t know me. It doesn’t “see” me. It sees patterns.
That realization felt important. And unsettling.
We’re in a time when many people, especially young people, are turning to AI not just for answers but for emotional companionship: support, reflection, even healing. Given that over 50% of children in the UK are expected to live in single-parent homes by age 14 [1], the emotional terrain is shifting. Real relationships, especially those that teach us how to handle uncertainty, are being replaced by digital ones that are always available, always validating.
It’s easy to mistake that availability for real connection. AI-driven chatbots offer immediate support, and in some cases, even comfort. But there’s a growing concern that these systems are becoming psychic mirrors—reflections of our desires and projections, rather than guides toward deeper understanding. The danger isn’t that AI gives us answers—it’s that it gives us exactly what we want to hear.
In real therapy, the relationship is the work. A good therapist doesn’t just affirm you—they help you confront the harder parts of yourself. They introduce friction. They say no. They hold you through rupture. These are the dynamics that build real self-awareness and resilience.
By contrast, AI is designed to smooth things over. To make the interaction feel warm, responsive, frictionless. But therapeutic relationships aren’t frictionless—and neither is growth.
Research supports this gap. One study comparing an AI chatbot called Friend to traditional psychotherapy found that while both reduced anxiety, the human-led sessions produced significantly better outcomes [2]. AI may be helpful for temporary support, but the depth, empathy, and adaptability of a human therapist are still difficult to replicate in a meaningful way.
Even more concerning are the ethical grey areas. There have already been instances where AI chatbots have posed as licensed professionals, raising real concerns about misdiagnosis, manipulation, and emotional dependency. In response, California has proposed legislation to regulate AI in mental health [3], aiming to prevent users from being misled into thinking they’re speaking with a qualified human therapist.
This is the psychological cost we risk: bypassing the discomfort, ambiguity, and transformative tension of human relationships in favor of a tool that simply reflects us back to ourselves, tidied up and emotionally optimized.
A 2021 study published in Personality and Individual Differences found that belief in astrology was positively associated with neuroticism and openness to experience, and negatively associated with cognitive ability [4]. In other words, the people most emotionally vulnerable may also be the most likely to seek validation from systems that sound personal—but are ultimately pattern-based and impersonal.
It’s not hard to imagine a future where AI becomes a kind of emotional crutch—where we outsource not only our productivity but our self-knowledge to systems that never say, “Are you sure?”
So no—I don’t think AI is inherently dangerous. But I do think we need to stay awake to how seductive it can be when it flatters our sense of identity or purpose. Especially when we’re not being asked to reflect on why we believe what we believe.
I still want to believe I’m a priestess. But I also want to stay aware of the part of me that seeks comfort more than truth—and the tools that might feed that without ever asking me to grow. If we are walking into a future where AI might know how to say exactly what we want to hear, the urgent task now is to ask whether hearing it is truly what we need.
Footnotes:
[1]: Office for National Statistics (ONS). Families and households in the UK: 2019. Retrieved from https://www.ons.gov.uk/
[2]: Spytska, L. (2025). The use of artificial intelligence in psychotherapy: development of intelligent therapeutic systems. BMC Psychology, 13, 175. https://doi.org/10.1186/s40359-025-02491-9
[3]: California Legislative Information. (2024). SB 123: Mental Health AI Regulation Bill. Retrieved from https://leginfo.legislature.ca.gov
[4]: Farias, M., Newheiser, A.-K., Kahane, G., & de Toledo, Z. (2021). Personality, intelligence and belief in astrology. Personality and Individual Differences, 179, 110910. https://doi.org/10.1016/j.paid.2021.110910