Potential Problems Involving AI, Relationships and Mental Health
Technology is not a substitute for human interactions and working through relational issues

More people are turning to AI-powered chatbots for emotional support, some even claiming the technology has saved their lives. Researchers and clinicians, however, are less convinced of its safety and benefits, raising concerns about the limits, risks and ethical implications of relying on machines.
“The number one use of AI in 2025 is companionship and therapy,” says Rachel Wood, who holds a PhD in cyberpsychology and specializes in how AI shapes mental health, relationships and the future of mental health services.
“While some are optimizing their workflows with AI, others are bonding with it deeply. But one of the more insidious risks is the potential for the atrophy of bi-directional relational skills.”
She explains what that means in layman’s terms.
“When you are relying on an always-on, always-agreeable chatbot, healthy relational skills such as patience, negotiation and conflict resolution can slowly erode over time for lack of practice,” Wood says.
Some dimensions of relationships remain beyond technology’s grasp.
“Think about a time when someone sacrificed something for you or, you for them. That's a powerful experience,” Wood explains, adding, “Chatbots will never be able to give or receive that level of relational depth.”
Overreliance on AI could carry a high cost.
“This could potentially translate to lack of these skills in our human relationships due to over-use or extended use within the asymmetrical dynamics of a chatbot relationship over time,” Wood points out.
Her observations of the landscape raise questions and forecast challenges.
“The artificial companion platform Character AI has 20 million monthly users, over half of whom are under 24 years old,” Wood says. “Think about how future generations will have inherently different relational muscles due to AI interaction being an ingrained facet of their emotional ecosystem.”
It’s important to look forward and determine, as accurately as possible, what mental health care could look like as technology advances, including both its promise and its dangers.
“We are in nascent stages with AI. This is only the beginning of the beginning,” Wood says. “With rapid advancements and adoption of it, I think the future is evolving quicker than even experts can keep up with.”
Positives
“At this point, the positive I see with AI in therapy is the administrative tools that help clinicians do their work with greater efficiency,” Wood says. “This, of course, comes with a caveat because even tools like therapy-note transcription and summarization hold potential risk of hallucinations and algorithmic bias.
“However, when these types of tools are used to aid in the therapist's reflection and insight, it can be a very powerful multiplier.”
She sees other benefits as well.
“AI chatbots are increasing access to mental health support in an unprecedented way, which is very positive,” Wood says. “I regularly hear stories of individuals who have found AI useful as a reflective tool or interactive journaling space.
“AI tools that are developed with safety and ethics in mind are helping those who would otherwise not seek therapeutic support, find the courage to embrace difficult parts of their life and eventually even go to see a human therapist.”
The Blind Spot and Developing Gap
Wood stresses that AI tools being developed and improved require ethical standards and close observation. What she doesn’t see, at least yet, is a significant shift away from licensed therapists toward technology for professional services. A different, larger risk has her attention.
“I think we have an entirely new issue that will continue to present itself in our practices: AI attachment, over-dependence and psychosis,” Wood says. “Though these are edge cases right now, it is something I believe a majority of therapists will encounter in their practice within the coming years.”
Protective guardrails built through proactive measures, rather than reactive ones, can help society navigate where it is and where it is headed.
“We need widespread AI literacy, addressing both the benefits and the limitations” of the technology, Wood advises. “On the AI-development side, we need to be thinking about the following types of components as a minimum baseline for ethical design:
“Frameworks for crisis intervention, escalation, access to human help lines, consent and disclosures reminding users of AI non-personhood and cultural competence for bias mitigation. Of equal importance are issues such as data privacy, independent audits and incident response plans.”
In conclusion, she circles back to her initial point.
“We want these systems to empower users to connect with other humans and practice important skills such as bi-directional boundaries and beneficial friction,” Wood argues.