In the Age of AI, much-needed protections in mental health care | OPINION
By Vincent Atchity
Artificial intelligence is already changing so much about the way we live, and important conversations are underway about what guardrails are needed to ensure the safe use of this technology. One of the most urgent of those conversations is happening right now in Colorado, as lawmakers consider House Bill 1195, a common-sense bill to protect people seeking mental health care from unknowingly receiving it from a corporately owned machine instead of a trained human professional.
“Artificial intelligence” can refer to various technologies, but this bill specifically addresses the use of popular AI apps like OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and so on, for psychotherapy. These tools are advancing rapidly and are increasingly being used in deeply personal ways, including for emotional support and mental health guidance. They have the potential to transform health care, but also to do harm, which is why it is so crucial for lawmakers to pass HB 1195 this session.
One thing that is important to remember about these apps is that they are highly profitable for their corporate owners, and there are no guidelines or guarantees of privacy or safety for any personal health information a user may share with them. In health care settings, your personal health information is protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA). In contrast, any data you share with an app may be more out of your control than you realize. It may be retained indefinitely in databases owned by entities that do not have your best interests at heart, used against you in a court of law, or, depending on regulatory actions that may or may not be taken in coming years, end up belonging to anyone and being used for any purpose.
Another important thing to remember about these apps is exactly what kind of AI they use. These are what are known as large language models, or LLMs; their outputs are generated by predicting likely responses based on patterns in massive amounts of text, not by understanding, clinical judgment, or lived human experience. Some may argue that makes them a perfect substitute for a human therapist: how can one person compare to a computer that theoretically possesses all the verbal therapeutic knowledge ever published?
But that framing misses something essential. Therapy is not just about information and an exchange of words — it’s about judgment, boundaries, and the ability to challenge harmful thinking and to discern when words are not telling the whole story. While LLMs have access to a great deal of written language, there is a limit to what they can do. They can take notes from a conversation, summarize documents, answer basic questions, or perform other simple language-based tasks. In those contexts, AI can be incredibly useful, including for therapists who can use it to reduce administrative burdens like notetaking and spend more time face-to-face with patients.
But crucially, LLMs operate predictively; they are not really thinking the way people do. Rather, they are designed to continue an exchange of words in a way that feels helpful or agreeable to the user. In low-stakes situations, that can be perfectly fine. For mental health care, it can be dangerous.
Mental health care is not low stakes. Untreated mental health needs can spiral and leave incredible destruction in their wake, and at worst, turn deadly. LLMs, when asked open-ended or emotional questions, have a demonstrated tendency to reinforce a user’s framing rather than challenge it. That is the opposite of what effective therapy requires.
Take someone experiencing undiagnosed depression: they may feel sad and not really know why. If they were to ask a chatbot, “Am I right to be sad?”, a chatbot predisposed to entice the user’s continued engagement might say, “Yes, your life is hard, and you should feel that way.” What may feel validating in the moment can actually deepen harmful patterns of thinking. The last thing we want for someone experiencing a delusion is for that delusion to be affirmed rather than gently questioned.
In contrast, trained human therapists are ethically bound and clinically trained to reframe problems, challenge distorted thinking, recognize risk, and intervene appropriately. They can identify when someone needs a higher level of care, including medical treatment or crisis support. They are not just responding to disembodied words for the purpose of keeping someone’s attention — they are actively guiding a whole person toward better health.
What’s more, we must ensure that people know what kind of care they are receiving. No one seeking therapy should have to wonder whether they are talking to a licensed professional or a profit-driven algorithm. That’s why Mental Health Colorado is supporting HB 1195. It would establish clear consumer protections by prohibiting the use or marketing of AI as a substitute for psychotherapy, diagnosis, or treatment planning, and by requiring informed consent before AI is used in supportive roles such as transcription or note-taking.
HB 1195 strikes the right balance. It does not reject innovation — it channels it responsibly. Therapists will still be able to use AI for appropriate administrative tasks, cutting down on paperwork and allowing more time for meaningful human connection with clients. At the same time, it takes an important step toward protecting Coloradans from the very real risks of relying on AI for care it is not equipped to provide.
Mental health problems are human problems, and they require human care and privacy in a safe, protected space. AI can be a powerful tool, but it is not a therapist and offers no reliable guarantees or protections of privacy or safety.
Vincent Atchity is president and chief executive of Mental Health Colorado.