Bipartisan Colorado bill targeting AI chatbot risks advances
A bipartisan Colorado bill targeting risks posed by AI chatbots advanced out of committee last week with a 10-3 vote in its first hearing.
House Bill 26-1263, which would establish new requirements for “conversational AI” services, cleared the House Business Affairs and Labor Committee and will now move to a full House vote.
Supporters say the legislation aims to address growing concerns about how increasingly human-like chatbots interact with users, particularly minors, at a time when legal safeguards remain limited.
“Every day that we wait to act is another day that youth in Colorado are interacting with conversational AI technology that is designed to encourage emotional dependence,” said Rep. Javier Mabrey (D-Denver), one of the bill’s sponsors.
“There is no law on the books that requires AI chatbot operators to essentially protect their users,” said Alexis Alltop, policy manager at Healthier Colorado, which has helped shape the proposal. “This is an emerging technology.”
If enacted, the measure would require AI chatbots to clearly disclose that they are not human at the start of interactions, at regular intervals, and whenever users ask. It would also require companies to implement protocols for responding to self-harm or suicide-related conversations and to report annually to the state attorney general on how those protocols are used.
Additional provisions would prohibit chatbots from presenting themselves as licensed professionals such as therapists or lawyers, and would impose stricter safeguards for users under 18. For minors, the bill would bar chatbots from encouraging emotional dependence and require companies to provide privacy controls, including options for users or parents to limit whether conversations are stored and used to personalize responses.
Alltop said the proposal is intended to avoid repeating what she described as a delayed regulatory response to earlier technologies. “We want to integrate safety into the design of these AI chatbots, so we don’t have to face the same issue that we did with social media, where we waited 10 years to begin implementing any sort of regulation,” she said.
Lawmakers and advocates supporting the bill point to concerns that conversational AI tools, which are designed to simulate human interaction, can blur the line between real and artificial relationships, particularly for younger users and those vulnerable to mental health challenges.
That concern is shared by Elise Khong, a 15-year-old student at the Denver School of the Arts who testified in support of the measure.
“I see a lot of students and peers using AI constantly,” Khong said. “Just like for day-to-day schoolwork, or they’re chatting with it on different social media websites.”
While much of that use is academic, she said reliance on AI tools can deepen over time. “It could start in a place of academic stress and then lead to more and more of an emotional reliance,” she said.
Khong also raised concerns about how chatbot use may affect real-world relationships. “If someone were already using AI and it was starting to become an emotional reliance, it would probably be harder to reach out to friends or family,” she said.
The bill would also require companies to adopt safety measures aimed at preventing harmful interactions, including limiting sexually explicit content and restricting chatbot behaviors that could foster unhealthy attachments.
Supporters say similar legislation is emerging in other states, and Colorado’s proposal draws on measures already enacted elsewhere as policymakers work to keep pace with rapidly evolving technology. At the same time, Alltop acknowledged that the legislation is intended as a starting point rather than a comprehensive solution, noting that lawmakers may need to revisit the issue as technology continues to develop.
If approved by the full legislature, the bill would place Colorado among a growing number of states seeking to establish guardrails around AI tools that are becoming increasingly integrated into daily life.
And, for Khong, the issue ultimately centers on ensuring lawmakers understand how widely these tools are already used by young people. “I definitely hope that lawmakers take away the fact that this issue is very real,” she said, “and that it’s affecting young people today and young people in the future.”