As the AI age evolves, Colorado must step up to protect our kids | GUEST COLUMN
By Ann O’Leary
Artificial intelligence is creating new opportunities for young people to learn, create, explore and build confidence. It is opening new doors for students in Denver, on the eastern plains and on the western slope. But as our kids enter the AI era, we need to put the right guardrails in place to keep them safe from the risks that come along with those benefits.
That’s why we called on the Colorado General Assembly to make Colorado House Bill 26-1263 stronger, and why we hope future legislation will include more robust protections.
AI is evolving quickly, and the safeguards protecting children need to keep pace. HB 26-1263 is an important step, but parents and child safety advocates made clear it did not go far enough. In fact, more than a third of the legislature voted against the bill because it does not do enough to protect minors. That’s why OpenAI proposed amendments focused on youth well-being, crisis response, serious risk mitigation and stronger accountability.
They included:
- If a user’s age cannot be determined, companies should default to protective safeguards.
- Companies should regularly assess serious risks to children’s safety and address them proactively.
- Companies should make sure families can read and understand their child safety policies, which should explain safeguards in place, parental tools available and how protections evolve.
- Companies should design their AI services to avoid generating harmful or developmentally inappropriate content for children, and establish clear protocols for serious safety situations.
- AI services used by children should protect against manipulative, deceptive or emotionally exploitative interactions.
- Children’s personal information should be protected from invasive, targeted advertising and from sale without parental consent.
- Companies should be held accountable for their approach to child safety through strong enforcement and privacy-preserving, independent audits.
The proposed amendments reflect OpenAI’s focus on keeping young users safe. We’ve introduced parental controls to help families shape how ChatGPT works in their homes. We’ve added Under-18 Principles to OpenAI’s internal rulebook that governs how our models should behave, which we call our “Model Spec.” And we’ve developed an age prediction model to help teens receive the right protections. When the system estimates an account belongs to someone under 18, it is designed to apply additional safeguards that reduce exposure to sensitive material, including violence, unhealthy dieting and other forms of body shaming.
Our work on teen safety is guided by close engagement with experts across many disciplines. Last year, we established an Expert Council on Well-Being and AI to advise us and help define what healthy interactions with AI should look like for users of all ages. Our Global Physician Network provides insights that directly inform our safety research. We also collaborate with both national and state-level organizations on youth safety and well-being, including work focused on responsible AI protections for minors.
The American tech industry must continue investing in the research necessary to better understand how young people interact with AI over time and to close the gap between innovation and safety. That vital work cannot fall solely on parents or teachers, and AI companies have a responsibility to protect younger users from the start.
We’re grateful to the legislators who sponsored HB 26-1263, and we look forward to continuing to work with them. Keeping kids safe in the AI era will require ongoing collaboration among lawmakers, parents, educators and technology companies, and we are committed to being a long-term partner in that work.
Colorado has always led the way in tech and innovation. As AI becomes part of daily life for young people across the state, it’s time for Colorado to lead again by setting a new national standard for how best to keep kids safe in the AI era.
Ann O’Leary is vice president of Global Policy at OpenAI.

