Colorado Senate introduces bill aimed at fixing 2024 artificial intelligence law meant to prevent discrimination
A measure introduced this week that aims to fix a 2024 law regulating artificial intelligence use to prevent discrimination faces a tight deadline before Colorado lawmakers wrap up their work on May 7.
Senate Majority Leader Robert Rodriguez, D-Denver, said Senate Bill 318 is far from the finished product, as the tech industry and K-12 and higher education institutions have concerns about being able to implement policies by the September 2026 deadline.
The bill follows a law adopted in the 2024 session that, for the first time in the nation, sought to establish guardrails around the use of artificial intelligence, largely in employment, health care, education, and government practices, where, backers said, the risk of bias or discrimination exists.
While the governor signed Senate Bill 24-205 last year, he pushed for adjustments to address lingering worries this year. In his May 17 signing statement, Gov. Jared Polis asked lawmakers to keep working on it before its 2026 implementation date.
“I am concerned about the impact this law may have on an industry fueling critical technological advancements,” Polis said. State-level government regulation, he added, can “tamper with innovation and deter competition.”
A month later, the governor, Attorney General Phil Weiser and Rodriguez penned a joint letter to “innovators, consumers and all those interested in the AI space,” in which they pledged to provide clarity around the law and minimize the “unintended consequences” associated with the bill’s implementation. That includes convening a legislatively created task force to propose recommendations on changes requested after the bill became law.
The letter said “home-grown businesses” had highlighted a risk tied to an “overly broad definition of AI,” along with disclosure requirements that could impose high costs and result in barriers to growth and product development, job losses, and limitations on raising capital.
To address those concerns, the law would need improvements in at least five areas:
• Refining the definition of AI to align with a federal definition, as well as with definitions from other states with substantial technology sectors
• Focusing regulation on the developers of “high-risk” AI, rather than on “deployers” — meaning smaller businesses that may use AI through third-party software
• Shifting to a more traditional enforcement model under the Attorney General rather than the law’s proactive disclosure requirement
• Making clear that the consumer’s right of appeal is tied to the Attorney General’s ability to investigate discrimination, or runs through the Consumer Rights Commission
• Addressing other measures the state could take to be more welcoming to technological innovation, while preventing discrimination
No agreement between tech developers, civil rights groups
In the year that has elapsed since the signing of SB 24-205, the task force has met and made recommendations. Rodriguez told Colorado Politics he believes a draft of the bill covered 90% of what the governor sought. But the warring sides — venture capitalists and tech developers on one side, and civil rights groups on the other — could not come to an agreement on the last pieces, forcing Rodriguez to make “an executive decision” on the first released version of the bill.
“The process has been the task force,” along with a subset that included tech companies, civil rights groups, and chambers, he said. “The biggest thing we heard from the task force” is the perception that everyone is covered under the bill, which isn’t true, he said. The bill has been clarified to exclude small businesses with fewer than 50 employees. Deployers can use AI for hiring purposes, Rodriguez explained, such as an online system that reviews resumes and makes recommendations on candidates.
The various groups, including Rodriguez, met twice weekly in the early part of the session and three times a week as he drew close to releasing a draft.
The tech groups reacted negatively to the first version, Rodriguez said, attributing part of their angst to their lack of understanding of how the legislative process works.
He said the tech side wants deployers — the businesses using high-risk AI systems — to be excluded and the employment side eliminated entirely. Rodriguez said that’s not happening.
Neither is the desire by “agents of chaos,” as Rodriguez called some of the bill’s opponents, for an out-and-out repeal of the 2024 law.
The law’s assessment piece has also been tweaked to align it with the state privacy act and address worries that assessments are too burdensome.
That’s a requirement from the 2024 law, which says deployers must conduct assessments around their use of AI systems covering purpose and intended use, foreseeable risks, performance metrics, transparency measures and how they will monitor those systems once deployed. Under SB 318, deployers can conduct their own risk assessments, hire a third party, or rely on the developer, which can offer the assessment as a sales tool, he said.
SB 318 also streamlines the right to appeal when a complaint is lodged. That was narrowed to focus on existing employment law.
“If you could be sued for discrimination now, you should be sued under this law anyway,” he said.
That process will run through the Attorney General’s office. It will include an affirmative defense to allow for remedies without punishment and remove language around a rebuttable presumption — a legal assumption that something is treated as true unless it is disproven.
‘There is no “come after you”’
Rodriguez explained the attorney general cannot pursue action against a company that causes harm and then discovers it — as long as the company fixes the problem and didn’t intentionally do anything wrong.
“As you’re working in good faith and trying not to make problems, there is no ‘come after you,’” he said.
He said the bill is not focused on everything about AI; it’s about decision-making in employment, health care and education, and the biases those decisions can carry.
“The bill currently does what was asked for,” he added.
“We’re just asking for transparency and accountability on systems that make a consequential decision on your life,” he said.
Rodriguez sees AI regulation in the 2024 law and SB 318 as addressing what should have been put in place when social media first surfaced. He is part of a national task force whose work began with data privacy, including Colorado’s 2021 law. Now, 27 states have laws addressing data privacy, he said.
In the last several years, with the arrival of ChatGPT, that national task force began looking at AI and how to get ahead of the game, in contrast to social media, where controls never developed as the platforms took off, he said.
“It seemed right to go after the consequential decisions and just do some transparency to get ahead of the game,” he said.
‘A bad spot’
Both K-12 schools and higher education institutions have raised concerns, but Rodriguez said they never brought those issues to his attention.
A letter recently sent to the General Assembly from the state’s public colleges and universities warned that the law could limit students’ ability to embrace new technology in the classroom and then launch their careers in Colorado.
“It also could stifle research and innovation, putting our faculty, students, and graduates at a disadvantage compared to their peers in other states. We don’t want Colorado’s best and brightest students to be compelled to leave the state to pursue jobs in the tech sector.”
The letter did not spell out what the institutions wanted to see, other than a request to mitigate “unnecessary and onerous requirements” imposed on higher ed.
At the K-12 level, the concerns center on unexpected problems for schools “simply for using everyday software,” as well as significant direct and indirect costs, a burden at a time when K-12 funding is lacking. The groups said those costs will limit schools’ ability to direct funds to hiring and retaining teachers.
Both the K-12 and higher education sectors said they had been excluded from the stakeholder process.
Chris Erickson, co-founder of Range Ventures, which works exclusively in the small AI venture capital space in Colorado, said every other state has retreated from this type of legislation.
“We’re really out on a limb here,” Erickson told Colorado Politics.
Had the list from the June 2024 letter been achieved, he said, Colorado could have created a framework that other states could use — one that didn’t harm local jobs, companies, or investment dollars coming into the state.
The burden of the AI law will fall on the deployers, Erickson said.
“It’s not an investment or tech issue,” he added.
He pointed to the example of a company using an AI service, such as for resume screening. That, he said, would require the company to have a risk management system in place for how it uses AI in consequential decisions. And some companies won’t have the ability to do that on their own, he said.
It will be very expensive, he said, adding the alternative is they will have to stop using those AI systems.
Erickson raised another concern: Would national employers want to hire remote personnel in Colorado and have to comply with a law that doesn’t exist anywhere else in the country?
In the last five years, $25 billion has been invested in venture capital in Colorado, and last year, 43% of that went to AI companies, he said, adding that if the state is perceived as “tech-unfriendly,” that’s another risk.
All the deployers that aren’t in the tech sector and didn’t build these systems are trying to run their businesses as economically as possible, he said.
“We’re in a bad spot to make them disadvantaged,” he added.

