A special session to settle on a ‘secret sauce’ for statewide AI regulation | NOONAN

The biggest muddle at the Capitol during the “extraordinary session” of the General Assembly was not money, that is, the $750 million hole blown in the state’s budget due, more or less, to federal tax cuts. It was what to do with artificial intelligence, or AI.
The muddle doesn’t affect AI’s mundane uses, such as crafting the most brilliant ninth-grade essay ever on “William Wordsworth and the Natural World.” It has to do with AI’s interface with human decision-making: its predictions related to employment, access to credit, education admissions or opportunity, rental housing, insurance decisions of many kinds, and legal services. Those items cover a lot of territory in the life of an average American.
This predictive AI environment generally breaks down into three players: AI “developers,” AI “deployers,” and “us.” The question inherent to AI is to what degree “we” will come out as winners or losers in this triangle of unequal sides.
Developers create the AI mechanisms that “learn” by grabbing as much information on a subject as they can, combining and processing it, and spitting out an array of content related to questions about the subject. Their predictive models depend on analyzing large amounts of data against specified criteria to suss out risk or consequences. So, for example, mortgage lenders want to understand the risks of giving money to prospective clients, and home insurers want to limit the risks of providing coverage to their prospective customers. AI can be trained to rapidly review applications based on lenders’ or insurers’ criteria and predict whether a particular decision, keyed to an individual’s credit or zip code, will produce or lose revenue.
The question before the legislature is to what degree these AI predictions, when deployed by lenders and insurers, comply with the Colorado Consumer Protection Act and what remedies should follow from non-compliance. This issue is important because the United States and the state have spent decades trying to reduce discrimination based on criteria such as race and gender. Women can have credit cards in their own name now due to anti-discrimination laws. Real estate red-lining is unacceptable and unlawful. Going backward on anti-discrimination policies should not be a result of AI’s intersection with human decision-making.
With consumer protection in mind, the legislature passed SB24-205, an AI “sunshine” act. Developers and deployers of AI freaked out. The law would make both parties, jointly and severally, liable for non-compliance with Colorado’s consumer protections, administered by the state Attorney General’s office. Gov. Jared Polis came to regret signing that legislation.
AI developers and their supporters, including the governor, assert AI technology investment will go to other states if Colorado becomes the first state government to regulate AI’s uses. The bill required AI developers to inform deployers of the criteria of the algorithms used to produce predictions. Other disclosures regarding AI versions, etc., were also part of the transparency requirements. Developers claim that revealing the basis of their algorithms will damage their capacity to keep their “secret sauces” secret.
To complete the disclosures, deployers of AI, such as human resources departments, credit lenders, insurers and the legal community, had to reveal the criteria they used to evaluate the people they screen. If an employer chose not to hire an individual, that individual could request an explanation of the reasoning behind the decision. If job prospects or credit seekers felt they were discriminated against in violation of consumer protections, they could sue both the deployer and the developer of the AI tools used.
Organizations in favor of consumer protections held the line during the special session against reducing transparency rules, on the assumption that what individuals know will lessen discrimination and support unbiased screening. They believe the transparency requirements protect the public while minimally affecting AI innovation. After all, the oil-and-gas industry made a similar “secret sauce” protest about its fracking chemicals. After much complaint, the industry now reports those chemicals with some fidelity, and it is still doing business in Colorado.
Of course, any tool made by humans to offer predictions or make decisions will contain some bias because that’s just the way it is. Ancient Greeks experienced bias whenever they approached the priests and priestesses who prophesied their fates at the Oracle of Delphi. AI is better than that, but, according to experts, a lasting residue of bias cannot be wrung out of the algorithms no matter how large the data collection.
This battle will go on into the 2026 General Assembly. Consumer protection advocates may not be satisfied with disclosures without audits, independent testing, and AG enforcement. It will be difficult to bring litigation without clear evidence supporting bias claims related to discrimination. AI biases are likely to be subtle and difficult to discern.
Developers and deployers will prefer a very light touch. AI is one place where large efficiencies can accrue to companies in their job selection processes, credit reviews, insurance claims and legal services. Lots of money is in prospect at every level.
As ever, it’s a juggling act to find the secret sauce that encourages innovation while securing the important values that protect the public.
Paula Noonan owns Colorado Capitol Watch, the state’s premier legislature tracking platform.