
Colorado AI legislation further complicates compliance equation

News
May 10, 2024
Artificial Intelligence, Compliance, Regulation

Senate Bill 24-205, which takes aim at companies’ use of AI in making ‘consequential decisions,’ passed the state’s General Assembly, leaving gray areas around how to comply.

Colorado State Capitol Building in Denver
Credit: f11photo / Shutterstock

The Colorado legislature this week passed AI regulations aimed at private companies, adding to the increasingly complex patchwork of AI statutes emerging across the US and potentially giving the state’s Attorney General the authority to prosecute companies that use AI to discriminate against consumers. Colorado Governor Jared Polis has until June 7 to decide whether to sign Senate Bill 24-205 into law. 

The legislation requires companies that conduct business in Colorado to disclose to the state’s attorney general “any known or reasonably foreseeable risk of algorithmic discrimination, within 90 days after the discovery or receipt of a credible report.” 

CIOs might struggle with the bill’s language because the focus is on whether AI, in any form, helps make “consequential decisions” that could impact Colorado residents. The bill defines a consequential decision as any decision “that has a material legal or similarly significant effect on the provision or denial to any consumer” of educational enrollment, employment or an employment opportunity, a financial or lending service, healthcare services, housing, insurance, or a legal service. 

The bill does not limit AI’s definition to any specific area, such as generative AI, large language models (LLMs), or machine learning. Instead, any means of artificial intelligence, including using optical character recognition (OCR) to scan resumes, is covered. 

Polis’s office issued a statement that didn’t address whether the governor plans to sign the legislation into law. 

“This is a complex and emerging technology and we need to be thoughtful in how we pursue any regulations at the state level. Governor Polis appreciates the leadership of Sen. [Robert] Rodriguez on this important issue and will review the final language of the bill when it reaches his desk,” said Eric Maruyama, the governor’s deputy press secretary. “The Governor appreciates that the bill creates a task force made up of experts that will be meeting to discuss the specifics of any changes that should be made before the bill takes effect in February of 2026.”

Devil in the details

Consequential decisions aren’t the only areas coming under scrutiny. The bill also takes aim at AI-generated content, stating: “If an artificial intelligence system, including a general purpose model, generates or manipulates synthetic digital content, the bill requires the deployer of the artificial intelligence system to disclose to a consumer that the synthetic digital content has been artificially generated or manipulated.”

Other parts of Senate Bill 24-205 might make it less efficient for companies to use AI. One provision, for example, provides “a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system.”

Another provision could prove onerous for CIOs who do not have full knowledge of every AI implementation in use in their environment, as it requires companies to make “a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems and the nature, source, and extent of the information collected and used.”

Given the broad reliance on vendors and third parties in IT today, many company executives, even CIOs, may not be aware of all the modes of AI assistance impacting their customers, which often arrive via clouds, SaaS apps, third parties, remote sites, mobile devices, and home offices. These hidden AI activities, which Computerworld has dubbed sneaky AI, could complicate compliance with legislation such as this. 

Brian Levine, a managing director at Ernst & Young who is also an attorney, reviewed the bill and doesn’t expect ignorance of a third party’s use of AI to be a major problem.

“If you know that the product you are using contains AI,” then that requires action, he said. “But if you don’t know and are not purposely sticking your head in the sand, I don’t think there is any obligation under this bill. Knowledge of what a third party is doing isn’t necessarily imputed to you,” he said, adding that the bill has no reference to strict liability. 

More to come

One especially dicey area in the legislation that should concern CIOs is when AI, especially generative AI, acts on its own. Levine argued that the legislation clearly forbids overtly discriminatory actions, such as programming a system to prevent various protected classes (age, race, gender, income level, etc.) from getting services. 

But what if the instruction is simply to maximize profits or boost sales? That’s legal. Yet a generative AI service could still extrapolate from data to block applications from specific ZIP codes due to a high rate of returns, for example. If those ZIP codes house a high percentage of people of a particular protected class, the company certainly looks like it is discriminating. That’s where things may get tricky for CIOs under AI legislation such as this.

AI bias management may help alleviate some of this pressure. But with transparency still an unresolved issue for AI, the potential for liability will always exist. The classic example here is an AI system that analyzes resumes and excludes people of a protected class because it was trained on data devoid of such candidates, leading it to conclude those candidates were not desirable.

“It’s unclear to me whether, if there is no intent based on an improper category, whether that is going to be problematic in this bill,” Levine said. 

Levine also predicted that if this is signed into law, many other jurisdictions are likely to follow. “Various state governments, federal governments, and foreign governments are tripping over themselves to regulate AI,” he said.

Contributing Columnist

Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he's been a columnist for CBSNews.com, RetailWeek, Computerworld and eWeek and his byline has appeared in titles ranging from BusinessWeek, VentureBeat and Fortune to The New York Times, USA Today, Reuters, The Philadelphia Inquirer, The Baltimore Sun, The Detroit News and The Atlanta Journal-Constitution. Evan can be reached at eschuman@thecontentfirm.com and he can be followed at twitter.com/eschuman. Look for his blog twice a week.

The opinions expressed in this blog are those of Evan Schuman and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
