As the use of AI grows in sectors such as finance, healthcare, and education, so does the potential for algorithmic discrimination. With this growth comes the responsibility to ensure that these technologies operate fairly and equitably.
One of the laws designed to accomplish this is the Colorado AI Act, which aims to protect consumers from algorithmic discrimination, particularly where AI is used in consequential decision-making, and outlines the obligations of developers and deployers of high-risk AI systems. The Act emphasizes the importance of mitigating these risks, especially when decisions made by AI can significantly impact someone’s life, such as in hiring processes, loan approvals, or access to essential services.
Central to the Colorado AI Act are the concepts of algorithmic discrimination and consequential decisions.
Algorithmic discrimination occurs when an AI system leads to unjust or illegal treatment of individuals or groups based on various characteristics such as age, race, gender, or disability.
Consequential decisions, on the other hand, are decisions that have a material legal or similarly significant effect on the provision or denial of services and opportunities to consumers, such as access to education, employment, housing, healthcare, financial or lending services, insurance, essential government services, or legal services.
How can organizations ensure compliance with the Colorado AI Act?
The Colorado AI Act will become effective on February 1, 2026, and organizations must align their practices with the principles contained in the Act by this date to avoid engaging in unfair or deceptive trade practices.
Both developers and deployers have a duty to take reasonable care to protect consumers from algorithmic discrimination arising from the intended or contracted use of a high-risk AI system. To this end, they must comply with disclosure and notification requirements and conduct impact assessments where required.
What are the transparency and notification requirements?
Documentation to be provided to deployers
Developers of a high-risk AI system are required to provide deployers of the system with the following:
- a general statement describing the expected uses and potential harmful or inappropriate uses of the high-risk artificial intelligence system;
- documentation disclosing high-level summaries of the training data and the known or reasonably foreseeable risks and benefits of the system;
- documentation describing how the AI system was evaluated and the data governance measures implemented;
- documentation describing the intended use cases of the system, its foreseeable limitations, and the technical implications of the system;
- other relevant documents and information reasonably necessary for deployers, or third parties contracted by deployers, to conduct an impact assessment of the high-risk AI system as required.
Website/public statements
Developers must publish on their website or in a public database, by February 1, 2026, a statement covering the types of high-risk artificial intelligence systems made available to deployers or other developers, and how known or reasonably foreseeable risks of algorithmic discrimination are managed. Developers must also update the statement no later than 90 days after modifying the AI system.
Deployers must publish on their website, by February 1, 2026, a statement covering:
- the types of high-risk artificial intelligence systems they currently deploy;
- how known or reasonably foreseeable risks of algorithmic discrimination are managed; and
- the nature, source, and extent of the information collected or used by the deployer.
Disclosure of foreseeable risks
Developers must disclose foreseeable risks to the Attorney General, deployers, and other developers within 90 days of:
- becoming aware, through ongoing testing and analysis, that the AI system has caused or is reasonably likely to cause algorithmic discrimination; or
- receiving a credible report from a deployer that the AI system was deployed and caused algorithmic discrimination.
Risk management policy
Deployers of high-risk AI systems must implement a risk management policy that incorporates recognized principles for managing the risk of algorithmic discrimination, and must keep the policy regularly reviewed and updated.
Notification of deployment
Deployers must notify consumers that they have deployed a high-risk artificial intelligence system that makes, or is a substantial factor in making, a consequential decision, before the decision is made. They must also provide a statement covering:
- the nature and purpose of the consequential decision;
- the contact details of the deployer;
- how to access the disclosure statement; and
- the consumer’s right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects concerning them.
Where a consequential decision adverse to the consumer is made using the AI system, deployers must provide a statement, in plain and accessible language and format, disclosing the principal reason or reasons for the decision and how the AI system contributed to it, the type and source of data used by the AI system, the opportunity to correct the data if it is inaccurate, and the ability to appeal the decision, including by requesting a human review.
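For teams implementing this notice, the required elements can be read as a checklist. A minimal Python sketch follows; the class and field names are our own illustrative shorthand, not statutory terms:

```python
from dataclasses import dataclass

# Illustrative checklist only -- field names are ours, not the statute's,
# and this is not legal advice.

@dataclass
class AdverseDecisionNotice:
    principal_reasons: list[str]       # why the consequential decision was made
    ai_contribution: str               # how the AI system contributed to it
    data_types_and_sources: list[str]  # type and source of data used by the system
    correction_instructions: str       # how to correct inaccurate personal data
    appeal_instructions: str           # how to appeal, including human review
```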
Cooperating with the Attorney General
Upon request from the Attorney General, developers must disclose the following documentation within 90 days:
- high-level summaries of the type of data used to train the high-risk AI system;
- foreseeable limitations of the system (e.g., risk of algorithmic discrimination); and
- the purpose of the system, its intended benefits, and its use cases.
Deployers, or third parties contracted by deployers, must submit completed impact assessments to the Attorney General upon request.
Who must conduct an impact assessment?
Deployers must conduct an impact assessment by February 1, 2026, and thereafter at least annually and within 90 days after any intentional and substantial modification to the AI system is made available. Organizations may use a single impact assessment to address comparable high-risk systems or leverage impact assessments conducted under other laws. The impact assessment must be retained for three years and reviewed annually to mitigate the risk of algorithmic discrimination.
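As a rough illustration of the timing rules above, the Python sketch below computes the next assessment due date and the retention cutoff. The function names and sample dates are assumptions for the example, and the sketch is not legal advice:

```python
from datetime import date, timedelta

# Cadence summarized above: first assessment by February 1, 2026, then at
# least annually, plus within 90 days after a substantial modification is
# made available; assessments are retained for three years.

MODIFICATION_WINDOW = timedelta(days=90)
RETENTION_YEARS = 3

def next_assessment_due(last_assessment: date, modified_on: date | None = None) -> date:
    """Earlier of the annual review date and the 90-day post-modification deadline."""
    annual_due = last_assessment.replace(year=last_assessment.year + 1)
    if modified_on is not None:
        return min(annual_due, modified_on + MODIFICATION_WINDOW)
    return annual_due

def retain_until(assessment_date: date) -> date:
    """Impact assessments must be kept for at least three years."""
    return assessment_date.replace(year=assessment_date.year + RETENTION_YEARS)

first = date(2026, 2, 1)
print(next_assessment_due(first))                     # 2027-02-01 (annual cadence)
print(next_assessment_due(first, date(2026, 6, 15)))  # 2026-09-13 (post-modification)
print(retain_until(first))                            # 2029-02-01
```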
Ongoing monitoring and audits
Developers must implement an ongoing monitoring and auditing process and conduct testing and analysis to determine whether the AI system has resulted in, or is likely to result in, algorithmic discrimination.
Are certain organizations exempt from these requirements?
Deployers with fewer than 50 full-time equivalent employees throughout the period of deployment are exempt from the requirements to publish a website statement, conduct an impact assessment, and implement a risk management policy if all of the following conditions are met (see the sketch after this list):
- continuous learning is not based on the deployer’s data; and
- the deployer has provided the consumer with the developer’s impact statement, and that statement includes the information the deployer would otherwise have included in its own impact assessment.
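The exemption is conjunctive: every condition must hold. A minimal Python predicate makes this explicit (the type and field names are hypothetical, not statutory terms):

```python
from dataclasses import dataclass

# Hypothetical field names; illustrative only, not legal advice.

@dataclass
class DeployerProfile:
    fte_count: int                     # full-time equivalent employees during deployment
    trains_on_own_data: bool           # whether continuous learning uses the deployer's data
    gave_developer_statement: bool     # developer's impact statement provided to consumers
    statement_covers_assessment: bool  # statement includes the impact-assessment content

def small_deployer_exempt(d: DeployerProfile) -> bool:
    """All conditions must hold for the exemption to apply."""
    return (
        d.fte_count < 50
        and not d.trains_on_own_data
        and d.gave_developer_statement
        and d.statement_covers_assessment
    )
```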
Where a developer is also the deployer of a high-risk AI system, it is not required to generate the documentation required for deployers unless the high-risk AI system is provided to an unaffiliated entity acting as a deployer.
How will the Colorado AI Act be enforced?
The Attorney General of Colorado has exclusive authority to enforce the Colorado AI Act. Violations of the Act constitute an unfair trade practice pursuant to the Colorado Consumer Protection Act (§§ 6-1-101 to 6-1-1707), and there is no private right of action.
A finding of unfair trade practices exposes organizations to punitive measures, including civil penalties of up to $20,000 per violation and injunctive relief against the offending practices.
What defences are open to organizations accused of violating the Act?
Self-directed curing measures
Discovering a violation through monitoring, testing, or an internal review and curing it is an affirmative defense, provided the deployer or developer was otherwise in compliance with the latest version of the NIST AI Risk Management Framework, ISO/IEC 42001, another national or international framework substantially equivalent to the Colorado AI Act’s requirements, or any framework designated and disseminated by the Attorney General.
There is also a rebuttable presumption that a developer used reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination if they complied with all the requirements under the Colorado AI Act.
What’s next for the Colorado AI Act?
Colorado created the Artificial Intelligence Impact Task Force and tasked it with considering issues and proposing recommendations regarding protections for consumers and workers from artificial intelligence (AI) systems and automated decision systems (ADS). In its report, the task force identified a number of areas where the Colorado AI Act could be clarified, refined, or otherwise improved, including:
- defining the types of decisions that qualify as consequential decisions;
- reviewing the definitions of key terms such as algorithmic discrimination, substantial factor, and intentional and substantial modification; and
- assessing whether a more stringent standard is necessary beyond a basic duty of care.
The task force has also recommended further discussions on potential changes to the law.