Key Takeaways
- Senate Bill 26‑189 creates Colorado’s first comprehensive regulatory framework for automated decision‑making technology (ADMT) used in consequential decisions.
- “Consequential decisions” cover education, employment, housing, finance, insurance, healthcare, and essential government services.
- Deployers must give clear notice when ADMT is involved and, after an adverse outcome, provide a plain‑language explanation and a 30‑day window for consumers to request additional information.
- Consumers receive rights to correct factual errors in their personal data and to obtain a meaningful human review of the decision.
- Starting January 1, 2027, ADMT developers must supply deployers with detailed disclosures about intended uses, training data, limitations, risks, and any updates.
- The Colorado Attorney General will adopt implementing rules by December 31, 2026, and enforce the statute through the Colorado Consumer Protection Act, treating violations as deceptive trade practices.
- Liability for anti‑discrimination violations is allocated between developers and deployers based on relative fault, and contracts cannot indemnify a party for liability arising solely from its own actions.
- The bill repeals and replaces Colorado’s 2024 high‑risk AI legislation, incorporating recommendations from a governor‑appointed task force.
- After Senate passage, SB 26‑189 proceeds to the House for further consideration; its implementation could set a national precedent for AI accountability and consumer protection.
Overview of Senate Bill 26‑189
Senate Majority Leader Robert Rodriguez and Senate President James Coleman jointly sponsored SB 26‑189, which establishes a statewide regulatory regime for automated decision‑making technology when it informs consequential decisions about individuals. The legislation reflects Colorado’s ambition to stay ahead of rapidly evolving AI applications while safeguarding residents from opaque or discriminatory outcomes. By codifying notice, explanation, and remediation requirements, the bill seeks to balance consumer protection with reasonable compliance burdens for developers and businesses that deploy AI tools. Its passage in the Senate signals strong bipartisan recognition of the need for clear rules governing AI’s impact on fundamental life opportunities.
Defining Automated Decision‑Making Technology and Consequential Decisions
The bill defines ADMT as any technology that automatically processes personal data and produces an output used to make, guide, or assist a decision concerning an individual. It further clarifies that a “consequential decision” is one affecting an individual’s access to, eligibility for, or compensation related to education, employment, housing, financial or lending services, insurance, healthcare services, or essential government services. This scope ensures that the law captures high‑stakes areas where algorithmic bias could cause significant harm, while excluding low‑risk applications such as recommendation engines for entertainment or casual social media feeds.
Transparency Obligations for Deployers
Deployers—entities that actually use ADMT—must provide a clear, conspicuous notice whenever a consumer interacts with ADMT covered by the statute. If the technology yields an adverse outcome (e.g., denial of a loan, job, or housing application), the deployer must deliver a plain‑language description of the ADMT’s role in the decision and inform the consumer of a process to request additional information about the decision within 30 days. This notice requirement aims to demystify algorithmic influences and empower individuals to understand why a particular decision was made, fostering trust and accountability in AI‑driven services.
Remedies for Affected Consumers
Beyond notice, SB 26‑189 grants consumers specific remedial rights when an ADMT‑driven decision results in an adverse outcome. Consumers may request correction of any factually inaccurate personal data that contributed to the decision, and they are entitled to a meaningful human review of the outcome. These provisions ensure that individuals are not left at the mercy of opaque algorithms; instead, they can challenge errors, supplement missing context, and obtain a decision that incorporates human judgment where appropriate. The 30‑day window for requesting information aligns with typical consumer‑protection timelines, allowing sufficient opportunity for redress without imposing undue delay on businesses.
Developer Disclosure Requirements and Implementation Timeline
Effective January 1, 2027, ADMT developers must furnish deployers with a comprehensive description of the technology’s intended uses, the categories of data used to train the model, known limitations and risks, and instructions for appropriate use and human review. Developers must also provide updates or modifications to the ADMT as they occur. This forward‑looking disclosure regime ensures that downstream users receive the information necessary to comply with the deployer‑side notice and remediation obligations. The delayed effective date gives developers ample time to adjust their documentation practices and integrate compliance workflows into product lifecycles.
Attorney General’s Role in Rulemaking and Enforcement
The Colorado Attorney General (AG) is tasked with adopting implementing rules that clarify disclosure requirements after an adverse outcome; these rules must be finalized by December 31, 2026. Once in place, the AG will have exclusive authority to enforce SB 26‑189 through the Colorado Consumer Protection Act, treating any violation as a deceptive trade practice. In enforcement actions, the AG must first provide the alleged violator—developer or deployer—with a 60‑day notice and an opportunity to cure the violation, assuming a cure is feasible. Notably, the bill does not create a private right of action, concentrating enforcement power in the state’s chief legal officer to promote consistent application and reduce fragmented litigation.
Liability Allocation Between Developers and Deployers
SB 26‑189 ties liability for anti‑discrimination violations to existing statutes such as the Colorado Anti‑Discrimination Act (CADA). It stipulates that fault in a CADA violation should be apportioned according to the relative responsibility of the developer and the deployer. Moreover, any contractual indemnity clause cannot shield a party from liability arising solely from its own actions. This approach prevents developers from offloading all risk onto deployers (or vice versa) and encourages both sides to diligently assess bias, data quality, and model performance throughout the AI lifecycle. By anchoring liability to established civil‑rights law, the bill leverages familiar legal doctrines while addressing novel algorithmic harms.
Connection to Earlier AI Legislation and the Governor’s Task Force
In 2024, Senator Rodriguez pioneered Colorado’s first consumer‑protection law for high‑risk AI systems. Over the ensuing six months, a governor‑convened task force examined best practices and produced recommendations for a more robust framework. SB 26‑189 repeals the 2024 statute and incorporates many of the task force’s suggestions, reflecting an iterative policy process that builds on prior experience. This legislative evolution demonstrates Colorado’s commitment to refining AI governance as technology matures, ensuring that regulations remain relevant and effective without stifling innovation.
Next Steps: House Consideration and Potential Impact
Having cleared the Senate, SB 26‑189 now advances to the House of Representatives for further debate, possible amendment, and a final vote. If enacted, Colorado will become one of the first states to impose comprehensive transparency and accountability measures on AI used in consequential decisions, potentially influencing federal discourse and inspiring similar legislation elsewhere. Businesses operating in Colorado will need to adapt their AI governance practices, while consumers stand to gain clearer insights and stronger redress mechanisms when algorithms affect their access to essential services and opportunities. The bill’s implementation could thus serve as a model for balancing innovation with protection in the era of pervasive artificial intelligence.