Colorado Senate Democrats Introduce Bill to Regulate Automated Decision‑Making Technology in Colorado

Key Takeaways

  • SB26-189 establishes a statewide regulatory framework for automated decision‑making technology (ADMT) used in consequential decisions.
  • ADMT is defined as technology that automatically processes personal data to generate outputs that guide or assist decisions about individuals.
  • Consequential decisions include access to or eligibility for education, employment, housing, finance, insurance, healthcare, and essential government services.
  • Deployers must give clear notice when an ADMT is used and, after an adverse outcome, provide a plain‑language explanation and a 30‑day window for consumers to request additional information.
  • Consumers receive the right to correct factually inaccurate personal data and to obtain a meaningful human review of adverse decisions.
  • Beginning January 1, 2027, ADMT developers must supply deployers with detailed descriptions of intended use, training data categories, known limitations and risks, and update procedures.
  • The Colorado Attorney General will adopt disclosure rules by December 31, 2026 and enforce the law through the Colorado Consumer Protection Act, treating violations as deceptive trade practices.
  • The AG must issue a 60‑day notice and opportunity to cure before pursuing enforcement; the bill does not create a private right of action.
  • Liability for anti‑discrimination violations is shared between developers and deployers based on relative fault, and contracts cannot indemnify a party for liability arising solely from its own actions under the Colorado Anti‑Discrimination Act (CADA).
  • SB26-189 repeals the 2024 high‑risk AI legislation and incorporates recommendations from the governor’s AI task force, moving next to the Senate Business, Labor and Technology Committee for a hearing.

Legislative Introduction and Sponsors’ Remarks
Senate Majority Leader Robert Rodriguez (D‑Denver) and Senate President James Coleman (D‑Denver) introduced SB26-189 to create Colorado’s first comprehensive regulatory regime for automated decision‑making technology. Rodriguez emphasized that AI has evolved from a nascent field into a pervasive force affecting housing, jobs, health care, and insurance, and argued that statutes must keep pace to ensure transparency and guard against discrimination. Coleman highlighted that the bill reflects years of work to craft a balanced framework that protects consumers, mandates openness about how important decisions are made, and remains reasonable for businesses to comply with. Their statements underscore a shared commitment to harnessing AI’s benefits while mitigating its risks.

Defining Automated Decision‑Making Technology and Consequential Decisions
The legislation defines ADMT as any technology that automatically processes personal data and produces an output used to make, guide, or assist a decision concerning an individual. It further clarifies that a “consequential decision” is one that affects an individual’s access to, eligibility for, or compensation related to education, employment, housing, financial or lending services, insurance, healthcare services, or essential government services. By tethering the regulation to these high‑impact areas, the bill targets scenarios where algorithmic outcomes can profoundly shape life opportunities and well‑being, while leaving lower‑stakes uses (such as recommendation engines for entertainment) outside its scope.

Notice Requirements and Consumer Awareness
To promote transparency, SB26-189 obliges deployers—entities that actually use an ADMT—to provide a clear, conspicuous notice whenever a consumer interacts with technology covered by the bill. This notice must inform the individual that an automated system is being employed to influence a consequential decision. The requirement applies regardless of whether the outcome is favorable or adverse, ensuring that consumers are aware of the role of algorithms in processes that affect their rights and interests. Early awareness is intended to empower individuals to scrutinize decisions and seek redress when necessary.

Response to Adverse Outcomes: Explanation and Information Request
If an ADMT yields an adverse outcome—such as a denial of housing, employment, or health coverage—the deployer must furnish the consumer with a plain‑language description of the technology’s role in the decision. Additionally, the consumer gains the right to request further information about the decision within a 30‑day window. This dual‑pronged approach seeks to demystify opaque algorithmic processes while giving affected persons a concrete avenue to understand why a particular result was produced and what data or factors drove it.

Rights to Data Correction and Human Review
Beyond explanation, the bill grants consumers two substantive remedies when faced with an adverse ADMT decision. First, they may request correction of any factually inaccurate personal data that the system used. Second, they are entitled to a meaningful human review of the decision, ensuring that a qualified person can reassess the outcome and consider contextual factors that the algorithm might have missed. These protections aim to reduce the risk of erroneous or biased outcomes persisting unchecked and to restore a level of accountability that purely automated processes may lack.

Developer Disclosure Obligations Effective 2027
Starting January 1, 2027, ADMT developers must provide deployers with a comprehensive disclosure package. This includes a description of the technology’s intended uses, the categories of personal data used to train the model, known limitations and risks, and instructions for appropriate use and required human oversight. Developers must also communicate any updates or modifications to the ADMT as they occur. By placing these duties on developers, the legislation ensures that downstream users receive the information necessary to deploy the technology responsibly and to comply with the notice and remediation requirements imposed on them.

Attorney General Rulemaking and Enforcement Mechanism
The Colorado Attorney General (AG) is tasked with adopting detailed rules that clarify the disclosure requirements following an adverse outcome; these rules must be finalized by December 31, 2026. Enforcement of SB26-189 will be conducted exclusively through the Colorado Consumer Protection Act, with violations deemed deceptive trade practices. The AG’s authority includes investigating complaints, issuing cease‑and‑desist orders, and seeking civil penalties. Centralizing enforcement under the AG aims to provide a consistent, expert‑driven approach to overseeing compliance across industries.

Cure Period and Absence of Private Right of Action
Before pursuing enforcement, the AG must give the alleged violator—whether a developer or deployer—a 60‑day notice and an opportunity to cure the violation, provided a cure is feasible. This notice‑and‑cure mechanism encourages voluntary compliance and allows businesses to rectify issues without immediate litigation. Notably, the bill deliberately does not create a private right of action; individuals cannot sue developers or deployers directly under SB26-189. Instead, redress must flow through the AG’s enforcement actions, a design choice intended to avoid fragmented litigation while still providing a robust deterrent.

Liability Allocation Under Anti‑Discrimination Law
SB26-189 ties liability for anti‑discrimination violations to existing statutes, specifically the Colorado Anti‑Discrimination Act (CADA). If an ADMT contributes to a discriminatory outcome, fault is apportioned between the developer and the deployer based on their relative responsibility. This proportional approach recognizes that both parties may contribute to harmful outcomes—developers through biased training data or flawed model design, and deployers through improper implementation or insufficient oversight. By linking liability to CADA, the bill leverages established civil rights protections while clarifying how responsibility is shared in the AI supply chain.

Contractual Limitations, Relation to Prior Law, and Legislative Outlook
The legislation explicitly prohibits contracts between developers and deployers from indemnifying a party against liability under CADA that arises solely from that party’s own actions. This clause prevents businesses from shifting accountability away from the entity whose conduct directly caused a discriminatory harm. SB26-189 also repeals the 2024 first‑of‑its‑kind high‑risk AI consumer protection law, incorporating many of the recommendations produced by the governor’s AI task force over the preceding six months. Having been assigned to the Senate Business, Labor and Technology Committee, the bill awaits its committee hearing; stakeholders can track its progress online as Colorado moves toward implementing one of the nation’s most detailed state‑level frameworks governing automated decision‑making.
