Key Takeaways
- Senate Bill 189 would replace Colorado’s original 2024 AI regulation, delaying its effective date to January 2027.
- The bill does not require developers to disclose how AI systems make decisions, but it mandates that organizations using AI notify consumers when AI is involved in consequential decisions (e.g., hiring, loans, housing) and provide an appeal process.
- AI developers must share with deployers information about intended use, limitations, risks, training data, and known harmful uses; deployers must give consumers ways to request human review and to verify the data used by AI.
- A three‑year “right‑to‑cure” provision lets violators fix shortcomings before facing civil penalties; after three years, the exemption expires and violations become subject to enforcement.
- The measure aims to balance industry growth with consumer protection, reflecting a compromise sought by lawmakers, the governor, and a working group convened by Gov. Jared Polis.
- The U.S. Department of Justice has joined a lawsuit filed by Elon Musk’s xAI, arguing Colorado’s AI law is unconstitutionally vague and burdensome for small businesses.
Background of Colorado’s AI Legislation
Colorado became the first state to pass a comprehensive law regulating artificial intelligence in 2024, aiming to set standards for transparency, accountability, and consumer protection. The original statute was slated to take effect in February 2026 but was postponed to June 2026 to give lawmakers more time to negotiate between consumer advocacy groups, which deemed the rules too weak, and technology companies, which argued the requirements were overly burdensome and would stifle innovation. The delay reflected a recognition that finding a workable middle ground would require further deliberation and stakeholder input.
Introduction of Senate Bill 189
On Friday, the Colorado Senate introduced Senate Bill 189, the third legislative attempt to revise the 2024 AI law. Sponsored by Senate Majority Leader Robert Rodriguez (D‑Denver) and joined by Senate President James Coleman, House Majority Leader Monica Duran, and Assistant House Majority Leader Jennifer Bacon, the bill seeks to narrow the scope of the original regulation while preserving core consumer safeguards. Rodriguez described the measure as a “notice bill” that focuses on informing individuals when AI is used in consequential decisions rather than demanding detailed technical disclosures from developers.
Core Disclosure and Notification Requirements
Under Senate Bill 189, companies and other organizations that deploy AI systems must notify consumers whenever AI is involved in making decisions that significantly affect them—such as hiring, credit approval, loan underwriting, or housing applications. The notice must be clear and conspicuous, allowing individuals to understand that an algorithm played a role in the outcome. Importantly, the bill does not require developers to reveal the inner workings, weighting, or proprietary logic of their AI models; instead, the emphasis is on transparency about AI’s involvement and the availability of recourse.
Appeal and Human Review Mechanisms
Consumers affected by AI‑driven decisions would be granted the right to appeal those outcomes. Deployers must provide a straightforward process for requesting a human review or reconsideration of the automated decision. Additionally, individuals would be able to examine the specific data points—such as credit scores, employment histories, or demographic information—that the AI used to reach its conclusion, enabling them to verify accuracy and challenge any erroneous inputs. These provisions aim to empower consumers while placing a modest operational burden on businesses that use AI.
Obligations for AI Developers
While the bill eases disclosure duties for developers regarding model internals, it imposes new informational responsibilities on those who create AI systems. Developers must supply deployers with documentation detailing the intended use of the technology, known limitations, potential risks, and any harmful or inappropriate applications identified during testing. They must also provide documentation and notices describing the data sets used to train the model. This information transfer is designed to help deployers understand the context in which the AI operates and to mitigate misuse downstream.
Right‑to‑Cure Provision and Its Sunset
A notable compromise in Senate Bill 189 is the inclusion of a three‑year “right‑to‑cure” period. During this window, organizations that violate the law—by failing to notify consumers of AI use or by not offering an appeal process—can rectify the deficiency without incurring civil penalties under the Colorado Consumer Protection Act. After three years, the exemption expires, and any subsequent violations would be subject to enforcement and fines. Senator Rodriguez argued that this temporary safe harbor encourages compliance while giving companies time to adapt their practices, though some industry leaders, such as Ibotta CEO Bryan Leach, contend the sunset is problematic and hope lawmakers will reconsider its timing.
Liability Allocation Between Developers and Deployers
The legislation also clarifies how liability is shared when AI causes harm. Developers would be shielded from enforcement actions arising from improper uses of their technology by deployers, unless they contributed to the misuse. When both parties share fault, penalties must be apportioned based on each party’s relative share of responsibility. This approach seeks to prevent developers from being held liable for scenarios they could not control while ensuring that deployers remain accountable for how they implement and monitor AI systems in real‑world settings.
Enforcement Role of the Attorney General’s Office
Senate Bill 189 retains the Colorado Attorney General’s Office as the primary enforcer of the AI law. The office will be responsible for investigating complaints, issuing guidance, and pursuing civil penalties for non‑compliance. This continuity ensures that a single state agency maintains oversight, providing a clear point of contact for both businesses seeking clarification and consumers alleging violations.
Legal Challenges and Federal Intervention
The bill’s introduction coincides with ongoing litigation concerning Colorado’s original AI statute. In April 2025, Elon Musk’s xAI filed a lawsuit asserting that the state’s law is unconstitutionally vague and invites arbitrary enforcement. The U.S. Department of Justice subsequently joined the suit, arguing that the regulation hampers free speech, compels discriminatory outcomes, and imposes disproportionate burdens on startups and small businesses. At the request of both xAI and Attorney General Phil Weiser, the federal judge overseeing the case agreed to stay proceedings until the Attorney General’s Office finalizes its implementing rules; the judge has yet to rule on a pending request to block the law’s enforcement while the lawsuit is resolved.
Political Consensus and Outlook
Despite the contentious history surrounding AI regulation in Colorado, Senate Bill 189 reflects a concerted effort to achieve a middle ground. Sponsors acknowledge that no party is entirely satisfied with every provision, but they view the bill as a pragmatic step forward. Senator Rodriguez remarked that widespread dissatisfaction across stakeholder groups often signals a successful compromise. As the bill moves through the legislature, its ultimate fate will hinge on whether lawmakers can balance the imperatives of innovation, consumer protection, and constitutional viability amid mounting legal scrutiny.

