Key Takeaways
- Senate Bill 435 would impose notice, bias‑audit, explanation, and human‑review requirements on employers using AI for hiring, scheduling, performance evaluations, and monitoring.
- The bill applies to all employers “doing business in the state,” covering both private companies and public‑sector agencies.
- For state and municipal employers, decisions about AI become mandatory subjects of collective bargaining, giving unions a formal role in technology adoption.
- Union leaders support the measure as a way to protect jobs, wages, and benefits from being undermined by automation.
- Implementation would cost taxpayers more than $722,000 per year (plus fringe benefits) for state agencies, with additional administrative burdens for local governments.
- State agencies warn the bill could limit their ability to buy or keep modern software, increase costs, and hinder modernization efforts.
- Business groups argue the regulations are overly broad, costly, and likely to stifle innovation while exposing employers to litigation.
- The legislation highlights a broader policy challenge: balancing worker protection with organizational flexibility in an era where AI is embedded in everyday software tools.
Overview of Senate Bill 435
Senate Bill 435, currently on the Connecticut Senate calendar, would create a comprehensive framework governing the use of artificial intelligence in the workplace. The proposal targets AI systems employed in hiring, employee scheduling, performance evaluations, and workplace monitoring. If enacted, it would require employers to inform workers when AI is involved in employment‑related decisions, conduct annual bias audits, provide clear explanations for those decisions, and offer a pathway for employees to request human review. The bill also establishes a private right of action, allowing employees and their unions to sue in Superior Court for alleged violations. By setting these baseline standards, the legislature aims to increase transparency and accountability in AI‑driven employment practices while giving workers a mechanism to challenge potentially discriminatory or unfair outcomes.
Core Requirements for Employers
Under SB 435, any employer “doing business in the state” must notify employees whenever an AI system is used to make or influence employment decisions. Annual bias audits would be mandated to detect disparate impacts based on protected characteristics such as race, gender, age, or disability. When AI contributes to a decision—such as denying a promotion or adjusting a schedule—the employer must furnish a comprehensible explanation of how the AI reached its conclusion. Employees would then have the right to request a human review of that decision, ensuring that automated outcomes are not final without oversight. Finally, the private right of action empowers workers and their unions to pursue legal remedies if they believe the employer has failed to meet any of these obligations, potentially leading to damages, injunctive relief, or attorney’s fees.
Scope: Public and Private Employers
The bill’s language is deliberately broad, applying to any entity conducting business within Connecticut, regardless of size or sector. This means private corporations, nonprofit organizations, and all levels of government—state agencies, municipalities, and school districts—would be subject to the same notification, audit, explanation, and review requirements. By sweeping in both public and private employers, the legislation seeks to create a uniform baseline of AI governance across the state’s labor market. However, the universal applicability also raises concerns about the administrative burden on small businesses and the potential for overlapping obligations in sectors where multiple governmental layers intersect.
AI and Collective Bargaining in the Public Sector
A distinctive feature of SB 435 is its treatment of AI as a mandatory subject of collective bargaining for public‑sector employers. Under the proposal, decisions to adopt, modify, or discontinue AI systems used for hiring, scheduling, performance evaluations, or monitoring would have to be negotiated with unions representing state and municipal employees. This places technology choices alongside traditional bargaining topics such as wages, hours, and working conditions. The intent is to give workers a formal voice in how emerging tools are deployed, ensuring that automation does not erode job security or bargaining unit work without union consent. By embedding AI oversight within the collective‑bargaining process, the bill attempts to align technological change with existing labor‑relations structures.
Union Testimony and Objectives
Union leaders have voiced strong support for the bill, framing it as a safeguard against job displacement and wage suppression. Zak Leavy, Deputy Director of AFSCME Council 4, emphasized that the legislation would prevent AI from being used to cut wages, fringe benefits, or non‑overtime hours, and would bar the technology from assuming duties normally performed by bargaining‑unit members. Ed Hawthorne, President of the Connecticut AFL‑CIO, echoed this sentiment, stating that the bill makes AI use a mandatory bargaining subject and prohibits employers from leveraging technology to undermine wages, reduce hours, or replace union work. Travis Woodward, President of CSEA SEIU Local 2001, added that workers must have a voice in implementing AI tools through collective bargaining to ensure the technology supports rather than supplants them. Collectively, the testimony underscores a priority: protecting the size and strength of unions by preventing AI from eroding the very jobs that define membership.
Costs and Implementation Challenges
The fiscal note attached to SB 435 estimates annual implementation costs of roughly $422,000 for the Department of Administrative Services (DAS) and more than $300,000 for the Department of Labor, plus associated fringe‑benefit expenses. These figures cover activities such as approving independent auditors, monitoring compliance, and providing guidance to covered employers. Because the bill also applies to municipalities, local governments could incur additional administrative and legal expenses as they establish internal processes for notices, audits, and union negotiations. Critics argue that these costs, while perhaps modest at the state level, could become burdensome for cash‑strapped towns and school districts, diverting resources from core services to compliance‑related tasks.
Concerns from State Agencies
State agencies tasked with enforcing the bill have raised operational concerns. DAS warned that Section 14, which restricts the use of certain AI components, could significantly limit the technology the state can purchase, given that AI elements now appear in most commercial software and hardware. The agency cautioned that the bill might force the state to halt use of already‑acquired products until they receive legislative approval, potentially disrupting ongoing projects. DAS also argued that the proposal could impede modernization efforts, increase costs, and hinder the state’s ability to keep pace with technological advancements. The Office of Policy and Management noted that existing collective‑bargaining agreements already provide strong, enforceable protections for employees, suggesting that added statutory requirements might create unnecessary complexity without improving outcomes. Labor Commissioner Danté Bartolomeo highlighted that her agency lacks in‑house AI expertise and would face a significant fiscal impact to build the capacity needed to carry out the bill’s mandates.
Business and Industry Response
Business groups have largely opposed SB 435, arguing that its scope is overly broad and its compliance demands burdensome. The Connecticut Business and Industry Association warned that the bill creates a complex regulatory framework that would make compliance extraordinarily difficult, discourage innovation, and expose employers to significant litigation risk. Local chambers, such as the Connecticut River Valley Chamber of Commerce, labeled the proposal “unworkable and costly,” predicting substantial compliance burdens that could deter investment and slow economic growth. Healthcare providers, represented by the Connecticut Hospital Association, claimed the bill would be operationally unworkable and could disrupt workforce management amid existing staffing shortages. The Lumber Dealers Association of Connecticut cautioned that the definition of automated decision systems is so expansive it could capture routine business technologies, subjecting everyday tools to costly audits, compliance obligations, and lawsuit exposure. Across these responses, a common theme emerges: applying extensive regulatory requirements to widely used AI‑enabled tools risks increasing costs, slowing adoption, and creating legal uncertainty for employers of all sizes.
A Broader Policy Question
SB 435 epitomizes the ongoing challenge of regulating emerging technologies in a way that protects workers while preserving organizational flexibility. Artificial intelligence is increasingly embedded in everyday software—ranging from applicant‑tracking systems to scheduling platforms—making it difficult to delineate where “AI use” begins and ends. Private employers would confront new compliance duties, mandatory audits, and the heightened prospect of lawsuits. Public agencies would face those same burdens, plus the added obligation to negotiate AI‑related decisions with unions. Even the agencies expected to enforce the law warn that it could slow modernization, increase expenses, and complicate the use of tools already woven into standard business software. Earlier this session, lawmakers briefly inserted union‑focused language into an unrelated online‑safety bill (SB 5) before removing it, illustrating a broader trend of extending labor‑policy considerations into diverse legislative areas. As Connecticut debates SB 435, the central issue is not whether AI should be governed, but how to strike a workable balance between oversight, worker protection, and the practical realities of a rapidly evolving technological landscape.