The AI Regulation Battlefront


Key Takeaways

  • The lack of federal regulation on AI has led to a surge in state-level legislation, with over 1,000 AI bills introduced and nearly 40 states enacting over 100 laws in 2025.
  • Efforts to protect children from chatbots may inspire rare consensus, with states passing child safety laws that require AI companies to verify users’ age, offer parental controls, and undergo independent child-safety audits.
  • States will also try to regulate the resources needed to run AI, including bills requiring data centers to report on their power and water use and foot their own electricity bills.
  • Tech titans will continue to use their deep pockets to crush AI regulations, while super PACs funded by pro-regulation organizations will back candidates who favor reining in the industry.
  • The rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America’s borders, for years to come.

Introduction to AI Regulation
Growing concern about the potential harms of artificial intelligence (AI) to mental health, jobs, and the environment has fueled demand for regulation. With Congress failing to act, states have stepped in to keep the AI industry in check. In 2025, state legislators introduced over 1,000 AI bills, and nearly 40 states enacted over 100 laws, according to the National Conference of State Legislatures. This surge in state-level legislation is a clear sign that the public is no longer willing to wait for federal action.

The Role of States in Regulating AI
States are taking the lead in regulating AI, with a particular focus on protecting children from the potential harms of chatbots. The recent settlement involving Google and Character Technologies, the startup behind the companion chatbot Character.AI, along with a lawsuit filed by the Kentucky attorney general alleging that the company's chatbots drove children to suicide and other forms of self-harm, has brought fresh attention to the need for regulation. OpenAI and Meta face similar lawsuits, and more are expected to follow. Without AI laws on the books, it remains to be seen how product liability laws and free speech doctrines apply to these novel dangers. As a result, states will move to pass child safety laws, which are exempt from proposed bans on state AI regulation.

Child Safety Laws and AI Regulation
The proposed Parents & Kids Safe AI Act in California, backed by OpenAI and the child-safety advocacy group Common Sense Media, is a significant step toward regulating how chatbots interact with children. The measure would require AI companies to verify users' age, offer parental controls, and undergo independent child-safety audits. If passed, it could serve as a blueprint for states across the country seeking to crack down on chatbots. The bill is a rare example of consensus between tech companies and advocacy groups, and it underscores the priority both sides place on protecting children from AI's potential harms.

Regulating the Resources Needed to Run AI
States will also try to regulate the resources needed to run AI, starting with data centers. Fueled by widespread public backlash, states will introduce bills requiring data centers to report on their power and water use and to foot their own electricity bills, a step toward curbing the industry's environmental footprint and holding it accountable. Additionally, labor groups may float AI bans in specific professions if AI starts to displace jobs at scale. And a few states concerned about the catastrophic risks posed by AI may pass safety bills mirroring California's SB 53 and New York's RAISE Act.

The Battle for AI Regulation
The battle over AI regulation is being fought not only in state capitals but also in the courts and on the campaign trail. Tech titans will continue to use their deep pockets to crush AI regulations: Leading the Future, a super PAC backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz, is trying to elect candidates who endorse unfettered AI development to Congress and state legislatures. On the other side, super PACs funded by Public First, an organization run by Carson and former Republican congressman Chris Stewart of Utah, will back candidates who advocate for AI regulation. This clash of moneyed interests underscores how contested, and how consequential, the issue has become.

The Future of AI Regulation
In 2026, the slow, messy process of American democracy will continue to grind on, with states playing a crucial role in shaping the future of AI regulation. The rules written in state capitals could decide how the most disruptive technology of our generation develops far beyond America's borders, for years to come. With states taking the lead, the likely result is a patchwork of rules across the country, with some states regulating AI aggressively and others taking a more hands-off approach. Ultimately, the future of AI regulation will depend on policymakers' ability to balance the technology's benefits against its risks and harms.
