Key Takeaways
- President Trump, who previously championed a hands‑off AI policy, is now weighing government oversight of new AI models.
- The administration is discussing an executive order that would create an AI working group to develop a formal review process for emerging models.
- Early talks have included executives from Anthropic, Google, and OpenAI, and the proposed system resembles the United Kingdom’s safety‑standards framework.
- Motivations include preventing devastating AI‑enabled cyberattacks and harnessing AI for Pentagon and intelligence‑agency capabilities.
- Industry leaders warn that excessive regulation could blunt U.S. innovation, especially in the strategic competition with China.
- Leadership changes at the White House—most notably the departure of AI czar David Sacks—and a Pentagon‑Anthropic contract dispute are complicating policy formation.
Background on Trump’s Initial AI Stance
Upon returning to office, President Trump positioned himself as a vigorous booster of artificial intelligence, framing the technology as essential to winning the geopolitical contest against China. He swiftly rolled back a Biden‑era regulatory process that required AI developers to conduct safety evaluations and disclose models with potential military applications. Trump’s rhetoric celebrated AI as a “beautiful baby” that must be allowed to thrive without “foolish rules” or political interference, signaling a clear preference for minimal federal intervention in the sector.
Shift Toward Government Oversight
Despite his earlier laissez‑faire approach, Trump is now considering a reversal that would introduce some level of government oversight over new AI models. Administration officials, speaking on condition of anonymity, said the shift stems from mounting public concern about AI’s impact on jobs, privacy, mental health, and national security. The president’s aides acknowledge that while innovation remains a priority, certain safeguards may be necessary to avert catastrophic outcomes, especially in the realm of cybersecurity.
Details of Proposed Executive Order and AI Working Group
The White House is deliberating an executive order that would establish an AI working group tasked with examining potential oversight procedures. This group would bring together senior technology executives and federal officials to design a formal review process for forthcoming AI models. Although the exact mechanics remain under discussion, officials indicated that the working group would likely recommend a system whereby the government evaluates models before they are widely released, without necessarily blocking their deployment.
Consultations with Tech Leaders
In meetings held last week, White House officials briefed leaders from Anthropic, Google, and OpenAI on the contemplated oversight framework. Participants reported that the administration emphasized the need to balance safety with innovation, seeking input on how a review system could be structured to avoid stifling technological progress. The tech executives expressed cautious interest but also voiced concerns that overly stringent requirements could impede the rapid development cycles that have driven U.S. leadership in AI.
Comparison to UK Oversight Model
Officials noted that the proposed U.S. review process could mirror the approach being developed in the United Kingdom, where several government bodies share responsibility for ensuring that AI models meet predefined safety standards. The UK model distributes oversight among agencies specializing in cybersecurity, data protection, and emerging technologies, aiming to create a comprehensive yet flexible regime. Adopting a similar multi‑agency structure in the United States would allow the government to leverage existing expertise while avoiding the creation of a wholly new bureaucracy.
Motivations: Cybersecurity and National Security Concerns
A primary driver behind the policy reconsideration is the desire to prevent a devastating AI‑enabled cyberattack that could provoke political backlash. The administration is also evaluating whether cutting‑edge AI models could yield cyber‑capabilities useful to the Pentagon and U.S. intelligence agencies. Anthropic’s recent model, Mythos—capable of pinpointing software vulnerabilities with extraordinary precision—has exemplified the dual‑use nature of advanced AI, prompting officials to seek a mechanism that grants the government early access to such tools for defensive purposes.
Industry Pushback and Debate Over Innovation
While some officials advocate for a review system, industry representatives have warned that excessive regulation could slow U.S. innovation, particularly in the strategic race with China. Dean Ball, a former senior AI adviser in the Trump administration, characterized the challenge as a "tricky balance," noting that the technology is evolving faster than formal review procedures can adapt. Executives argue that any oversight must be narrowly tailored to address genuine risks without imposing burdensome compliance costs that could erode America's competitive edge.
Leadership Changes and Internal Disputes
The policy shift coincides with notable changes in White House leadership. In March, AI czar David Sacks, who had championed deregulation, announced his departure from the role. Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent have assumed greater influence over AI policy, though their efforts have been complicated by a public dispute between the Pentagon and Anthropic over a $200 million contract concerning AI use in warfare. The disagreement led the Pentagon to suspend use of Anthropic’s technology in March, prompting a lawsuit from the startup and creating ripple effects across agencies that had come to rely on its tools.
Implications for Federal Agencies and Standards Body
If the administration proceeds with vetting AI models, the working group would help designate which federal agencies should participate in the review process. Given the absence of a single agency responsible for all government cybersecurity work, officials suggest that the National Security Agency, the White House Office of the National Cyber Director, and the Director of National Intelligence could jointly oversee model evaluation. The group may also assess whether the Center for AI Standards and Innovation—a Biden‑era entity tasked with reviewing AI models that companies voluntarily share with the government—should be revitalized; the White House previously endorsed that role, but the center has been sidelined under Trump.
Conclusion: Balancing Regulation and Innovation
The evolving discourse within the Trump administration underscores a growing recognition that AI’s transformative potential carries significant risks that may warrant some form of governmental oversight. While the president’s earlier enthusiasm for unfettered innovation remains influential, rising anxieties about cybersecurity threats, societal impacts, and strategic competition with China are prompting a reconsideration of policy. The coming weeks will likely reveal whether the administration can craft a framework that safeguards national interests without undermining the dynamism that has propelled the United States to the forefront of AI development.