Key Takeaways
- Companies’ AI tools are highly vulnerable to cyberattacks, with many systems breaking almost immediately when subjected to adversarial scans.
- Enterprises are feeding AI tools more data, which expands the target for cybercriminals and requires strict governance protocols.
- Security personnel need to constantly test their AI systems and apply real-time defense measures to mitigate risks.
- Companies’ security policies are becoming more effective, with roughly 40% of attempted AI transactions being blocked.
- The use of AI is increasing rapidly, with a 91% increase in AI transactions from 2024 to 2025.
Introduction to AI Vulnerability
The increasing use of Artificial Intelligence (AI) tools in enterprises has also led to a growing concern about their vulnerability to cyberattacks. According to a threat report published by Zscaler, a security firm, AI tools remain highly vulnerable to cyberattacks, even as companies are using them in more ways. The report found that enterprises are feeding AI tools vastly more data, which paints an expanding target on AI platforms for cybercriminals across the globe. This highlights the need for organizations to focus on visibility, real-time defense, and consistent governance controls to protect their AI systems.
The Brittleness of AI Systems
One of the most striking findings in Zscaler’s report is the brittleness of many AI systems. The report found that AI systems break almost immediately when subjected to full adversarial scans, with critical vulnerabilities surfacing within minutes. During red-teaming exercises in 25 corporate environments, it took a median of 16 minutes for an AI system to experience its first major failure, and by 90 minutes, 90% of systems had failed. In one case, it took only a single second for a system to fail. The failures observed included biased and off-topic responses, failed URL verifications, and privacy violations. This highlights the need for security personnel to constantly test their AI systems and apply strict governance protocols to mitigate risks.
The Need for Governance and Security
The report warns that models can still be coerced into exposing sensitive data or participating in harmful workflows. In 72% of corporate environments, Zscaler’s first test of an AI system uncovered a critical vulnerability. This emphasizes the need for CISOs to recognize that critical risk is present from day one, even in mature environments. The report recommends that organizations focus on visibility, real-time defense, and consistent governance controls to protect their AI systems. This includes constantly testing AI systems, applying strict governance protocols, and ensuring that security policies are in place to block attempted AI transactions.
The Increase in AI Transactions
Zscaler’s analysis of nearly one trillion AI data transactions in its cloud environment in 2025 revealed some promising signs. The company observed a 91% increase in AI transactions from 2024, with the U.S. accounting for roughly 38% of the transactions, followed by India (14%) and Canada (5%). The finance and manufacturing sectors led the way in using AI for the third year in a row, representing 23% and 20% of AI transactions, respectively, in 2025. This highlights the increasing use of AI in various sectors and the need for organizations to ensure that their AI systems are secure and protected from cyberattacks.
The Effectiveness of Security Policies
The report found that companies’ security policies are becoming more effective, with roughly 40% of attempted AI transactions being blocked. This phenomenon reflects "governance in action… as leaders balance the tradeoff between innovation speed and risk tolerance." The 989.3 billion AI transactions that Zscaler observed in 2025 represented a significant increase from the previous year, and the company tracked activity from more than 3,400 different AI tools. This highlights the need for organizations to continue to invest in security measures and governance protocols to protect their AI systems from cyberattacks.
Conclusion and Recommendations
In conclusion, the report highlights the vulnerability of AI tools to cyberattacks and the need for organizations to focus on visibility, real-time defense, and consistent governance controls. The increasing use of AI in various sectors and the growing number of AI transactions emphasize the need for organizations to ensure that their AI systems are secure and protected from cyberattacks. The report recommends that CISOs recognize the critical risk present from day one, even in mature environments, and that security personnel constantly test their AI systems and apply strict governance protocols to mitigate risks. By following these recommendations, organizations can protect their AI systems and ensure that they are used in a secure and effective manner.


